2017-01-21 23:30:21 8 Comments
I am going to prepare some test data for a compression test. One of them is the 'worst case' test, which should make the compressor perform as badly as possible. Using random numbers to generate such a file is one idea, but the data still contains some kinds of patterns. Using 7zip to compress such a file, I get an output file which is a little bit smaller than the input file.
So I wrote a small piece of code to generate a file which does not contain any repeated two-byte pair. To make life difficult, I shuffle those byte pairs into a special order, hoping the compressor will have more difficulty finding any match, even with prediction.
#include <iostream>
#include <fstream>

int main(int argc, char* argv[]) {
    const char* filename = "c:\\_Test\\hard.dat";
    std::ofstream ofs(filename, std::ios::binary | std::ios::out);
    if (!ofs) {  // bad() would miss a failed open; operator! also checks failbit
        std::cerr << "fail to open file\n";
        return -1;
    }
    for (unsigned int i = 0; i < 0x10000; ++i) {
        unsigned int t = (i * 0xc369) & 0xFFFF;
        ofs.write((char*)&t, 2);  // writes the low two bytes; assumes little-endian
    }
    std::cout << "job done\n";
    return 0;
}
It runs on Windows, and I think it's easy to change the filename to make it work on other systems, or even take the filename as a command line argument, but that's not the point.
I tried using 7zip to compress it; both formats (7z and zip) created a file bigger than the original. I have some other compressors and will test them later.
Any suggestions and help are appreciated.
@Tiger Hwang 2017-01-22 20:52:48
Here is the new version, with one fix and some small fixes. I did the calculation wrong in the beginning: the maximum possible count of non-repeated byte pairs is 64K+1, not 128K.
It cannot be calculated with one single linear equation. To get the result, I have to use a 2D array to keep track of which pairs exist already. Each time I calculate a new byte with equation 1; when the resulting pair is already marked as used, I use equation 2 to get a new byte. By setting up different multipliers, a repeating cycle is avoided.
I had some code to check whether the output is correct, using a simple loop to compare all byte pairs. It was so slow that I removed it.
After generating the file, I used 7zip again to compress it with the 7z and zip formats. What is interesting is that with the 7z format, the file is 'compressed' to a bigger size. With the zip format, the file is not compressed but stored directly in the zip file, with additional header information.
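The approach described above (a 256×256 used-pair table, with a fallback step when equation 1 lands on a used pair) can be sketched as follows. The constants and the exact form of both equations here are my assumptions, not the author's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the described generator: emit up to n bytes such that no
// overlapping two-byte pair ever repeats. A 256x256 table records which
// pairs have been used. The multipliers 0xc3 and 0x4d are illustrative
// stand-ins for the author's "equation 1" and "equation 2".
std::vector<uint8_t> makePairUniqueStream(std::size_t n) {
    std::vector<std::vector<bool>> used(256, std::vector<bool>(256, false));
    std::vector<uint8_t> out;
    if (n == 0) return out;
    uint8_t prev = 0;
    out.push_back(prev);
    for (std::size_t i = 1; i < n; ++i) {
        // equation 1: a multiplicative step from the previous byte
        uint8_t next = static_cast<uint8_t>((prev * 0xc3 + i) & 0xFF);
        // equation 2: if (prev, next) was already used, step by a
        // different odd stride until an unused pair is found
        for (int tries = 0; used[prev][next] && tries < 256; ++tries)
            next = static_cast<uint8_t>(next + 0x4d);
        if (used[prev][next])
            break;  // every pair starting with prev is taken; give up
        used[prev][next] = true;
        out.push_back(next);
        prev = next;
    }
    return out;
}
```

Because each visited byte consumes one of its 256 possible successors, the early exit cannot trigger before at least 257 bytes have been written.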
@user1118321 2017-01-22 03:26:18
This is a cool idea! Here are some thoughts I had:
What are these magic numbers?
You have 4 magic numbers in your main loop: 0x10000, 0xc369, 0xFFFF, and 2. What do they mean? It looks like 0x10000 is the number of 2-byte words you're writing to the file. It's also the limit of a 16-bit number. It would be nice if there were a named constant for that. Perhaps something like:

or

Or something along those lines. (Technically it's 1 more than the max, so maybe something more descriptive.)

Honestly, I can live with 0xFFFF, but it wouldn't hurt to give it a name like kLSWMask (where LSW is least significant word) or something similar.

You could get rid of the 2 by making t be a uint16_t and then using sizeof(t).

That leaves 0xc369. I have no idea what it is. I assume this is some sort of linear congruential pseudo-random number generator, but I'm not really well versed in such things. How is it derived? What significance does it have? Give it a name so it's understandable to someone else 6 months from now.
I'm a big fan of self-documenting code. However, in this case, it would be nice to have at least a sentence explaining what the loop does. Coming across this code in the code base I work on would leave me scratching my head. I'd have to go back through source control comments to see if there was any clue of what it was about. Maybe just a comment like:
would really help a lot! And if you have a link or a short sentence to explain the algorithm, that would be nice, too.
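The code snippets originally shown in this answer were lost in extraction; named constants along the lines the reviewer suggests might look something like this (the names are illustrative guesses, not the reviewer's originals):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative named constants for the magic numbers discussed above.
const unsigned int kNumWords   = 0x10000;  // count of 2-byte words written
const unsigned int kLSWMask    = 0xFFFF;   // least-significant-word mask
const unsigned int kMultiplier = 0xc369;   // the unexplained LCG-style multiplier

// The loop body then reads as a small, named transformation.
uint16_t scramble(unsigned int i) {
    return static_cast<uint16_t>((i * kMultiplier) & kLSWMask);
}
```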
@Tiger Hwang 2017-01-22 11:32:08
Nice suggestion. I will update it soon. The number 0xc369 was originally used with a ^ operator, but it did not work well. I changed it to * and got a better output. I think it would be even better to use a prime number instead.
@Rakete1111 2017-01-22 15:26:20
@TigerHwang Also, the std::ios::out flag is redundant: when you create a std::ofstream, it is already specified.
https://tutel.me/c/codereview/questions/153258/create+39worst+case+test39+data+for+compression+test
Ah, then yes, iBATIS 3 might be more aggressive about using the nested
objects cache...(even without join mapping), I can't remember if iBATIS 2
would behave the same way. Might be worth a try... if it works in 2, then
I'll revisit the design to see if I can get 3 to only cache if it detects a
nested result map.
Clinton
On Tue, Sep 15, 2009 at 10:44 AM, K. Arnold <akarnold@comcast.net> wrote:
>
> I have not used iBATIS 2 to do a comparison. It is very well that the
> assumption given to me about how RowHandler works is false. I am in the
> process of trying to figure out a good limit size to pick utilizing an
> AS400
> jdbc driver working on a DB2 database.
>
> We have decided that utilizing the select statement with the limit is the
> better decision choice for us. Now we need to figure out how to optimize
> it.
>
> Any suggestions would be appreciated :ninja:
>
> Just to make sure I understand your reply for any future work. I don't
> believe I am using a join mapper. I have a POJO that maps directly to a
> custom (yet extremely complicated) sql statement. The POJO doesn't have
> any
> collections, and is basically stand alone. It is my understanding from
> your
> code, that my type of POJO isn't going to utilize the join mapper method,
> since the next POJO will be unique.
>
> Clinton Begin wrote:
> >
> > The difference between RowHandler and ResultHandler is only in name.
> They
> > did the same thing.
> > I'd be interested to know if iBATIS 2.x works for you in this regard. As
> > far
> > as I can recall, 2.x had the same design if you invoked the join
> mapper...
> >
> > See, as soon as you try to use a JOIN to load a complex object graph, we
> > need to cache the results so that iBATIS knows to append to the parent,
> > rather than instantiate a new object.
> >
> > Clinton
> >
> > On Tue, Sep 15, 2009 at 9:04 AM, K. Arnold <akarnold@comcast.net> wrote:
> >
> >>
> >> Thank you for the reply. Is it possible to make the join mapping cache
> a
> >> configurable attribute on the mapping file? This is my first time
> using
> >> Ibatis and I was confidently informed that we could walk row by row over
> >> a
> >> result set. I was told to look up the RowHandler interface. It is my
> >> understanding that in Ibatis 3 that the ResultHandler has replaced the
> >> RowHandler. Was there a specific design decision made not to allow a
> row
> >> by
> >> row walk through?
> >>
> >> As a work around I was looking into the Plugin feature. It looks like I
> >> can
> >> Intercept the resultSetsHandler method, and write my own code to process
> >> it.
> >> In theory I could utilize the DefaultResultSetHandler code and remove
> the
> >> cache feature in my implementation. However I am not sure if I have
> >> access
> >> to the same constructor parameters at the point of the intercept.
> >>
> >> There is some concern around the performance of also introducing
> multiple
> >> call backs to the Database for each "set" of data to process. I need to
> >> balance the design choice of writing my own resultSetsHandler with the
> >> number of sql calls being made to the database via an offset Select
> call.
> >>
> >> I appreciate your replies to this subject.
> >>
> >> Clinton Begin wrote:
> >> >
> >> > The nestedResultObjects is necessary for join mapping. One way to
> deal
> >> > with
> >> > this though, is to use batches of reads as well as writes. Use the
> >> > pagination facilities and possibly even the proprietary offset/limit
> >> > features of your database to grab subsets of the results.
> >> > Incidentally I'm rewriting the DefaultResultSetHandler to be easier to
> >> > understand. But I don't see the need for that cache going away
> anytime
> >> > soon...
> >> >
> >> > Clinton
> >> >
> >> > On Mon, Sep 14, 2009 at 1:47 PM, K. Arnold <akarnold@comcast.net>
> >> wrote:
> >> >
> >> >>
> >> >> I am trying to iterate over a result set of 2million records, for a
> >> large
> >> >> bulk load and transformation into a new ODS. It appears that I am
> >> >> getting
> >> >> an OutOfMemoryException because the DefaultResultSetHandler is
> caching
> >> >> the
> >> >> object in the nestedResultObjects property. Is there some property
I
> >> >> should
> >> >> set or statement/ method call I should be using that will allow me
to
> >> >> process one line at a time and not have the nestedResultObjects store
> >> >> each
> >> >> object?
> >> >>
> >> >> My goal:
> >> >> * Grab a row
> >> >> * Send it to be processed
> >> >> * Once processed, move on to the next row.
> >> >> Note: Once a row is processed I no longer need a tie back to the
> >> object.
> >> >>
> >> >>
> >> >>
> >> >> I have included the custom ResultHandler, the unit test and the
> >> >> configuration file. Please let me know if you need other
> information.
> >> >>
> >> >> package com.primetherapeutics.benplanmgr.entity.rxclaim;
> >> >>
> >> >> import org.apache.ibatis.executor.result.ResultContext;
> >> >> import org.apache.ibatis.executor.result.ResultHandler;
> >> >> import org.apache.log4j.Logger;
> >> >>
> >> >> /**
> >> >> * @author kjarnold
> >> >> *
> >> >> */
> >> >> public class GroupEligibilityResultHandler implements ResultHandler
{
> >> >> Logger logger =
> >> >> Logger.getLogger(GroupEligibilityResultHandler.class);
> >> >>
> >> >> int count = 0;
> >> >>
> >> >> public void handleResult(ResultContext context) {
> >> >> if(context.getResultObject() != null) {
> >> >> count++;
> >> >> logger.debug(count);
> >> >> }
> >> >> //context.stop();
> >> >> }
> >> >>
> >> >> public int getCount() {
> >> >> return count;
> >> >> }
> >> >>
> >> >> }
> >> >>
> >> >> @Test
> >> >> public void getGroupElibibilitiesByResultHandler() {
> >> >> Map<String, String> parameterMap = new HashMap<String,
> >> >> String>();
> >> >> parameterMap.put("gelThruDate", "1090101");
> >> >> parameterMap.put("addDate", "1090911");
> >> >> parameterMap.put("chgDate", "1090911");
> >> >> parameterMap.put("planDate", "1090101");
> >> >> try {
> >> >> GroupEligibilityResultHandler handler = new
> >> >> GroupEligibilityResultHandler();
> >> >>
> >> >>
> >> >>
> >>
> session.select("com.primetherapeutics.benplanmgr.entity.rxclaim.data.GroupEligibilityMapper.getGroupEligibilities",
> >> >> parameterMap, handler);
> >> >> logger.debug(handler.getCount());
> >> >>
> >> >> } finally {
> >> >> session.close();
> >> >> }
> >> >>
> >> >> }
> >> >>
> >> >>
> >> >> Here are my mapping files:
> >> >>
> >> >> <?xml version="1.0" encoding="UTF-8" ?>
> >> >> <!DOCTYPE configuration PUBLIC "-//ibatis.apache.org//DTD Config
> >> 3.0//EN"
> >> >> "">
> >> >> <configuration>
> >> >> <settings>
> >> >> <setting name="multipleResultSetsEnabled" value="false"/>
> >> >> <setting name="defaultExecutorType" value="BATCH"/>
> >> >> </settings>
> >> >> <mappers>
> >> >> <mapper
> >> >>
> >> >>
> >>
>
> >> >> <mapper
> >> >>
> >> >>
> >>
>
> >> >> </mappers>
> >> >> </configuration>
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >>
>
>
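Clinton's suggested batched-read strategy (paging through the results with offset/limit queries instead of holding one huge result set) can be sketched independently of iBATIS. Here fetchPage stands in for an offset/limit SQL query; all names are illustrative, not iBATIS API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of paging through a large result set in fixed-size batches so
// that at most one page is held in memory at a time. fetchPage simulates
// an offset/limit SELECT against a table held here as a plain List.
class PagedReader {
    static <T> int processAll(List<T> table, int pageSize, Consumer<T> rowHandler) {
        int processed = 0;
        for (int offset = 0; offset < table.size(); offset += pageSize) {
            List<T> page = fetchPage(table, offset, pageSize);
            for (T row : page) {
                rowHandler.accept(row);  // process one row, keep no reference
                processed++;
            }
        }
        return processed;
    }

    // Stand-in for "SELECT ... LIMIT :limit OFFSET :offset".
    static <T> List<T> fetchPage(List<T> table, int offset, int limit) {
        int end = Math.min(offset + limit, table.size());
        return new ArrayList<>(table.subList(offset, end));
    }
}
```

The trade-off discussed in the thread is visible here: memory stays bounded by pageSize, at the cost of one database round trip per page.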
http://mail-archives.apache.org/mod_mbox/ibatis-user-java/200909.mbox/%3C16178eb10909151004p5a4b0251p304603782f714f89@mail.gmail.com%3E
public abstract class Shape {
    protected String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public abstract double getSurfaceArea();

    protected double normalize(double entry) {
        if (entry < 0)
            return 0.0;
        else
            return entry;
    }

    public String toString() {
        return getName() + "surface area: " + getSurfaceArea();
    }
}
public class Square extends Shape {
    double length;

    public Square(double length) {
        this.length = length;
    }

    public double getSurfaceArea() {
        double squareSurfaceArea = length * length;
        return squareSurfaceArea;
    }
}
Anyone have any idea why the memory address would print out? I want the toString method to print out, but I can't seem to figure out the problem.
EDIT: Now it is printing out nullSurfaceArea: and then the correct number but I don't understand why it is printing out null.
This post has been edited by tkess17: 21 February 2012 - 12:18 PM
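A memory-address-style printout usually means toString() was never actually overridden (for example, a misspelled ToString would be a separate method). The "null" noted in the EDIT has a likely cause: Square never sets the inherited name field, so getName() returns null inside toString(). One suggested repair (this is my sketch, not the poster's code) is to set the name in the constructor:

```java
// Suggested repair: name is never assigned in the original Square, so
// getName() returns null inside toString(). Setting it in the constructor
// (and adding a space before "surface area") fixes the output.
abstract class Shape {
    protected String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public abstract double getSurfaceArea();
    public String toString() { return getName() + " surface area: " + getSurfaceArea(); }
}

class Square extends Shape {
    double length;
    public Square(double length) {
        this.length = length;
        setName("Square");  // without this line, toString() starts with "null"
    }
    public double getSurfaceArea() { return length * length; }
}
```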
http://www.dreamincode.net/forums/topic/267620-memory-address-printing-out/
Algorithms and OOD (CSC 207 2013F) : Readings
Summary: We consider the ways in which programmers and languages typically handle errors in programs. We then explore exceptions, Java's primary error-handling mechanism.
Important Classes
java.lang.Exception
java.lang.Integer
java.lang.NumberFormatException
java.io.IOException
java.io.BufferedReader
Prerequisites: Basics of Java. Conditionals.
Experience suggests that many things can (and do) go wrong during the execution of a program or a piece of code. At times, the problem is due to the programmer who wrote the original code. Perhaps the programmer made a mistake in the logic of the program. Perhaps the programmer misunderstood some aspect of the system (such as the limitations on fixed-size representation of numbers). Perhaps the programmer forgot a special case.
Some problems are also far outside of the programmer's control. For example, some programs fail because power fails, because someone accidentally takes a backhoe to a network cable, or because a disk becomes nonoperative.
More frequently, however, code fails because some aspect of the input is invalid. At times, programmers can predict and check for these problems in advance (e.g., before dividing by the length of a list of values in a computation of the average value, the programmer should make sure that the list is nonempty). At other times, programmers cannot easily check for problems. For example, an exponentiation method working with a fixed-size numeric representation may produce a number that is larger than can be stored, and it is both expensive (in terms of computation) and difficult (in terms of determining a reasonable way to check in advance whether or not the computation will be successful). Similarly, it doesn't seem reasonable to check whether an input can be parsed before parsing it.
As the exponentiation example suggests, programmers must think about potential problems in a number of ways. First, they must identify possible problems. Next, they must choose how to respond to the problems. Next, they must determine how and when to check for those potential problems. Finally, they must implement all of their prior decisions.
Let us consider as an example a piece of code that downloads and installs an update to a program. This code might call a helper method to read an IP address from a file, open a connection to that IP address, read a patch, and install that patch. We might express that process in pseudo-Java as
ip = file.getIPAddress();
connection = new FTPConnection(ip);
update = connection.download();
software.update(update);
update.noteSuccessfulUpdate();
Let us focus on the getIPAddress method. That method may fail for a variety of reasons. For example, the file may not exist, the program may no longer have read permission for that file, the file may not contain any more input, or the next line of input may not contain an IP address. (You can probably identify a variety of other issues.)
How can one recover from such an error? The program could use a failsafe address (one that we'd prefer not to use, but will do so if nothing else works). The program could decide not to update and try again later. It all depends on the context of the call.
However, it is rarely the case that your code should recover from an error by printing an error message for the user to read. Think about how annoying you find the error messages you get from programs. Wouldn't it be better if the programmer found a way to recover, or to delay the issue until later?
You now know what can go wrong and what you want to do about it. What comes next? You need to decide how you identify when something has gone wrong. Unfortunately, there is not a clear answer. In fact, one of the central debates in the software engineering community is when and how the code should check for problems.
One camp believes strongly in preconditions and postconditions. That is, this camp suggests that programmers should (a) carefully document what requirements must be met for the code to work successfully and what results the code produces and (b) verify (either through analysis or through code that checks that the preconditions are met) that any requirements are met before executing that section of code. One advantage of such a strategy is that it requires you to think very carefully about your code. A second advantage is that you find errors as early as possible.
If we chose such a strategy, we might update the sample code to something like the following:
if (file.isAvailable() && file.containsMoreInput() && file.nextInputIsAnIPAddress()) {
    ip = file.getIPAddress();
} else {
    // Something to deal with the problem.
}
connection = new FTPConnection(ip);
update = connection.download();
software.repair(update);
software.noteSuccessfulUpdate();
Some programmers worry that the code that checks whether or not preconditions are met places an unnecessary computational burden on the program. Consider, for example, a method that sums the numbers in a list. If we must check in advance that the list contains only numbers, we end up traversing the list twice, once to check and once to sum. It is certainly more efficient (and, to many, more natural) to check each entry in the list when we reach it and add it to the running sum. Similarly, it is likely that file.nextInputIsAnIPAddress() in the code above must read the next input, check whether it's an IP address, and then “unread” it so that getIPAddress() can again do the reading and interpretation. In addition, programmers often find that there are some preconditions which one cannot easily check in advance.
Hence, as an alternative, some software engineers advocate having methods observe errors as they do their work and return a special value to indicate inability to compute. For example, one might have the sum method return false to indicate some problem occurred (if the language is so casual about typing) or use some unlikely number, such as the smallest number, for a similar purpose. In object-oriented languages, one common technique is to return the special value null when the operation fails.
For the continuing example, we might write
ip = file.getIPAddress();
if (ip == null) {
    // Something to deal with the problem.
}
...
As this sample code suggests, a disadvantage of this special-return-value strategy is that programmers must add extra code after any method call to verify that the result of the method is not an error signal. The extra code also requires extra computation (although not very much) to check the result, even when the method works correctly.
Even more unfortunately, history suggests that many programmers are less careful than they should be, no matter which of these two techniques they use. That is, if they employ the precondition technique, they leave out the precondition tests with a note (usually implicit) that “I'll add those later.” Similarly, if they employ the special-return-value technique, they leave out the tests for special value with an assumption that they can add them in the future.
Has the software engineering community developed another, perhaps better, solution? Yes. Does Java use it? Yes. Java uses a variant of the error signaling technique that (a) provides a more uniform mechanism for indicating errors and (b) makes it difficult for programmers to delay the decision of how to handle errors. Java's error handling mechanism, the exception, is inherited from a number of other languages, particularly CLU.
In the exception model, when a method is unable to compute a result, instead of simply crashing or returning a special value, the method sidesteps the normal method return mechanism and instead invokes what is called an exception handler. We typically say that a method “throws an exception” when it fails.
In the simplest exception handling form, we might write (in pseudocode)
ip = file.getIPAddress();
handle failure in getIPAddress by {
    ip = new IPAddress("localhost");
}
connection = new FTPConnection(ip);
update = connection.download();
software.repair(update);
software.noteSuccessfulUpdate();
In some ways, this code is like the special-return-value solution. That is, it seems that you execute the code and then check afterwards whether or not it succeeded.

However, there are subtle differences, particularly in how the program decides whether or not to use the exception handler. If the execution of getIPAddress concludes with a command to return a value, then the first assignment to ip is executed and computation skips the handler, moving on to the next command (in this case, the command to create a connection). If the execution of getIPAddress finishes by throwing an exception, then the first assignment is not executed and computation moves directly to the exception handler.

Note that this solution does not require getIPAddress to return a special value to indicate failure. Instead, getIPAddress uses one command to indicate successful computation (e.g., return) and another to indicate failure. Similarly, the code that calls getIPAddress need not check the result, because it is confident that the handler gets called automatically on failure and not at all upon success.
One particularly nice aspect of exception handling in most languages is that you can permit an exception to escape from a block of code, and not just a single method. For example, if we decide to handle the failure of getIPAddress by simply skipping the download, we can write (again, in pseudocode)
block
    ip = file.getIPAddress();
    connection = new FTPConnection(ip);
    update = connection.download();
    software.repair(update);
    software.noteSuccessfulUpdate();
end block
handle failure in block by ...
In this case, when getIPAddress fails, we don't even attempt the subsequent lines (creating the connection, downloading the update, etc.).
In working with exceptions in Java, you must pay attention to three different, but related, issues: First, when you write a method that may fail, you must indicate that it may fail and how it may fail. Second, when you reach a point in the method in which the method fails, you must throw the appropriate exception. Finally, when you call methods that may fail, you must add handlers (or indicate that the caller may also fail).
To indicate that a method may fail, add throws Exception after the parameter list and before the body of a method. For example,
public IPAddress getIPAddress() throws Exception {
    ...
} // getIPAddress()
Within the body of the procedure, you will throw an exception when you encounter an error. Instead of writing return XXX, you write
throw new Exception("description of problem");
Whenever your method calls a method that may throw an exception, you must explicitly state what you want to do when the method throws an exception. You have two basic choices: You may deal explicitly with the exception or you may pass the exception on to whatever method called your method. If you do neither, the Java compiler will refuse to compile your program.
To deal explicitly with an exception, you write a try/catch block, which has the following form
try {
    // stuff that may have problems
} catch (Exception e) {
    // Handle one type of exception
}
For example,
try {
    ip = file.getIPAddress();
} catch (Exception e) {
    ip = new IPAddress("localhost");
}
There are also times in which you will decide that a failure in a method your method calls leads naturally to a failure in your method. You can explicitly pass the exception on as follows
try {
    // stuff that may have problems
} catch (Exception e) {
    throw new Exception("explanation");
}
The designers of Java decided that such code was overly verbose, and so permit a simpler mechanism for passing along exceptions. In particular, if you explicitly state that your method throws exceptions and you don't put the exceptional method in a try/catch clause, then Java assumes that when the called method throws an exception, your method should, too. For example,
public void myProc() throws Exception {
    // stuff that may have problems
} // myProc()
You may note that we've used this strategy in the past when writing main procedures and our first static procedures. Because I didn't want you to worry about exceptions until now, the throws Exception prevented Java from complaining that you were calling exceptional methods without handling their potential exceptions.
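The three obligations described above (declare that a method may fail, throw on failure, handle or propagate at the call site) can be seen in one small self-contained example. All names here are invented for illustration, not taken from the reading:

```java
// Demonstrates declaring, throwing, and handling an exception.
class ParseDemo {
    // Declares possible failure; throws when the input is not a
    // non-negative integer.
    static int parsePositive(String s) throws Exception {
        int n;
        try {
            n = Integer.parseInt(s);
        } catch (NumberFormatException nfe) {
            throw new Exception("not an integer: " + s);
        }
        if (n < 0) throw new Exception("negative: " + s);
        return n;
    }

    // Handles the exception with a fallback value instead of
    // propagating it to its own caller.
    static int parseOrDefault(String s, int fallback) {
        try {
            return parsePositive(s);
        } catch (Exception e) {
            return fallback;
        }
    }
}
```

Note that parseOrDefault needs no throws clause: it catches everything parsePositive can throw, so nothing propagates further.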
As you may have noted, one disadvantage of the basic exception handling mechanism is that it does not distinguish between things that go wrong. For example, consider the problem of reading an integer. We might express such a method as follows:
public static int promptForInt(PrintWriter pw, BufferedReader br, String prompt)
        throws Exception {
    if (prompt != null) {
        pw.print(prompt);
        pw.flush();
    }
    return Integer.parseInt(br.readLine());
} // promptForInt(PrintWriter, BufferedReader, String)
Why might this code fail? It might fail because the user entered something other than an integer (e.g., 1.42 or even something that the user thinks is reasonable, like two). It might fail because the call to readLine failed (e.g., if someone had accidentally closed the reader). It might even fail because the writer is closed (although I believe Java's current implementation of PrintWriter indicates no such error).
The reactions to the different modes of failure may be different. For example, if the readLine failed, we probably have to give up, because it's unlikely a subsequent call will succeed. However, if the user entered something that is not an integer, we can ask the user to try again.
Java deals with this multiplicity of types of exceptions by permitting more specific exceptions and exception handlers. In particular, you can write
try {
    // code that may fail in multiple ways
} catch (FirstKindOfException e1) {
    // recover from the first kind of exception
} catch (SecondKindOfException e2) {
    // recover from the second kind of exception
} catch (Exception e) {
    // recover from any remaining kinds of exceptions
}
It is then the custom for a method to indicate what particular kinds of exceptions it may throw. For promptForInt, we might write:
public static int promptForInt(PrintWriter pw, BufferedReader br, String prompt)
        throws NumberFormatException, IOException {
    if (prompt != null) {
        pw.print(prompt);
        pw.flush();
    }
    try {
        return Integer.parseInt(br.readLine());
    } catch (NumberFormatException nfe) {
        if (prompt != null) {
            pw.println("Sorry, but that was not an integer. Try again.");
            return promptForInt(pw, br, prompt);
        } else {
            throw nfe;
        }
    } catch (IOException ioe) {
        // No way to recover.
        throw ioe;
    }
} // promptForInt(PrintWriter, BufferedReader, String)
How did I know that parseInt could throw a NumberFormatException and that readLine could throw an IOException? I read the documentation.
Why haven't I worried about any other kinds of exceptions? Because no other method I've called has indicated that it can throw an exception.
You can also define your own kinds of exceptions. To do so, you create a class file, YourException.java, that looks like the following:
public class YourException extends Exception {
    public YourException() {
        super();
    }

    public YourException(String reason) {
        super(reason);
    }
} // class YourException
We will explore the ideas behind the form of this declaration in a future reading. For now, just follow the form. Once you have declared your own exception, you may throw it with
throw new YourException();
or
throw new YourException("some extra notes");
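Putting the pieces together, a custom exception following the form above can be declared, thrown, and caught in one self-contained example (checkEven and describe are invented names for illustration):

```java
// Declaring, throwing, and catching a user-defined exception, following
// the form given in the reading.
class CustomExceptionDemo {
    // Follows the two-constructor form shown above.
    public static class YourException extends Exception {
        public YourException() { super(); }
        public YourException(String reason) { super(reason); }
    }

    // Declares and throws the custom exception on invalid input.
    static int checkEven(int n) throws YourException {
        if (n % 2 != 0) throw new YourException("odd value: " + n);
        return n / 2;
    }

    // Catches the custom exception and recovers with a message.
    static String describe(int n) {
        try {
            return "half is " + checkEven(n);
        } catch (YourException e) {
            return "failed: " + e.getMessage();
        }
    }
}
```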
This work is licensed under a Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit
or send a letter to Creative Commons, 543 Howard Street, 5th Floor,
San Francisco, California, 94105, USA.
http://www.math.grin.edu/~rebelsky/Courses/CSC207/2013F/readings/exceptions.html
Point in region functions. More...
#include <grass/gis.h>
Go to the source code of this file.
Point in region functions.
This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details.
Definition in file wind_in.c.
Returns TRUE if coordinate is within the current region settings.
Definition at line 26 of file wind_in.c.
References G_get_window(), and G_point_in_window().
Returns TRUE if coordinate is within the given map region.
Use instead of G_point_in_region() when used in a loop (it's more efficient to only fetch the window once) or for checking if a point is in another region (e.g. contained with a raster map's bounds).
Definition at line 51 of file wind_in.c.
References FALSE, and TRUE.
Referenced by G_point_in_region().
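Based on the descriptions above, the bounds test presumably reduces to comparisons of the coordinate against the region edges. The following is a self-contained sketch of that logic, not the actual GRASS source; the real Cell_head struct in grass/gis.h has many more fields:

```c
#include <assert.h>

/* Minimal stand-in for the region bounds held in GRASS's Cell_head
 * (illustrative only). */
struct window_bounds {
    double north, south, east, west;
};

/* Returns 1 (TRUE) if the coordinate lies within the given bounds,
 * mirroring the documented behavior of G_point_in_window(). Fetching
 * the bounds once and reusing them is what makes the window variant
 * cheaper inside a loop. */
int point_in_window(double easting, double northing,
                    const struct window_bounds *w)
{
    if (northing > w->north || northing < w->south)
        return 0;
    if (easting > w->east || easting < w->west)
        return 0;
    return 1;
}
```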
http://grass.osgeo.org/programming6/wind__in_8c.html
.\" Copyright (c) 1994 .\" The Regents of the University of California._union.8 8.7 (Berkeley) 5/1/95 .\" .Dd May 1, 1995 .Dt MOUNT_UNION 8 .Os BSD 4.4 .Sh NAME .Nm mount_union .Nd mount union filesystems .Sh SYNOPSIS .Nm mount_union .Op Fl bfr .Op Fl o Ar options .Ar directory .Ar uniondir .Sh DESCRIPTION The .Nm mount_union command attaches .Ar directory above .Ar uniondir in such a way that the contents of both directory trees remain visible. By default, .Ar directory becomes the .Em upper layer and .Ar uniondir becomes the .Em lower layer. .Pp The options are as follows: .Bl -tag -width indent .It Fl b Invert the default position, so that .Ar directory becomes the lower layer and .Ar uniondir becomes the upper layer. However, .Ar uniondir remains the mount point. .It Fl f Get the files from the lower layer to upper layer on first access to the file. .It Fl o Options are specified with a .Fl o flag followed by a comma separated string of options. See the .Xr mount 8 man page for possible options and their meanings. .It Fl r Hide the lower layer completely in the same way as mounting with .Xr mount_null 8 . .El .Pp To enforce filesystem security, the user mounting the filesystem must be superuser or else have write permission on the mounted-on directory. .Pp Filenames are looked up in the upper layer and then in the lower layer. If a directory is found in the lower layer, and there is no entry in the upper layer, then a .Em shadow directory will be created in the upper layer. It will be owned by the user who originally did the union mount, with mode .Dq rwxrwxrwx (0777) modified by the umask in effect at that time. .Pp. .Pp Except in the case of a directory, access to an object is granted via the normal filesystem access checks. For directories, the current user must have access to both the upper and lower directories (should they both exist). 
.Pp
Requests to create or modify objects in
.Ar uniondir
are passed to the upper layer with the exception of a few special
cases.
An attempt to open for writing a file which exists in the lower layer
causes a copy of the
.Em
.Dv EROFS .
.Pp
The union filesystem manipulates the namespace, rather than
individual filesystems.
The union operation applies recursively down the directory tree
now rooted at
.Ar uniondir .
Thus any filesystems which are mounted under
.Ar uniondir
will take part in the union operation.
This differs from the
.Em union
option to
.Xr mount 8
which only applies the union operation to the mount point itself,
and then only for lookups.
.Sh EXAMPLES
The commands
.Bd -literal -offset indent
mount -t cd9660 -o ro /dev/cd0a /usr/src
mount -t union -o /var/obj /usr/src
.Ed
.Pp
mount the CD-ROM drive
.Pa /dev/cd0a
on
.Pa /usr/src
and then attaches
.Pa /var/obj
on top.
For most purposes the effect of this is to make the
source tree appear writable
even though it is stored on a CD-ROM.
.Pp
The command
.Bd -literal -offset indent
mount -t union -o -b /sys $HOME/sys
.Ed
.Pp
attaches the system source tree below the
.Pa sys
directory in the user's home directory.
This allows individual users to make private changes
to the source, and build new kernels, without those
changes becoming visible to other users.
Note that the files in the lower layer remain
accessible via
.Pa /sys .
.Sh SEE ALSO
.Xr intro 2 ,
.Xr mount 2 ,
.Xr unmount 2 ,
.Xr fstab 5 ,
.Xr mount 8 ,
.Xr mount_null 8
.Sh BUGS
Without whiteout support from the filesystem backing the upper layer,
there is no way that delete and rename operations on lower layer
objects can be done.
.Dv EROFS
is returned for this kind of operations along with any others
which would make modifications to the lower layer, such as
.Xr chmod 1 .
.Pp
Running
.Xr find 1
over a union tree has the side-effect of creating
a tree of shadow directories in the upper layer.
.Sh HISTORY
The
.Nm mount_union
command first appeared in
.Bx 4.4 .
http://opensource.apple.com//source/diskdev_cmds/diskdev_cmds-491.3.3/mount_union.tproj/mount_union.8
In a world where we hear and talk a lot about making code run concurrently or in parallel, there's sometimes a bit of confusion between the two. We often use one term when referring to the other, or even use them interchangeably. Let's shed some light on the matter.

When we say that we have concurrency in our code, we mean that we have tasks running in periods of time that overlap. That doesn't mean they run at the exact same time. When we have parallel tasks, they do run at the same time.

In a multi-core world it might seem that concurrency doesn't make sense but, as with everything, we should pick the right approach for the job at hand. Imagine, for example, a very simple web application where one thread handles requests and another one handles database queries: they can run concurrently. Parallelism has become very useful in recent times, in the Big Data era, where we need to process huge amounts of data.

Let's see an example of each, run them, and compare run times.
Concurrent:
from threading import Thread

LIMIT = 50000000

def cycle(n):
    while n < LIMIT:
        n += 1

t1 = Thread(target=cycle, args=(LIMIT / 2,))
t2 = Thread(target=cycle, args=(LIMIT / 2,))
t1.start()
t2.start()
t1.join()
t2.join()
Parallel:
from multiprocessing import Process

LIMIT = 50000000

def cycle(n):
    while n < LIMIT:
        n += 1

p1 = Process(target=cycle, args=(LIMIT / 2,))
p2 = Process(target=cycle, args=(LIMIT / 2,))
p1.start()
p2.start()
p1.join()
p2.join()
Now, the times to run:
$ time python concurrent.py

real    0m4.174s
user    0m3.729s
sys     0m2.272s

$ time python parallel.py

real    0m1.764s
user    0m3.422s
sys     0m0.027s
As we can see, the parallel code runs much faster than the concurrent one, which makes sense given what was said previously, doesn't it? In this example, we can only gain time if the tasks run simultaneously.
Your programming language of choice will give you the tools needed to implement both approaches. Analyze your problem, devise a strategy and start coding!

P.S. Please note that a plain sequential implementation would run faster than the concurrent one due to Python's GIL.
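The P.S. can be checked directly. Here is a single-threaded baseline for the same counting workload (a sketch, not part of the original post): with CPython's GIL, the two threads in concurrent.py never execute Python bytecode at the same time, so a plain loop typically finishes faster than the threaded version.

```python
# sequential.py: single-threaded baseline for the same counting workload.
# Under CPython's GIL, CPU-bound threads cannot run bytecode simultaneously,
# so this version usually beats the Thread-based one.

LIMIT = 50000000

def cycle(n):
    # count up to LIMIT; return the final value so the result can be checked
    while n < LIMIT:
        n += 1
    return n

# do the same total amount of work as the two threads/processes above
cycle(LIMIT // 2)
cycle(LIMIT // 2)
```

Timing it with time python sequential.py on CPython should come in below the threaded version, though still above the multiprocessing one on a multi-core machine.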
https://mccricardo.com/concurrent-vs-parallel/
|
The copy-and-swap idiom implements both copy and move assignment with a single assignment operator. This assignment operator takes its argument by value, making use of the existing copy and move constructor implementations.
To implement the assignment operator, we simply need to swap the contents of *this and the argument, other. When other goes out of scope at the end of the function, it will destroy any resources that were originally associated with the current object.
To achieve this, we define a swap function for our class, which itself calls swap on each of the class's members. We use a using-declaration inside swap to allow a type's own swap to be found via argument-dependent lookup before falling back to std::swap; this is not strictly necessary in our case, because we are only swapping a pointer, but is good practice in general. Our assignment operator then simply swaps *this with other.
The copy-and-swap idiom has inherent strong exception safety because all allocations (if any) occur when copying into the other argument, before any changes have been made to *this. It is, however, generally less optimized than hand-written copy and move assignment operators.
Note: We can typically avoid manual memory management, and avoid having to write the copy/move constructors, assignment operators, and destructor entirely, by following the rule of zero (for example, by holding the resource in a std::unique_ptr).
#include <utility>

class resource
{
    int x = 0;
};

class foo
{
public:
    foo()
        : p{new resource{}}
    { }

    foo(const foo& other)
        : p{new resource{*(other.p)}}
    { }

    foo(foo&& other)
        : p{other.p}
    {
        other.p = nullptr;
    }

    foo& operator=(foo other)
    {
        swap(*this, other);
        return *this;
    }

    ~foo()
    {
        delete p;
    }

    friend void swap(foo& first, foo& second)
    {
        using std::swap;
        swap(first.p, second.p);
    }

private:
    resource* p;
};
https://tfetimes.com/c-copy-and-swap/
|
This week’s article is all about classes in JScript .NET. I
introduced the class statement in article four of this series. This article
goes into more detail covering class access rules, inheritance,
and polymorphism. The article includes some sample code that
demonstrates how to use the features I discuss in this
article.
Understanding class access rules
Article four of this series introduced the
class statement, member
variables, constructors, and static members. While classes are a
great way to encapsulate behavior and data into a single type,
they’re not useful if you cannot hide certain details from other
code in your application. For example, a class that has an amount member would not be aware if
some other code in the application directly modifies its value –
the class could not respond to the change and would not be able
to determine if the new value is appropriate. This is where class
member attributes come in.
By default, all class members are visible to all code within
an application. This means that any code can modify a class’s
member variables or call its member functions without
restrictions – this is similar to regular JScript objects you
create yourself.
When you create a new class, you can grant or deny access to
the class’s member variables and functions using member
attributes. There are three types of member attributes:
- public : This is the
default attribute; public
members are accessible from all code within an application
- private : These types of
members are accessible only within the class that
declares them – all other code within an application cannot
access private members.
- protected : These types of
members are accessible within the class that declares them
and classes that derive from the class that declares
them (I’ll discuss the term ‘derives’ shortly).
You can change a class member’s attributes as shown in the
following listing:
class kitchenAppliance {

    private var type : int;
    protected var weight : int;

    // public is implicit - the following declarations are public
    var height : int;

    function kitchenAppliance() {
        //...
    }
}
The listing changes the visibility of the type member to private (meaning the variable is
accessible only within the class that defines it), and the
visibility of the weight
member to protected (meaning
that it is visible to the defining class and classes that derive
from it). The height member
variable and kitchenAppliance
function (the class’s constructor) are both publicly visible.
Understanding inheritance
Most people have more than one kitchen appliance: stove,
fridge, and perhaps a dishwasher. One thing in common with each
of these appliances is that they play a role in a typical
kitchen: we use them to prepare or store food, or clean up when
we’re done. When you think of a stove, you implicitly understand
that it is a kitchen appliance, as is a fridge and dishwasher. In
effect, a stove is a type of appliance.
When you notice or discover an “is a” relationship between two
objects, they are said to share some characteristic that
establishes the relationship between them. In the case of stoves
and dishwashers, both are types of kitchen appliances
based on their role in a typical home. You can model the
relationship between a stove and a dishwasher using JScript .NET
classes and inheritance, as shown in the following listing:
class kitchenAppliance {
    //...
}

class Stove extends kitchenAppliance {
    //...
}

class Dishwasher extends kitchenAppliance {
    //...
}
The code uses the extends
keyword to express the relationship between the Stove and Dishwasher classes through the kitchenAppliance class. The Stove and Dishwasher classes derive from
the base class kitchenAppliance. You can think of the
relationship in another way: a Stove is a specialized type of kitchenAppliance.
Inheritance is a very powerful means of managing complexity
since it allows you to compose new types based on existing types,
specializing them as necessary. Consider the following
listing:
class Dishwasher extends kitchenAppliance {
    // note: this is a partial implementation of the class
    function washDishes() {
        print("Washing dishes...done");
    }
}

class Stove extends kitchenAppliance {
    // note: this is a partial implementation of the class
    function boilWater() {
        print("Mmmmm....coffee....");
    }
}
The listing demonstrates that the Dishwasher and Stove classes
each have operations that are specialized for each type:
washDishes for the Dishwasher
class and boilWater for the
Stove class. There's another aspect of
inheritance, called polymorphism, that allows you to treat
specialized types as more general types.
Understanding Polymorphism and Casting
A polymorphic object can assume different forms based on where
it exists within its class hierarchy. For example, consider the
following listing:
// Note: genericAppliance is a kitchenAppliance type
var genericAppliance : kitchenAppliance = new Dishwasher();

print("The appliance is: " + genericAppliance.applianceAsString);
// prints: The appliance is: Dishwasher
The code declares a variable that’s a kitchenAppliance type, but creates an
instance of a Dishwasher. The
code confirms that the genericAppliance variable refers to a
Dishwasher by having it print
its string representation (third line in the above listing).
JScript .NET allows you to do this because the Dishwasher is polymorphic: you can treat
it as a Dishwasher or a kitchenAppliance. Polymorphism
allows you to create general functions that take base types for
their parameters but operate on more specialized types.
There is a minor catch, though. When you call the washDishes method, and compile the code,
the JScript .NET compiler complains since it knows that a kitchenAppliance does not have a
washDishes method. You can
resolve this by casting the genericAppliance from a kitchenAppliance (its declared type) to a
Dishwasher (its actual type),
as shown in the following listing:
genericAppliance.washDishes();
// compile-time error...
// 'Objects of type 'kitchenAppliance' do not have such a member'

Dishwasher(genericAppliance).washDishes(); // ok
// prints: Washing dishes...done
The second line of code casts (transforms) the kitchenAppliance into a Dishwasher and then calls the washDishes method. The JScript .NET
compiler attempts to confirm that all casts you attempt are legal
at compile time. If the JScript .NET compiler allows a cast at
compile time which subsequently fails at runtime, it raises an
Illegal Cast exception.
Understanding how to use Inheritance and Polymorphism
As it stands, the kitchenAppliance class is adequate but has
some fundamental design problems:
- The type member variable
is a String, which means that
a “stove” and “StoVe” are different types of appliances.
- The weight member variable
is a simple integer type, which does not capture the weight’s
units (100 pounds, or 100 kilograms?). There aren’t any means of
converting units of measure from one unit to another.
- The dimensions of the kitchen appliance share the same
problem as the weight member
variable. In addition, the dimensions are all simple integers,
and do not enforce any type of constraints (you could provide the
height measurement and forget
to provide the width and depth measurements).
Here’s some code that uses a newer implementation of the
kitchen appliances sample:
var myFridge = new Fridge();

// the Fridge weighs 350 pounds
myFridge.weight = new quantity(350, new unitPounds());

// cubicDimentions has a constructor that makes it easy to record
// initial measurements
myFridge.dimentions = new cubicDimentions(2, new unitMeter(),
                                          1.5, new unitMeter(),
                                          2.5, new unitMeter());

// display the details of my Fridge...
print(myFridge);

/* output:
Fridge / 350 pounds
Height: 2 meters
Width : 1.5 meters
Depth : 2.5 meters
*/
The code demonstrates that the newer implementation is much
easier to use and the code is a lot more concise. Chances are
that you would have been able to predict most of the output based
only on the above code. The newer implementation goes a little
further – consider the following fragment (a continuation of the
above sample):
// continued...
var demoQty : quantity;

demoQty = converter.convert(myFridge.weight, new quantity(0, new unitKg()));
print("Equivilent weight in " + demoQty.units.typeAsString +
      " is: " + demoQty.amount);

/* output:
Equivilent weight in kilograms is: 159.09091186523438
*/
The fragment demonstrates a conversion class (converter) converting the fridge’s weight,
from whatever units it happens to currently be in, into a new
unit of measure (kilograms). The fragment illustrates that a
quantity type encapsulates
measurements, like weights and dimensions, making them easier to
work with and also shows how the class encapsulates units of
measure.
The converter class has a
single static function called
convert, which performs the
conversion from one unit of measure to another. Here’s what the
convert function looks like:
static function convert(fromQty : quantity, toQty : quantity) : quantity {
    var ratio : float;
    var newQty : quantity;

    ratio = toQty.units.conversionRatio(fromQty.units);
    if (ratio == -1)
        newQty = new quantity(-1, new unit());
    else
        newQty = new quantity((ratio * fromQty.amount), toQty.units);

    return newQty;
}
The function takes two parameters, both of which are quantity types and returns a new
quantity that represents the
new units and units of measure. The line that does all of the
work in the function is this one:
newQty = new quantity( (ratio*fromQty.amount),toQty.units);
The line simply multiplies a quantity’s amount member by a ratio and constructs a new quantity object. So where does the ratio come from? A quantity has two
members: an amount which is a
float type and units which is a unit type. Here’s part of what a typical
unit class looks like:
class unitPounds extends unit {
    function conversionRatio(intoUnit : unit) : float {
        if (intoUnit.type == unitsOfMeasure.kg)
            return (2.2);
        else
            return -1;
    }
}
The unit’s conversionRatio
member function takes a unit
type as a parameter and evaluates it to determine if it’s
possible to convert from itself to the units that the parameter’s
unit type uses. If the
conversion is possible, the function returns a conversion ratio
and returns -1 if the conversion is not possible. If you take a
look back at the convert
function, you’ll see that it checks for the -1 result, otherwise
it just performs the conversion using the ratio. This is a simple
approach that attempts to encapsulate as much information as
possible within each class, thereby relieving class users of
having to understand the intricacies of units and their
conversion ratios.
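The encapsulation idea itself is language-agnostic. As a rough sketch of the same quantity/unit/converter design in Python (the class and method names here are illustrative, not part of the article's JScript .NET sample):

```python
# Sketch of the article's quantity/unit/converter design: each unit knows
# its own conversion ratios, and the converter just applies them.

class Unit:
    name = "unknown"

    def conversion_ratio(self, from_unit):
        # -1 signals "no known conversion", mirroring the JScript sample
        return -1


class UnitPounds(Unit):
    name = "pounds"


class UnitKg(Unit):
    name = "kilograms"

    def conversion_ratio(self, from_unit):
        # 1 kilogram is about 2.2 pounds, so pounds -> kilograms divides by 2.2
        if isinstance(from_unit, UnitPounds):
            return 1 / 2.2
        return -1


class Quantity:
    def __init__(self, amount, units):
        self.amount = amount
        self.units = units


def convert(from_qty, to_qty):
    # as in the article, the target unit is asked for a ratio from the source unit
    ratio = to_qty.units.conversion_ratio(from_qty.units)
    if ratio == -1:
        return Quantity(-1, Unit())
    return Quantity(ratio * from_qty.amount, to_qty.units)


fridge_weight = Quantity(350, UnitPounds())
in_kg = convert(fridge_weight, Quantity(0, UnitKg()))
# in_kg.amount is about 159.09, matching the article's sample output
```

The point is the same as in the article: callers never deal with ratios directly, so adding a new unit of measure means adding one class, not touching every call site.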
The key benefit that the design demonstrates is that classes
enable you to work at very high levels of abstraction making code
easier to design, write, and maintain. Although you can just as
easily implement this sample using only procedural code, it would
get very difficult to maintain once the system uses a certain
number of units of measure and measurements.
The implementation of the classes uses a number of object
oriented programming techniques including inheritance and
overloading. Refer to the sample code that accompanies this
article for more details.
Downloading and working with the sample code
The sample code is a JScript .NET console application that
implements the functionality I described in this article. The
primary feature of the application is its code, not its output;
as a result, the code does very little with regards to generating
output or exercising the application at large. The intent of the
sample is primarily to provide an object oriented implementation
that you can study, experiment with, and extend. Once you
download the application, compile it using the command jsc appliance.js and then run the
sample by typing appliance at
the command prompt. I originally wrote the application on an
early beta version of the .NET Framework and have since modified
it to work with .NET version 1.0 – your mileage may vary if you
try the application on any beta versions.
You can download the sample code here.

Note: you may have to right-click and select "Save As"
from the pop-up menu to download the file in case it loads into
your browser when you left-click on the link.
https://www.codeguru.com/dotnet/object-oriented-features-of-jscript-net/
|
I'm removing duplicates from a File Geodatabase (FGDB) feature class using Sorter then DuplicateFilter, then writing the Unique features back to the same FGDB feature class.
I want to write the Duplicate features to a CSV log file with the date/time of the translation in the file name so I have a CSV listing the duplicates removed each time the Workspace is run.
First I tried including @Timestamp(^Y^m^d^H^M^S) in the CSV file name in the Writer, but I ended up with multiple CSVs, presumably because features were being written as they were received by the Writer. I haven't noticed this problem with FGDBs; is that because of a difference in the way the FGDB Writer works?
I then tried a Creator followed by a TimeStamper running in parallel to the actual data processing, with the output from the TimeStamper going to the CSV Writer along with the data. But what I get is two CSV files: one with no date/time stamp in the name and all the data, and another with the date/time stamp in the name but no data.
Would "Append to file" in the CSV Writer properties solve this? If so, how can I ensure that the CSV file with the date/time stamp in the name is created first?
Yes, there is more than one way. Another approach: first create the destination CSV file with a temporary name (e.g. "temp.csv") using a FeatureWriter, and then rename it to the timestamped name, i.e. move the file at the end of the translation.
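Outside of the FME workspace itself, the final rename step of that approach might look like the following plain Python sketch (the "temp.csv" name and the duplicates_*.csv pattern are assumptions for illustration, not FME API calls):

```python
# Rename a temporary CSV to a timestamped name once writing has finished,
# so each run of the workspace leaves behind one uniquely named log file.
import os
from datetime import datetime

def finalize_csv(folder, temp_name="temp.csv"):
    # build a name like duplicates_20240101123000.csv
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    final_path = os.path.join(folder, "duplicates_%s.csv" % stamp)
    os.rename(os.path.join(folder, temp_name), final_path)
    return final_path
```

In FME this could run, for example, in a shutdown Python script after the FeatureWriter has completed; the FME-specific wiring is left out here.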
You could also use a variable setter and a variable retriever
I agree with @egomm, but I would highly recommend setting "Suppliers first" to yes to avoid it blocking the data flow and consuming huge amounts of memory if you have a lot of data.
Another way to do it is to use a private Python scripted parameter to give you the timestamp at the start of the translation, e.g.
from datetime import datetime
return datetime.now().strftime('%Y%m%d%H%M%S')
Useful if you need the same timestamp many places in your workspace and you want to avoid all those FeatureMergers.
You need to merge the timestamper with the rest of your data, e.g. with a FeatureMerger
I am having the same issue and tried this solution, but I'm stumped on the join. I can choose _timestamp on the Supplier side, but what do I choose to join on the Requestor? If I choose a unique identifier, nothing outputs from the Merged port.
https://knowledge.safe.com/questions/35824/write-csv-file-where-filename-includes-datetimesta.html
|
Both x86 and x64 binaries with float and double coordinate precision are supported: all C# Wrapper libraries are compatible with double precision builds if the UNIGINE_DOUBLE definition is used.
Requirements
- The minimum supported C# version is 4.0.
- The supported platforms are Windows and Linux.
- Under Windows, Microsoft .NET Framework 4 and up is required. Under Linux, the latest Mono release version must be installed.
C# Wrapper Initialization
In your code, you should initialize the C# Wrapper by calling Wrapper.init() before Engine.init() in order to enable working with the engine before it is initialized: create packages, add functions to the interpreter and so on.
UnigineScript Interoperation
Export of constant values from C# to UnigineScript is also available.
See the examples in the source/csharp/samples/Api/Scripts/ folder for more details.
Callbacks
The callback functions can receive optional arguments of the int or IntPtr type that are used to store user data. IntPtr values can be wrapped in classes, for example:
IntPtr ptr;

// create a node and then create a Unigine object
Unigine.Object.create(new Node(ptr));
Managing Pointers
Each class of C# API has functions for managing pointers, which are managed in the same way as in C++. The full list of pointers is given in the C# API Reference article.
C# Samples
The C# samples are located in the source/csharp/samples/ folder. These samples are similar to the C++ API ones.
Each C# API sample folder contains the *.csproj MS Visual Studio C# project files in single precision.
Also there are 3 samples that demonstrate how to embed Unigine into a C# Windows Form application with:
- Direct3D11 initialized via the SlimDX library (see the source/csharp/samples/App/D3D11SlimAppForm/ folder).
- Direct3D11 initialized via the SharpDX library (see the source/csharp/samples/App/D3D11SharpAppForm/ folder).
- OpenGL initialized via the OpenTK library (see the source/csharp/samples/App/GLTKAppForm/ folder).
The samples can be compiled by typing make_x86 (or make_x64) -f Makefile.win32 on Windows (or Makefile on Linux) in the command prompt or via MS Visual Studio.
Renderer
- Added a render_simple_deferred console variable. If this console variable is set to 1, the additional SIMPLE_DEFERRED shader definition will be used. This definition is used inside the terrain shader for increasing the shading speed in case when the deferred buffer quality is not so important.
- Improved triangulation provided by ObjectVolumeBox in corner cases.
- The length parameter of the post_blur_radial shader has been renamed radius.
- Added a new materials/procedural_01 stress sample. This sample can be used for comparing C++ / C# / UnigineScript performance.
- The post_filter_wet material now creates an effect of flowing down water by default. Make sure that the auxiliary buffer is enabled.
- Increased the maximum number of ObjectMeshSkinned bones per surface to 128.
- Added the render / depth_00 sample. It shows how to use the post_deferred_depth postprocess material, which displays the linear depth.
- Improved the render_composite material. Now the image can be displayed with the alpha buffer.
- Added a new mesh_shadow_based material. It receives shadows cast from objects lit by world, projected and omnidirectional lights, and has adjustable light and shadow colors. Visibility of the shadow can also be modulated by the alpha channel.
Terrain
- The terrain shader performs fast prefetching which is based on diffuse texture sampling only. Prefetching can disable complex back-to-front blending calculations. It checks each terrain material to find the one that meets the following requirements:
- The material is checked as Overlap.
- Diffuse, normal and specular scales of the material are set to 1.
- The alpha channel of the material diffuse texture is equal to 1.
- Added a new stress sample with 48 active terrain materials (stress/terrain_02).
- Increased the maximum number of per-terrain materials to 48.
UnigineEditor
Undo/Redo System
Undo/redo system has been completely refactored. Now there is a unified undo stack for all editor windows (except for Tracker for now) and plugins. Overall stability of the undo/redo operations has been drastically improved as well.
Terrain
- Fixed terrain grabber.
- Fixed blending for diffuse brushes.
- Added import of 3D texture masks.
- Fixed updating of coarse textures.
- Height limits for brushes are now calculated from the actual terrain size.
- Fixed updating of arrays.
- Fixed saving of terrain .settings files.
- Landscape plugin: added an experimental fast mode for import.
- Fixed compression and updating of textures.
City Import Plugin
- All of the nodes are recalculated before generating.
- Fixed nodes placement.
- Fixed bug with duplicates.
File Dialog
- Speeding up for large amount of files.
- Added mesh info.
- Fixed saving of LUT textures.
- Added support for TIFF files.
- Fixed saving of bookmarks.
Other
- Added the Edit -> Convert into NodeReference feature: it exports all selected nodes to the .node file and replaces them in the current world with the NodeReference to that file.
- Improved the Edit -> Group selected feature.
- Fixed terrain export to the Node Export plugin.
- Fixed Randomizer plugin crashes.
- Fixed Game Framework plugin.
- Camera settings button is disabled if the camera is locked.
- Async jobs can be interrupted now.
- Added a world_init event.
- Fixed focusing on objects with partially disabled surfaces.
- Improved loading of the FBX files: meshes with joints, but without geometry, are now loaded as animations.
- Improved the OpenFlight import plugin: added support for hidden and 2-sided flags of Face nodes.
- Default speed presets have been changed: 5, 50, 500 units/s.
- Editor camera position is stored on exit in the config file for each world. If there is no stored position in the config, the editor camera will be initialized with game camera settings.
- Added a Mesh Combiner plugin: it combines surfaces of selected objects and creates a new mesh. Surfaces that have the same material assigned are combined into one and can be seen in the Surfaces tab of the resulting mesh.
Tools
- Added support for the FBX file format to the MeshImport tool.
- Added scene rotation around the Z axis by the mouse X axis in ResourceEditor. You can rotate the scene by using the horizontal mouse wheel or by pushing the vertical mouse wheel left or right.
- Added a -define "extern_define" command line argument, which is used to pass an extern definition into the USC interpreter.
C++ API
- Added the following node interfaces:
- ObjectGui
- ObjectGuiMesh
- ObjectParticles
- ObjectVolumeBox
- ObjectVolumeSphere
- ObjectVolumeOmni
- ObjectVolumeProj
- DecalDeferredOrtho
- DecalDeferredProj
- DecalDeferredMesh
- DecalObjectOrtho
- DecalObjectProj
- DecalObjectOmni
- DecalTerrainOrtho
- DecalTerrainProj
- Field
- FieldAnimation
- FieldSpacer
- Physical
- PhysicalForce
- PhysicalNoise
- PhysicalTrigger
- PhysicalWater
- PhysicalWind
- Added generic API for the Interface plugin that provides cross-plugin availability for external windows. The plugin API functions are available in source/plugins/Interface/Interface/InterfaceBase.h. Now you can call methods of the Interface plugin from your C++ code or from other C++ plugins as follows:
- Get the Interface plugin number via the Engine::findPlugin() function.
- Get a pointer to an instance of the InterfaceBase class via the Engine::getPluginData() and cast the received value to the InterfaceBase type.
- Call the required methods.
- Added a new InterfaceWindow sample. It creates an external InterfaceWindow class via the Interface plugin.
- Added mesh streaming and file list access functions to the Unigine::FileSystem class.
- Unigine.h has been renamed UnigineEngine.h. You will have to rename this header in your projects manually.
- The Unigine::Engine::init() functions will treat the empty-string app and home_path arguments as NULL.
- Added per-vertex and per-index setter functions for the Unigine::Mesh class.
- Added the Unigine::Engine::isInitialized() and Unigine::Memory::isInitialized() functions.
- Added support for SHA1 checksum to Unigine::Checksum class.
- Added a getSHA1() function to the Buffer class. This function calculates the SHA1 hash and returns the hexadecimal string.
- Added getTranslate(), getRotate(), getScale() functions to the Unigine::mat4 and Unigine::dmat4 structures.
- Added a new Types sample. It shows how to convert user defined classes into the script variables.
UnigineScript
- The call() container function can be used to call multiple user/extern functions with the same number of arguments by using their identifiers. For example, if you have an array that contains identifiers of different functions with the same number of arguments, you can call all of them at once as follows:

Source code (UnigineScript)
void foo_1() {
    log.message(__FUNC__ + ": called\n");
}
void foo_2() {
    log.message(__FUNC__ + ": called\n");
}
void foo_3() {
    log.message(__FUNC__ + ": called\n");
}

// declare an array of function identifiers
int functions[0];

int init() {
    // add the function identifiers to the array
    functions.append(functionid(foo_1));
    functions.append(functionid(foo_2));
    functions.append(functionid(foo_3));
    // call at once all the functions stored in the array
    functions.call();
}
- Added the anonymous function declaration in the style of C++ syntax. If you need to call a short function or a function that is not used elsewhere in the code (and therefore the function name is not important), you can use an anonymous function. Anonymous functions always return function identifiers. For example, the following two snippets perform the same:

Source code (UnigineScript)

call([](int a) { log.message("%d\n", a); }, 13);

Source code (UnigineScript)

print_13(int a) {
    log.message("%d\n", a);
}
call(functionid(print_13), 13);

Anonymous functions can also be used for async and array calls. For an example of using anonymous functions inside the Async class, see the systems / socket_3 sample.
- Added the delete() container function that can receive two arguments. You can delete a specified number of container elements starting with a specified position. On the delete() function call, destructors will be called and the container elements in the specified range will be deleted.
- You can write and read values with different endian notations to/from the Stream class. The default is the little-endian notation.
- Improved the blend() function of the Image class. Now you can blend images of 32-bit floating point format.
- Improved the function chaining. Now you can use this feature with the user defined classes in the script without casting the return value of the first function to the required type.
- Changed the return type of the getCRC32() and getMD5() functions of the Buffer class. Now they return the hexadecimal string instead of the integer value.
- Improved the addMeshSurface() function of the ObjectMeshStatic class. If the color array (or texture coordinates arrays) of the source or current mesh surface is empty, it will be filled with zero elements.
GUI
- Added detection of the mouse horizontal scroll for WidgetScrollBox. You can now scroll horizontally by using the horizontal mouse wheel or by pushing the vertical mouse wheel when moving left or right.
- Tooltips for WidgetTabBox are now positioned under the mouse cursor.
- Added the size constraints for displaying the target image by WidgetDialogImage. The maximum image width and height is 8192 pixels. So now 1024×1024×32 texture arrays can be visualized in the image dialog window.
- Fixed incorrect restoring of the permanent window focus under Win32.
- The menu toggle key of the system script can be configured via systemSetToggle() / systemGetToggle() system script functions. These functions can be found in the data/core/scripts/system.h file.
- Updated the widgets/ui_01 sample: now it shows how to declare and register widget callback functions in a single line of code.
- Added the flags argument to the Unigine::Widget::Window / Dialog / DialogColor / DialogFile / DialogImage / DialogMessage constructors. This argument receives one of the InterfaceWindow flags.
- Fixed the bug with inversion of a horizontal slider. If you scroll up the mouse wheel, the slider value will increase.
- Fixed the WidgetSpriteShader class: it now correctly calls blending functions and sets buffer mask parameters.
Mobile Platforms
- Restored the Android platform support. Now there are two types of Android applications:
- Native applications that have no Java code. These applications are useful for development and fast iterations. However, such applications are difficult to customize.
- Activity-based applications that use Java code for OpenGLES and input handling as before. The number of arguments that can be passed from a system script to Java code (and vice versa) has been increased to 4. Moreover, support for double arguments has been added. A user can interact with the world and editor scripts via Java functions.
- All Android-specific functions are available inside the engine.activity namespace instead of engine.tablet. The availability of these functions can be checked via the HAS_ACTIVITY definition.
- Removed the Android client application and the FileClient plugin.
- Added support for gcc-4.6 and gcc-4.8 for ARM-based Android builds.
- Added an IPHONESDK environment variable, which is required for iOS builds. The variable must be equal to your iPhone SDK version, for example, "iPhoneOS8.1.sdk".
Documentation
- Added the C# API section.
- Added the UnigineEditor / Managing Worlds article.
- Added the UnigineEditor / Scene Navigation article.
- Added the UnigineEditor / Setting Up Cameras article.
- Added the UnigineEditor / Selecting and Positioning Nodes article.
- Updated the Adding Object into a New World tutorial.
- Updated the Adding Animated Object into the Loaded World tutorial.
- Updated the Adding Scripts to the Project tutorial. Added section on adding script logic via WorldExpression objects.
Other
- Added the AppGrabber plugin that grabs the current viewport each frame. This plugin can be used for streaming data to the different broadcasting applications.
- Fixed the bug with the AppWall plugin crash on its start-up.
- Added a new bool type to store console variables of the boolean type in the configuration file. Now the bool type is used instead of the int type. WARNING: It is recommended to delete your old .cfg files to re-configure projects in order to prevent errors.
- Changed the behavior of the world_reload console command. Now it reloads the last world loaded via the world_load console command even if the loading operation has failed.
- Added a config_readonly console variable. It blocks any attempts to write data to the configuration file.
- Removed support for MS Visual Studio 2008.
- Added a 3840×2160 (4K) video mode. To set it via console, specify the following: video_mode 8.
- Increased the speed of saving RGB8 and RGBA8 images to DDS files in all of the dedicated applications.
https://developer.unigine.com/devlog/20141120-unigine-2.0-alpha2
A form field for conveniently editing a date using a calendar popup. More...
#include <Wt/Ext/DateField>
A form field for conveniently editing a date using a calendar popup.
You can set a WDateValidator to specify in more detail the valid range, and the client-side validation messages. When using a WDateValidator, however, make sure to use the same date format as the format used by the field.
Here is a snapshot taken on 01/09/2007 (shown as today), and with current value 12/09/2007 currently selected.
Return the date value.
When the date could not be parsed, an invalid date is returned (for which WDate::isValid() returns false).
Return the date format.
https://www.webtoolkit.eu/wt/doc/reference/html/classWt_1_1Ext_1_1DateField.html
Hi all,
I have a question I'd like to ask regarding JFrames and JMenuBars. I have a main window class which extends JFrame, and in that frame I have created a JMenuBar. I have created other JFrames which open after JButtons are clicked, and I found that, in order to show the same JMenuBar that is in the first window, I have to recreate the same methods in each JFrame file. This feels like unnecessary duplication of the same code when I could have one menu bar common to all my JFrames, but how do I do this?

I have tried several things to save myself having to code the same methods in each JFrame, but have not come up with a successful solution: I can create the JMenuBar in the first JFrame, but it will not show in the next JFrame.
Code :
public class FirstWindow extends JFrame implements ActionListener {
    protected JMenuBar myMenuBar = new JMenuBar();
    protected JMenu myMenuFile = new JMenu("File");
    protected JMenuItem myMenuItemExit = new JMenuItem("Exit");
    private JButton btn;
    protected JPanel pnl;

    public FirstWindow() {
        setTitle("First Window");
        setSize(700, 430);
        setJMenuBar(myMenuBar);
        this.getContentPane().setBackground(Color.yellow);
        buildMenu();
        setVisible(true);
        setResizable(false);
        pnl = new JPanel();
        btn = new JButton("Switch to Next Window");
        btn.addActionListener(this);
        pnl.add(btn, BorderLayout.CENTER);
        add(pnl, BorderLayout.CENTER);
    }

    protected void buildMenu() {
        myMenuFile.add(myMenuItemExit);
        myMenuItemExit.addActionListener(this);
        myMenuBar.add(myMenuFile);
    }

    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == btn) {
            String msg = "Add an exit here";
            JOptionPane.showMessageDialog(this, msg, "Exit Message", JOptionPane.OK_OPTION);
        }
    }
}
Code :
public class TestExtend extends JFrame {
    public TestExtend() {
        setTitle("Test Frame 1");
        setSize(700, 430);
    }

    public TestExtend(Mgw w) {
        w.buildMenu();
    }
}
Code :
public class MainP {
    public static void main(String[] args) {
        FirstWindow firstwin = new FirstWindow();
        firstwin.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }
}
From my code above, the new window will show, but without the JMenuBar which I created in the first one.
Just to clarify, I would like to know if there is a way to save myself coding the same JMenuBar in each frame I create by sharing the one from the first frame?
Thanks if you can help
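One detail worth knowing: a Swing component can only belong to one container at a time, which is why a single JMenuBar instance will not show in two frames at once. A common workaround is to build a fresh bar for each frame from one shared factory method. A minimal sketch (the MenuFactory name and the menu contents are mine, not from this thread):

```java
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JMenuItem;

public class MenuFactory {
    // Builds a fresh menu bar; every frame gets its own instance,
    // but the construction code lives in one place.
    public static JMenuBar createMenuBar() {
        JMenuBar bar = new JMenuBar();
        JMenu file = new JMenu("File");
        file.add(new JMenuItem("Exit"));
        bar.add(file);
        return bar;
    }

    public static void main(String[] args) {
        // In each frame's constructor you would call:
        //   setJMenuBar(MenuFactory.createMenuBar());
        JMenuBar bar = createMenuBar();
        System.out.println("menus: " + bar.getMenuCount());
    }
}
```

Each frame's constructor would then call setJMenuBar(MenuFactory.createMenuBar()) instead of repeating the menu-building methods.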
http://www.javaprogrammingforums.com/%20awt-java-swing/5887-jframes-jmenubars-printingthethread.html
I'm trying to understand socket programming in python by experimenting a little. I'm trying to create a server that you can connect to by telnet and that echoes what you type in the telnet prompt. I don't want to start using threads just yet. This is my code.
import socket

host = "127.0.0.1"
port = 8080

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((host, port))
sock.listen(1)

remote, address = sock.accept()
print "Connection from", address

while True:
    data = sock.recv(1024)
    remote.send(data)
The server starts without errors, but when I connect with the telnet client I get, on the client side:
> telnet 127.0.0.1 8080 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Connection closed by foreign host.
and on the server side:
> python server_test.py Connection from ('127.0.0.1', 35030) Traceback (most recent call last): File "server_test.py", line 16, in <module> data = sock.recv(1024) socket.error: [Errno 107] Transport endpoint is not connected
Can anyone tell me what this error comes from and how to solve it?
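The traceback points at sock.recv: recv() is being called on the listening socket, which is never connected (errno 107 is ENOTCONN on Linux). Data should be read from remote, the connected socket returned by accept(). A minimal Python 3 sketch of the corrected echo loop, wired to test itself on an ephemeral port so it is self-contained (the echo_once name is mine):

```python
import socket
import threading

def echo_once(server):
    # accept one connection and echo what the client sends;
    # note recv() is called on the accepted socket, not the listener
    remote, address = server.accept()
    data = remote.recv(1024)
    remote.sendall(data)
    remote.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
echoed = client.recv(1024)
client.close()
t.join()
server.close()

print(echoed)  # b'hello'
```

In the original script, changing `data = sock.recv(1024)` to `data = remote.recv(1024)` is the essential fix.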
https://www.daniweb.com/programming/software-development/threads/371134/socket-error-errno-107-transport-endpoint-is-not-connected
Finding the nth root of a periodic function
Posted March 05, 2013 at 02:06 PM | categories: nonlinear algebra | tags: heat transfer | View Comments
Updated March 05, 2013 at 03:12 PM
There is a heat transfer problem where one needs to find the n^th root of the following equation: \(x J_1(x) - Bi J_0(x)=0\) where \(J_0\) and \(J_1\) are the Bessel functions of zero and first order, and \(Bi\) is the Biot number. We examine an approach to finding these roots.
First, we plot the function.
from scipy.special import jn, jn_zeros
import matplotlib.pyplot as plt
import numpy as np

Bi = 1

def f(x):
    return x * jn(1, x) - Bi * jn(0, x)

X = np.linspace(0, 30, 200)
plt.plot(X, f(X))
plt.savefig('images/heat-transfer-roots-1.png')
You can see there are many roots to this equation, and we want to be sure we get the n^{th} root. This function is pretty well behaved, so if you make a good guess about the solution you will get an answer, but if you make a bad guess, you may get the wrong root. We examine next a way to do it without guessing the solution. What we want is the solution to \(f(x) = 0\), but we want all the solutions in a given interval. We derive a new equation, \(f'(x) = 0\), with initial condition \(f(0) = f0\), and integrate the ODE with an event function that identifies all zeros of \(f\) for us. The derivative of our function is \(df/dx = d/dx(x J_1(x)) - Bi J'_0(x)\). It is known that \(d/dx(x J_1(x)) = x J_0(x)\), and \(J'_0(x) = -J_1(x)\). All we have to do now is set up the problem and run it.
from pycse import *  # contains the ode integrator with events
from scipy.special import jn, jn_zeros
import matplotlib.pyplot as plt
import numpy as np

Bi = 1

def f(x):
    "function we want roots for"
    return x * jn(1, x) - Bi * jn(0, x)

def fprime(f, x):
    "df/dx"
    return x * jn(0, x) - Bi * (-jn(1, x))

def e1(f, x):
    "event function to find zeros of f"
    isterminal = False
    value = f
    direction = 0
    return value, isterminal, direction

f0 = f(0)
xspan = np.linspace(0, 30, 200)

x, fsol, XE, FE, IE = odelay(fprime, f0, xspan, events=[e1])

plt.plot(x, fsol, '.-', label='Numerical solution')
plt.plot(xspan, f(xspan), '--', label='Analytical function')
plt.plot(XE, FE, 'ro', label='roots')
plt.legend(loc='best')
plt.savefig('images/heat-transfer-roots-2.png')

for i, root in enumerate(XE):
    print 'root {0} is at {1}'.format(i, root)

plt.show()
root 0 is at 1.25578377377
root 1 is at 4.07947743741
root 2 is at 7.15579904465
root 3 is at 10.2709851256
root 4 is at 13.3983973869
root 5 is at 16.5311587137
root 6 is at 19.6667276775
root 7 is at 22.8039503455
root 8 is at 25.9422288192
root 9 is at 29.081221492
You can work this out once, and then you have all the roots in the interval and you can select the one you want.
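If pycse is not available, the same roots can also be bracketed directly by scanning for sign changes and bisecting. The sketch below is an alternative, not the post's method: it assumes only the standard library, approximating the Bessel functions with the integral representation \(J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n t - x\sin t)\,dt\) via the trapezoidal rule (the names bessel_j and bisect are mine):

```python
import math

Bi = 1  # Biot number, as in the post

def bessel_j(n, x, steps=1000):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    # evaluated with the composite trapezoidal rule
    h = math.pi / steps
    total = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - x * math.sin(math.pi)))
    for k in range(1, steps):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def f(x):
    return x * bessel_j(1, x) - Bi * bessel_j(0, x)

def bisect(a, b, tol=1e-9):
    # refine a bracketed sign change of f
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if (fa < 0) != (fm < 0):
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# scan [0, 30] for sign changes, then polish each bracket
xs = [0.1 * i for i in range(301)]
fs = [f(x) for x in xs]
roots = [bisect(xs[i], xs[i + 1])
         for i in range(len(xs) - 1)
         if (fs[i] < 0) != (fs[i + 1] < 0)]

print(len(roots), roots[0])
```

The bracketing step guarantees we get every root in the interval in order, so picking the n^{th} one is just indexing into the list.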
Copyright (C) 2013 by John Kitchin. See the License for information about copying.
http://kitchingroup.cheme.cmu.edu/blog/2013/03/05/Finding-the-nth-root-of-a-periodic-function/
%I
%S 1,2,4,6,8,9,12,15,16,18,20,24,25,27,28,30,32,35,36,40,42,44,45,48,49,
%T 50,52,54,55,56,60,63,64,65,66,70,72,75,77,78,80,81,84,85,88,90,91,95,
%U 96,98,99,100,102,104,105,108,110,112,114,115,117,119,120
%N Positive integers x that are (x-1)/log(x-1) smooth, that is, if a prime p divides x, then p <= (x-1)/log(x-1).
%C This sequence is a monoid under multiplication, since if x and y are terms in the sequence and p < x/log(x), then p < xy/log(xy). However, if a term in the sequence is multiplied by a number outside the sequence, the result need not be in the sequence.
%e 1 is in the sequence because no primes divide 1, 2 is in the sequence since 2 divides 2 and 2 < 2/log(2) ~ 2.9, but 10 is not in the sequence since 5 divides 10 and 5 is not less than 10/log(10) ~ 4.34.
%t ok[n_] := AllTrue[First /@ FactorInteger[n], # Log[n] <= n &]; Select[ Range[120], ok] (* _Giovanni Resta_, Jun 30 2018 *)
%o (PARI) isok(n) = my(f=factor(n)); for (k=1, #f~, if (f[k,1] >= n/log(n), return(0))); return (1); \\ _Michel Marcus_, Jul 02 2018
%Y Cf. A050500.
%K nonn
%O 1,2
%A _Richard Locke Peterson_, Jun 29 2018
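The %t Mathematica and %o PARI programs above translate directly into Python. The sketch below (function names are mine) applies the same membership test, p * log(n) <= n for every prime p dividing n, using a smallest-prime-factor sieve:

```python
import math

def smallest_prime_factor_sieve(limit):
    # spf[n] = smallest prime factor of n
    spf = list(range(limit + 1))
    for i in range(2, int(limit ** 0.5) + 1):
        if spf[i] == i:                      # i is prime
            for j in range(i * i, limit + 1, i):
                if spf[j] == j:
                    spf[j] = i
    return spf

def prime_divisors(n, spf):
    ps = set()
    while n > 1:
        ps.add(spf[n])
        n //= spf[n]
    return ps

def ok(n, spf):
    # n qualifies when every prime p | n satisfies p * log(n) <= n,
    # i.e. p <= n / log(n); n = 1 qualifies vacuously
    return n == 1 or all(p * math.log(n) <= n for p in prime_divisors(n, spf))

spf = smallest_prime_factor_sieve(120)
terms = [n for n in range(1, 121) if ok(n, spf)]
print(terms[:10])  # -> [1, 2, 4, 6, 8, 9, 12, 15, 16, 18]
```

The output reproduces the %S/%T/%U data above; odd primes such as 3, 5, and 7 are excluded because p * log(p) > p.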
http://oeis.org/A316350/internal
In today’s Programming Praxis exercise, our goal is to implement an algorithm that calculates the Bernoulli numbers, and one that uses them to quickly calculate the sum of the mth powers of the numbers 1 through n. Let’s get started, shall we?
A quick import:
import Data.Ratio
To calculate the Bernoulli numbers I initially used the naive version, which simply uses the given mathematical formula. This is quick enough for the test case of 1000 numbers, but too slow for the test case that has a million, so we have to do some memoization. A closer look at the formula reveals that any row in the table depends only on the previous row. Since for the end result we are only interested in the last row, we can use iterate to produce the rows of the table. The value of a given column depends only on the number directly above it and the one to the upper right, so we can use a simple zip to calculate the new row.
a :: (Integral a, Integral b) => a -> a -> Ratio b
a i j = iterate (\xs -> zipWith (*) [1..] $ zipWith (-) xs (tail xs))
            (map (1 %) [1..]) !! fromIntegral i !! fromIntegral j
With this function calculating the Bernoulli numbers is trivial.
bernoullis :: (Integral a, Integral b) => a -> [Ratio b]
bernoullis upto = map (flip a 0) [0..upto]
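The same row recurrence can be cross-checked in Python with exact rational arithmetic. This is my sketch, not part of the original exercise: each new row multiplies the pairwise differences of the previous row by 1, 2, 3, …, exactly as the Haskell iterate does, and the Bernoulli number B_i is the head of row i:

```python
from fractions import Fraction

def bernoullis(upto):
    # Mirrors the Haskell: iterate (\xs -> zipWith (*) [1..]
    #   (zipWith (-) xs (tail xs))) starting from [1/1, 1/2, 1/3, ...];
    # B_i is the first element of row i.
    row = [Fraction(1, j + 1) for j in range(upto + 2)]
    result = []
    for _ in range(upto + 1):
        result.append(row[0])
        row = [(j + 1) * (row[j] - row[j + 1]) for j in range(len(row) - 1)]
    return result

print(bernoullis(6))
```

The output matches the expected list from the test case below: [1, 1/2, 1/6, 0, -1/30, 0, 1/42].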
For the algorithm we also need to calculate binomial coefficients, i.e. the number of different ways you can choose k objects from a group of size n.
choose :: Integral a => a -> a -> Ratio a
choose n k = product [1..n] % (product [1..n-k] * product [1..k])
And some more executable math for the function that calculates the sum of powers.
s :: Integral a => a -> a -> Ratio a
s m n = 1 % (m+1) * sum [choose (m+1) k * a k 0 * (n%1)^(m+1-k) | k <- [0..m]]
We have one test case to test if the algorithm works correctly and one to judge the speed.
main :: IO ()
main = do print $ bernoullis 6 == [1, 1%2, 1%6, 0, -1%30, 0, 1%42]
          print $ s 10 1000 == 91409924241424243424241924242500
          print $ s 100 1000000
The program runs in about 150-170 ms, so we get the same speed as the Scheme version. Good enough for me.
Tags: bernouilli, bonsai, code, Haskell, kata, numbers, powers, praxis, programming, sum
http://bonsaicode.wordpress.com/2011/02/11/programming-praxis-sums-of-powers/
how to insert elements in vector? (shireen, August 19, 2011 at 10:39 PM)
where this is correct syntax for inserting element V=inserElementAt(n,m) V is the class object in my program

java basics (samreen shaikh, September 14, 2011 at 7:09 PM)
can u explain a simple example of vector class normal and simple.....for only undrstandidng the concept of vector.....plzzzzz

java (Madhu N, November 1, 2011 at 3:14 PM)
i want to update my knowledge in java please update new trends

vector (MukhtarAhmad, January 8, 2012 at 3:54 PM)
what is it vector?

i don't no what is it (mukhtarahmad, January 8, 2012 at 3:57 PM)
what is it vector

Vectors at JAVA (Ray, March 16, 2012 at 12:52 AM)
Hi, your code is good but I got a question, How can I fill the Vector by keyboard ?, for example, if I need put in there any notes about any number of students, I tried by Scanner method but it dosen´t work, thanks in advance. Regards

vector (Rohit, March 20, 2012 at 9:27 PM)
is it possible to store elements in vector with specified index?

vector (praveen, June 18, 2012 at 5:18 PM)
what is actually vector?????can you please explain diagrammatically it like stack and linked-list?
http://www.roseindia.net/discussion/18701-Vector-Example-in-java.html
add_key — add a key to the kernel's key management facility
#include <keyutils.h>
There are a number of key types available in the core key management code, and these can be specified to this function.
The keyring doesn't exist.
The keyring has expired.
The keyring has been revoked.
The payload data was invalid.
Insufficient memory to create a key.
The key quota for this user would be exceeded by creating this key or linking it to the keyring.
The keyring wasn't available for modification by the user.
Although this is a Linux system call, it is not present in libc but can be found rather in libkeyutils. When linking, -lkeyutils should be specified to the linker.
keyctl(1), keyctl(2), request_key(2)
http://man.linuxexplore.com/htmlman2/add_key.2.html
- Write a Java program to search an element in an array using linear search algorithm.
Given an integer array of size N and a number K, we have to search for K in the given array. If K is present in the input array, we print its index.
Algorithm to search an element in an unsorted array using linear search
Let inputArray be an integer array having N elements and K be the number to search for.
- Using a for loop, we will traverse inputArray from index 0 to N-1.
- For every element inputArray[i], we will compare it with K for equality. If equal we will print the index of in inputArray.
- If even after full traversal of inputArray, non of the element matches with K then K is not present in inputArray.
Java program to search an element in an array
package com.tcc.java.programs;

import java.util.*;

public class ArrayLinearSearch {
    public static void main(String args[]) {
        int count, num, i;
        int[] inputArray = new int[500];
        Scanner in = new Scanner(System.in);

        System.out.println("Enter number of elements");
        count = in.nextInt();
        System.out.println("Enter " + count + " elements");
        for (i = 0; i < count; i++) {
            inputArray[i] = in.nextInt();
        }

        System.out.println("Enter element to search");
        num = in.nextInt();

        // Compare each element of array with num
        for (i = 0; i < count; i++) {
            if (num == inputArray[i]) {
                System.out.println(num + " is present at index " + i);
                break;
            }
        }
        if (i == count)
            System.out.println(num + " not present in input array");
    }
}

Output
Enter number of elements 6 Enter 6 elements 3 8 7 2 9 4 Enter element to search 7 7 is present at index 2
Enter number of elements 7 Enter 7 elements 3 8 12 8 11 0 -4 Enter element to search 5 5 not present in input array
Recommended Posts
https://www.techcrashcourse.com/2016/04/java-program-to-search-element-in-array.html
03 August 2010 13:16 [Source: ICIS news]
LONDON (ICIS)--Dow Chemical swung to a net profit of $659m (€501m) in the second quarter, compared with a loss of $332m in the same period of last year, on stronger volumes and price gains, the US chemical major said on Tuesday.
Reported sales for the three months ended 30 June rose 20% year on year to $13.6bn, Dow said.
Excluding acquisitions and divestitures, Dow said its sales were up 26% from the 2009 second quarter, driven by price gains of 19% and volume growth of 7%.
Dow added that gains were up in all operating segments and in all geographic areas, with particular strength in North America and Europe.
Emerging geographies collectively posted volume gains nearly double that of the total company, Dow said.
“Dow continued its earnings growth trajectory in the second quarter, with double-digit sales gains, continued progress in growth synergies and above-target structural cost reductions driving higher results,” said Dow chairman and CEO Andrew Liveris.
“Strong demand growth in North America and
Additionally, Liveris said that with the completed divestment of Styron, Dow exceeded its goal of divesting $5bn in non-strategic assets in less than two years.
“With the proceeds of these divestments and positive operating cash flows, we made further meaningful progress in strengthening our balance sheet,” he added.
Dow said its second-quarter underlying earnings before interest, tax, depreciation and amortisation (EBITDA) were $1.9bn.
“Improved demand and price gains overcame a $100m increase in turnaround costs and a $1.6bn increase in purchased feedstock and energy costs,” it said.
Dow’s combined performance segments delivered more than 70% of EBITDA in the quarter, the group added.
Looking ahead, Liveris said the company continued to have confidence that momentum was gradually building.
Dow, he said, expects a sustained global recovery led by
“Dow has continued to experience high demand for products in downstream, market-driven sectors. Against this backdrop, we remain focused on executing our strategic and financial plan,” Liveris said.
($1 = €0.76)
http://www.icis.com/Articles/2010/08/03/9381838/dow-chemical-swings-to-net-profit-of-659m-as-sales-rise-20.html
JavaScript in the JVM
A few years back, I read a blog post by a fellow named Steve Yegge, which talked about JavaScript on the JVM. The post is long, but well worth the read. At one point, he talks about the benefits of scripting on the JVM, and all of what he wrote and talked about back then is still valid today.
First, if there ever has been a computing problem, there is a solution for it in Java. Many times, the Java implementation of some library will be superior to what you might cobble together from other sources (see Apache Lucene). Why not leverage all this prior work? On top of the availability of all this code, in .jar format, it is portable between operating systems and CPUs – it almost runs everywhere.
Second, the JVM itself has a considerable number of man hours of research and development applied to it and it is ongoing. When they figure out how to make something smaller/faster/better for the JVM, it benefits everything that uses the JVM – including JavaScript execution and the libraries we’d call from JavaScript. We also get the benefit of Java’s excellent garbage collection schemes.
Third, the JVM features native threads. This means multiple JVM threads can be executing in the same JavaScript context concurrently. If v8 supported threads in this manner, nobody would be talking about event loops, starving them, asynchronous programming, nested callbacks, etc. Threads trivially allow your application to scale to use all the CPU cores in your system and to share data between the threads.
I’ll add a fourth, that you can compile your JavaScript programs into Java class files and distribute your code like you would any Java code.
So let’s have a look at JavaScript and the JVM.
Introducing Mozilla Rhino
Rhino is an open source JavaScript engine written in Java, and is readily available for most operating systems.
For OSX, I use HomeBrew to install it:
$ brew install rhino
For Ubuntu, the following command should work:
$ sudo apt-get install rhino
Once installed, we can run it from the command line and we get a REPL similar to what we’re used to with NodeJS:
$ rhino
Rhino 1.7 release 4 2012 06 18
js> var x = 10;
js> print(x)
10
js>
You can run rhino from the command line passing it the name of a JavaScript to run:
$ cat test.js
print('hello');
$ rhino test.js
hello
$
Rhino has a number of built-in global functions, but I'll only elaborate on a few. We've already seen that the print() function echoes strings to the console window.
The load() function loads and runs one or more JavaScript files. This is basically the server-side equivalent of the HTML <script> tag.
$ rhino
Rhino 1.7 release 4 2012 06 18
js> load('test.js')
hello
js>
The spawn(fn) function creates a new thread and runs the passed function (fn) in it.
js> spawn(function() { print('hello'); });
Thread[Thread-1,5,main]
js> hello
js>
Note the hello was printed on what looks like the command line. That was printed from the background thread and I had to hit return to see the next prompt. The Thread[Thread-1,5,main] was the return value of the spawn() method; it is a variable containing a Java Thread instance.
Spawning threads is that easy!
The JVM has first class synchronization built in. In Java, you use the synchronized keyword something like this:
//java
public class bar {
    private int n;
    //...
    public synchronized int foo() {
        return this.n;
    }
}
This allows only one thread at a time to enter the foo() method. If a second thread attempts to call the function while a first has entered it (but not returned yet), the second thread will block until the first returns.
Rhino provides a sync(function [,obj]) method that allows us to implement synchronized functions. The equivalent JavaScript looks like:
//javascript
function bar() {
    this.n = ...;
}
bar.foo = sync(function() {
    return this.n;
});
If we spawn() two threads that call bar.foo(), only one will be allowed to enter the function at a time.
Synchronization is vital for multithreaded applications to avoid race conditions where one thread might be modifying a variable/array/object while another thread is trying to examine it. The state of the variable/array/object is inconsistent until the modification is complete.
To recap so far, Rhino provides print(), load(), spawn(), and sync() functions, among others. In practice, I only see the load() and sync() methods being necessary because Rhino and other JVM JavaScript implementations allow us to “script Java” from JavaScript programs.
Scripting Java
Rhino makes scripting Java rather easy. It exposes a global variable Packages that is a namespace for every Java package, class, interface, etc., on the CLASSPATH.
The Java 7 API JavaDocs for the java.lang.System class can be found here:
On that page is the definition of the field "out", and an example of its use.
From rhino, we can access System.out.println():
js> Packages.java.lang.System.out.println
function println() {/*
    void println(long)
    void println(int)
    void println(char)
    void println(boolean)
    void println(java.lang.Object)
    void println(java.lang.String)
    void println(float)
    void println(double)
    void println(char[])
    void println()
*/}
js>
What this is showing is that there are a number of implementations of println() in Java with different signatures. Rhino is smart enough to choose the right implementation based upon how we call it. Also note that the types in the println() signatures are Java native types.
For example:
js> Packages.java.lang.System.out.println('hello')
hello
js>
Rhino also exposes a global java variable which is identical to Packages.java – this is a handy way to access the builtin Java classes.
A minimal console class
We can now use load() to load a primitive JavaScript console implementation:
$ cat console.js
console = {
    log: function(s) {
        java.lang.System.out.println(s);
    }
};
$ rhino
Rhino 1.7 release 4 2012 06 18
js> load('console.js')
js> console.log('hello')
hello
js>
Java types in JavaScript
When writing JavaScript, things work as expected. An object is an object, an array is an array, a string is a string, and so on. But when we script Java from JavaScript, our variables often are instances of Java objects. A trivial example:
js> var a = new java.lang.String('a');
js> a
a
js> // seems like a javascript string
js> typeof a
object
js> // but it's an object
js> typeof 'a'
string
js> // javascript strings are typeof string
js> var b = 'b';
js> a.getBytes()
[B@4f124609
js> b.getBytes()
js: uncaught JavaScript runtime exception: TypeError: Cannot find function getBytes in object b.
js> var c = String(a)
js> c.getBytes()
js: uncaught JavaScript runtime exception: TypeError: Cannot find function getBytes in object a.
Note that getBytes() is a method you can call on Java strings, but not on JavaScript strings. Also note that we can cast Java strings to JavaScript strings.
Fortunately, we rarely have to instantiate Java strings, but we will have to deal with binary data when scripting Java. JavaScript has no real native binary type, but we can have our variables refer to instances of Java binary types.
Java Byte Arrays
One thing we’re certainly going to do is deal with Java byte arrays. We can instantiate one (1024 bytes) like this:
js> var buf = java.lang.reflect.Array.newInstance(java.lang.Byte.TYPE, 1024);
js> buf
[B@44d4ba66
js> buf[0]
js> buf[1]
js> buf[1] = 10;
10
js> buf[0]
js> buf[1]
10
Useful example
Let’s look at how to read in a text file by scripting Java, and it does look a lot like Java. All the Java classes we use are in the package java.io and you can read up on FileInputStream, BufferedInputStream, and ByteArrayOutputStream. There are certainly many examples of their use (in Java) on the web.
$ cat cat.js
var FileInputStream = java.io.FileInputStream,
    BufferedInputStream = java.io.BufferedInputStream,
    ByteArrayOutputStream = java.io.ByteArrayOutputStream;

function cat(filename) {
    var buf = java.lang.reflect.Array.newInstance(java.lang.Byte.TYPE, 1024),
        contents = new ByteArrayOutputStream(),
        input = new BufferedInputStream(new FileInputStream(filename)),
        count;
    while ((count = input.read(buf)) > -1) {
        contents.write(buf, 0, count);
    }
    input.close();
    return String(contents.toString());
}
$ rhino
Rhino 1.7 release 4 2012 06 18
js> load('cat.js')
js> cat('console.js')
console = {
    log: function(s) {
        java.lang.System.out.println(s);
    }
};
js> var s = cat('console.js')
js> s.length
74
js>
Maybe this is a bit ugly, but we can encapsulate all the bridging between JavaScript and Java in nice JavaScript classes. Then we only need to call our JavaScript from JavaScript, without caring how Java is called or how the conversions between native JavaScript objects and Java ones are done. One thing is for sure: this seems a lot cleaner and simpler than writing C++ modules to link with NodeJS or other V8 alternatives.
In other words, we only had to write the
cat() function once. We can
load() it in any or all of our applications from now on and not have to write the interface code to Java again.
Threads without
spawn()
This example is a bit longer, but it demonstrates how to implement a Runnable interface in JavaScript.
$ cat threads.js
load('console.js');

var Thread = java.lang.Thread;

var x = 0;

function thread1() {
    console.log('thread1 alive');
    while (1) {
        Thread.sleep(10);
        console.log('thread1 x = ' + x);
        x++;
    }
}

function thread2() {
    console.log('thread2 alive');
    while (1) {
        Thread.sleep(10);
        console.log('thread2 x = ' + x);
        x++;
    }
}

new Thread({ run: thread1 }).start();
new Thread({ run: thread2 }).start();
When I run it, you can see from the output the effect of the race condition where both threads are incrementing the x variable:
$ rhino ./threads.js
thread2 alive
thread1 alive
thread2 x = 0
thread1 x = 0
thread1 x = 2
thread2 x = 2
thread2 x = 4
thread1 x = 4
thread2 x = 6
thread1 x = 6
thread1 x = 8
thread2 x = 8
thread2 x = 10
thread1 x = 10
thread1 x = 12
thread2 x = 12
thread1 x = 14
thread2 x = 15
This is why we need the
sync() function.
I’ll implement proper synchronization and we’ll see the threads cooperate.
The improved version:
$ cat threads.js
load('console.js');

var Thread = java.lang.Thread;

var x = 0;
var bumpX = sync(function() {
    return x++;
});

function thread1() {
    console.log('thread1 alive');
    while (1) {
        Thread.sleep(10);
        console.log('thread1 x = ' + bumpX());
    }
}

function thread2() {
    console.log('thread2 alive');
    while (1) {
        Thread.sleep(10);
        console.log('thread2 x = ' + bumpX());
    }
}

new Thread({ run: thread1 }).start();
new Thread({ run: thread2 }).start();
Note when we run it, the value of x increments nicely and both threads always see the volatile value.
$ rhino ./threads.js
thread1 alive
thread2 alive
thread1 x = 0
thread2 x = 1
thread1 x = 2
thread2 x = 3
thread1 x = 4
...
This version works, but it is not quite perfect. You see, the
bumpX() function returned by
sync() synchronizes on the
this object, which isn’t harmful in this example. However if we had another two threads bumping a y variable with a
bumpY() method also synchronized on
this, there’d be unnecessary contention among the 4 threads. When
thread1() calls
bumpX(), the remaining 3 threads will be blocked when they call
bumpX() or
bumpY().
The fix is:
var bumpX = sync(function() { return x++; }, x);
Note the extra argument to
sync(), the object we want to synchronize on. Now the callers that call
bumpX() will block appropriately, not affecting callers of
bumpY().
About synchronization
I wouldn’t count on any JavaScript operation to be atomic. That is,
array.pop() could in theory get interrupted by a thread switch interrupt, so if you have two threads manipulating that array, you have a seriously bad race condition. So be aware of thread safety. If you ever expect to have two threads access the same memory, synchronize around the accesses, as I demonstrated.
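The same advice, written in plain Java rather than through Rhino's sync(): the sketch below (names are mine) increments one counter with a synchronized block and one without, so the unsynchronized counter can lose updates exactly as described.

```java
public class SyncDemo {
    static int unsafe = 0;                    // incremented with no locking
    static int safe = 0;                      // incremented inside synchronized
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafe++;                         // read-modify-write race
                synchronized (lock) { safe++; }   // serialized, like sync(fn, obj)
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("safe = " + safe);     // always 200000
        System.out.println("unsafe = " + unsafe); // often less: lost updates
    }
}
```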
Extending Rhino (3rd party java)
We’re interested in calling 3rd party libraries, so here’s an example. I created a file,
Example.java and compiled it into a
.class file:
$ cat Example.java
public class Example {
    public static String foo() {
        return "foo from java";
    }
};
$ javac Example.java
$ ls -l Example.*
-rw-r--r--  1 mschwartz  staff  277 Apr 24 15:26 Example.class
-rw-r--r--  1 mschwartz  staff   83 Apr 24 15:25 Example.java
$
The rhino executable program is really a bash script that starts up the JVM (java command) with the rhino
.jar file and passes any additional command line arguments to the rhino java program.
$ cat `which rhino`
#!/bin/bash
exec java -jar /usr/local/Cellar/rhino/1.7R4/libexec/js.jar "$@"
From this we can craft our own command lines, including some that add
.jar files to the class path. To see a full description of the java command and all the command line options, enter this at your shell prompt:
$ man java
We cannot pass a
CLASSPATH via
-cp flags to the java command if we also specify
-jar. So we are going to have to use a form of the java command that specifies
CLASSPATH and the initial class/function to call. I dug into the rhino sources and found that the main function is
org.mozilla.javascript.tools.shell.Main.
Here’s the command in action:
$ java -cp ".:/usr/local/Cellar/rhino/1.7R4/libexec/js.jar" org.mozilla.javascript.tools.shell.Main
Rhino 1.7 release 4 2012 06 18
js>
We can see it is running the REPL as if we ran the rhino shell script. Now we can see if our
Example.foo() function is accessible from our JavaScript environment.
js> var x = Packages.Example.foo()
js> x
foo from java
js> typeof x
object
js> typeof String(x)
string
js> String(x)
foo from java
js>
You should note that our x variable holds a reference to a Java String, not a JavaScript string. We can pretty much use it like a JavaScript string, and Rhino does the type conversions automagically as needed.
js> var y = x + 10
js> y
foo from java10
js> typeof y
string
js> typeof x
object
js>
A brief note about the Java
CLASSPATH
We can trivially create our own shell scripts to launch rhino with our own
CLASSPATH.
It seems intuitive to me that if a directory is part of your
CLASSPATH, the Java runtime should find
.class files as well as
.jar files in that directory. But it does not work that way!
CLASSPATH may specify a directory where only
.class files are considered or it may specify
.jar files that basically act like a directory containing only
.class files.
This means if you want to use classes in two separate
.jar files, you have to include both
.jar files in the
CLASSPATH.
Introducing Nashorn
Nashorn is a completely new JavaScript engine that is officially part of the recently released Java 8.
In order to run it, I installed the Java 8 JDK on my Mac. I haven’t seen any ill effects yet, so I guess it is safe. There were some negative effects of installing Java 7 on a Mac, particularly that Java 7’s browser plugin is 64-bit only and Google Chrome is 32-bit only; you lose the ability to run Java from WWW sites in Chrome. I haven’t tested to see if this is true for Java 8, but I haven’t seen any similar warnings.
The installation process is not 100% right. There is a jjs program that we are supposed to be able to run to execute Nashorn scripts (jjs is roughly Nashorn’s version of the rhino command). After installing Java 8, jjs is not in
/usr/bin as it should be. A little bit of digging turned up the file here:
/Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin
So I made a soft link to it in
/usr/bin:
$ sudo ln -s /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/jjs /usr/bin/jjs
There is also a
/usr/bin/jrunscript and a manual page for that dated 2006. The jrunscript program appears to launch Nashorn as well. There is also a jrunscript in the same directory as jjs that is different than the one in
/usr/bin. All of this causes a lot of confusion, but I will use jjs for the rest of this article.
The jjs program presents a REPL just like rhino does:
$ jjs
jjs> print('hello')
hello
jjs> x = 10
10
jjs> x
10
jjs>
There is quite a bit of useful information about the JavaScript environment provided by Nashorn in Oracle's Nashorn documentation.
It didn’t take me very long to figure out how to get the threads demo program working. Here’s the modified source:
$ cat threads.js load("nashorn:mozilla_compat()); } } var t1 = new Thread(thread1); t1.start(); new Thread(thread2).start(); t1.join();
I had to
load("nashorn:mozilla_compat.js") to provide the
sync() function.
The
new Thread calls no longer work with what looks like a Runnable interface, or an object like:
{ run: function() { ... } }
Instead, Nashorn can figure out that Runnable has only one method (run) and that the Thread constructor requires a Runnable, so it does the right thing when you pass the constructor a JavaScript function.
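This is the same single-abstract-method (SAM) conversion that Java 8 lambdas rely on. A minimal Java sketch of the equivalent call (class and method names are mine):

```java
public class SamDemo {
    static String runOnce() throws InterruptedException {
        StringBuilder log = new StringBuilder();
        // Runnable has exactly one abstract method (run), so a lambda (or,
        // in Nashorn, a plain JavaScript function) can stand in for it.
        Thread t = new Thread(() -> log.append("ran"));
        t.start();
        t.join(); // join() also makes the worker's write visible to this thread
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce()); // prints "ran"
    }
}
```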
One other change I had to make was to call
join() on one of the threads started. Without this, jjs exited right away. This is a different behavior from rhino.
Nashorn also features a scripting mode that adds some very non-standard features to the JavaScript language. The concept is a good one if you want to use Nashorn to write shell scripts. The only problem is anything you write using these extensions will not be portable to any other JavaScript environment. For this reason, I won’t go into more depth about this feature.
Nashorn Performance
I created 2 very simple and probably worthless programs to try to get a sense of how fast Nashorn is compared to Rhino (and NodeJS/v8).
The first program simply concatenates 1 million integers into a very long string:
$ cat perf.js
var s = '';
for (var i = 0; i < 1000000; i++) {
    s += ' ' + i;
}
My trial runs follow.
rhino
$ time rhino perf.js
rhino perf.js  5.03s user 0.63s system 129% cpu 4.378 total
$ time rhino perf.js
rhino perf.js  5.07s user 0.64s system 130% cpu 4.386 total
$ time rhino perf.js
rhino perf.js  5.06s user 0.63s system 129% cpu 4.377 total
$
jjs
$ time jjs perf.js
jjs perf.js  14.80s user 0.27s system 600% cpu 2.510 total
$ time jjs perf.js
jjs perf.js  20.19s user 0.31s system 636% cpu 3.221 total
$ time jjs perf.js
jjs perf.js  15.53s user 0.26s system 611% cpu 2.580 total
$ time jjs perf.js
jjs perf.js  19.05s user 0.28s system 637% cpu 3.032 total
$ time jjs perf.js
jjs perf.js  19.30s user 0.29s system 637% cpu 3.075 total
nodejs
$ time node perf.js
node perf.js  0.29s user 0.05s system 100% cpu 0.341 total
$ time node perf.js
node perf.js  0.29s user 0.05s system 100% cpu 0.338 total
$ time node perf.js
node perf.js  0.29s user 0.05s system 100% cpu 0.338 total
$ time node perf.js
node perf.js  0.29s user 0.05s system 100% cpu 0.338 total
I happen to know that Rhino 1.7R4 is notoriously slow at string concatenation. It is much faster to
join() an array. So I created a second trial program:
$ cat perf2.js
var a = [];
for (var i = 0; i < 1000000; i++) {
    a[i] = i;
}
var b = a.join('');
This one creates an array of a million integers and joins them into one long string, doing essentially the same work as perf.js.
Here are the trial runs for perf2.js.
rhino
$ time rhino perf2.js
rhino perf2.js  1.54s user 0.14s system 240% cpu 0.698 total
$ time rhino perf2.js
rhino perf2.js  1.53s user 0.14s system 241% cpu 0.689 total
$ time rhino perf2.js
rhino perf2.js  1.53s user 0.14s system 237% cpu 0.700 total
$ time rhino perf2.js
rhino perf2.js  1.53s user 0.13s system 237% cpu 0.701 total
jjs
$ time jjs perf2.js
jjs perf2.js  7.28s user 0.19s system 438% cpu 1.704 total
$ time jjs perf2.js
jjs perf2.js  6.98s user 0.19s system 420% cpu 1.705 total
$ time jjs perf2.js
jjs perf2.js  7.89s user 0.18s system 448% cpu 1.800 total
$ time jjs perf2.js
jjs perf2.js  7.06s user 0.19s system 431% cpu 1.679 total
nodejs
$ time node perf2.js
node perf2.js  0.32s user 0.05s system 100% cpu 0.368 total
$ time node perf2.js
node perf2.js  0.33s user 0.05s system 100% cpu 0.376 total
$ time node perf2.js
node perf2.js  0.33s user 0.05s system 100% cpu 0.380 total
$ time node perf2.js
node perf2.js  0.33s user 0.05s system 100% cpu 0.380 total
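The effect these two scripts measure exists in Java itself: repeated String concatenation recopies the whole buffer on every pass, while StringBuilder appends in amortized constant time. A minimal sketch (names are mine):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        int n = 10_000;

        String slow = "";
        for (int i = 0; i < n; i++) {
            slow += " " + i;            // copies the whole string each pass: O(n^2)
        }

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(' ').append(i);   // amortized O(1) per append
        }
        String fast = sb.toString();

        System.out.println(slow.equals(fast)); // prints "true"
    }
}
```

Rhino's slow s += in perf.js versus the fast join() in perf2.js is the same trade-off one level up.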
Conclusion
Rhino is the gold standard of JavaScript for the JVM. It simply has been around for a very long time (since the 1990s) and it is feature rich and relatively bug free. Nashorn represents a new code base and new commitment by Oracle to JavaScript for the JVM. It’s brand new, and already appears to be a solid implementation in its own right. It’s only going to get better, too. Rhino is likely to run on any new release of Java for a long time to come, but it’s not as likely to get the attention to improvements as Nashorn.
The question is when is it time to ditch Rhino in favor of Nashorn? My guess is soon if Java 8 gains the adoption that I expect.
Mike Schwartz
A timer is an object of the Timer class, found in the javax.swing package. A timer fires one or more action events after a regular interval of time (in milliseconds) called the delay.
The delay is specified when the timer is created. A timer is used in a program by creating a Timer object and invoking its start() method. Each time the timer fires, an event of type ActionEvent is generated and the code in the actionPerformed() method of the corresponding listener class is executed; that class must implement the ActionListener interface. The timer can be made to fire an event only once by calling setRepeats(false). The methods stop() and restart() can be called to stop or restart the timer, respectively.
The Timer can be created using the following constructor.
Timer(int delay, ActionListener listener)
where,
delay is the time, in milliseconds, between the action events
listener is the action listener
Consider the example below. Here, the seconds 1 through 10 are printed, one per second. After 10 seconds, the program exits.
Example: A program to demonstrate the use of timer
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
class TimerExample implements ActionListener
{
    int second = 1;

    public static void main(String str[])
    {
        new TimerExample();
        while (true);   // keep main alive; the timer fires on the event dispatch thread
    }

    public TimerExample()
    {
        Timer timer = new Timer(1000, this);   // fire an ActionEvent every 1000 ms
        timer.start();
    }

    public void actionPerformed(ActionEvent e)
    {
        System.out.print(" " + second);
        second++;
        if (second == 11)
        {
            System.out.print(" ");
            System.out.print("Exit out");
            System.exit(0);
        }
    }
}
The output of the program is:
 1 2 3 4 5 6 7 8 9 10 Exit out
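The text above also mentions setRepeats(), stop(), and restart(), which the example does not use. Here is a minimal one-shot sketch (class and method names are mine): with setRepeats(false) the action event fires exactly once, even after waiting long enough for several repeats.

```java
import javax.swing.Timer;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class OneShotTimer {
    // Start a one-shot timer, wait long enough that a repeating timer
    // would have fired again, and report how many firings occurred.
    static int fireOnce(int delayMs) throws InterruptedException {
        AtomicInteger fires = new AtomicInteger();
        CountDownLatch fired = new CountDownLatch(1);
        Timer t = new Timer(delayMs, e -> {
            fires.incrementAndGet();
            fired.countDown();
        });
        t.setRepeats(false);        // fire the action event only once
        t.start();
        fired.await();
        Thread.sleep(delayMs * 3L); // a repeating timer would fire again here
        t.stop();                   // harmless for a one-shot timer
        return fires.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("fired " + fireOnce(100) + " time(s)");
    }
}
```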
Hi,
I am new to Silverlight and this website, so sorry if I am posting in the wrong section.
I am trying to create a custom border control in Silverlight 2 which acts like the Border control already existing in System.Windows.Controls, however I want it to have a different look (specifically, I am trying to draw sharp corners rather than rounded corners).
I want to be able to set the content property of this control in XAML just like the Border control, so that it is easy to add any kind of item inside it, e.g.:

<CustomBorder>
    <TextBox />
</CustomBorder>
However, when this is used and something is placed in the Content of the CustomBorder as shown in the example above, the entire actual content is replaced, so the Polygon is not shown.
I have tried binding the ContentPresenter to the Content of the actual UserControl as well, and that didn't work.
Can anyone help show me what I am doing wrong here? Any help would be appreciated
Thanks.
Why not use the silverlight 2 Border control? It's doing exactly what you want: You can put any control inside it as content.
Software Engineer, Aprimo, Inc.
Please remember to mark the replies as answers if they answered your question
I want to be able to customize the look of the Border more than you are able to with the current Border (I am under the impression you can't change the look of the Border control because it is a Decorator?).
I want the border to initially look something like this:
(this looks like a border, but doesn't have rounded corners - instead it has sharp edges on the corners)
However I want to also be able to change this easily in xaml in the future for all the CustomBorders used.
OK. I see it.
You might be better off writing this border control as a custom control instead of a UserControl. Have it inherit from ContentControl, or from FrameworkElement with the class tagged [ContentProperty("Child", true)]:
public class CustomBorder : ContentControl
{
}
Or:
[ContentProperty("Child", true)] public class CustomBorder : FrameworkElement { }
Since a Polygon cannot have content, you have to write code to position the Content/Child on top of the Polygon shape. You also need to write code to adjust the Points so the polygon has the right size, based on content.ActualWidth/ActualHeight.
Take a look at some tutorials on how to write a custom control.
OK I have been trying to do as you suggested. I have searched for tutorials clearly showing the steps involved, but haven't found any.
When trying to make a class inherit from FrameworkElement, I get an error saying that FrameworkElement has no constructors defined.
When trying to make a class inherit from CustomBorder, I am unsure how to attach items to the Control programmatically. I have tried this:
using
but it doesn't work. Can anyone point out what I am doing wrong here?
You are doing the right thing by extending the ContentControl class. But you should not add controls to it the way you have.
Custom controls in Silverlight (and WPF) are "lookless", which means that you only specify the behavior of the control, you do not hard wire the look. So your C# code should only have behavior. But you would want to attach a default template for the control by defining it in generic.xaml. This would allow the consumer of the control to use the default look if he wants, OR change the template the way he likes.
You would also have to specify a contract (UIElements, Storyboards and public properties) so that those consuming your control know what to do.
All of this can be overwhelming in the beginning. This tutorial will help.
Hope this helps,
Jim
Please MARK the replies as answers if they answered your question
thrkill — send a signal to a thread
#include <signal.h>
int
thrkill(pid_t tid, int sig, void *tcb);
The thrkill() function sends the signal given by sig to tid, a thread in the same process as the caller.
thrkill() will only succeed if tcb is either NULL or the address of the thread control block (TCB) of the target thread. sig may be one of the signals specified in sigaction(2) or it may be 0, in which case error checking is performed but no signal is actually sent.
If tid is zero then the current thread is targeted.
thrkill() will fail and no signal will be sent if:
[EINVAL]
sig is not a valid signal number.
[ESRCH]
No thread could be found corresponding to that specified by tid.
[ESRCH]
tcb is not NULL and not the TCB address of the thread with thread ID tid.
The thrkill() function is specific to OpenBSD and should not be used in portable applications. Use pthread_kill(3) instead.
The thrkill() system call appeared in OpenBSD 5.9.
Summary
A hotfix is available for Microsoft BizTalk Server 2013, Microsoft BizTalk Server 2010, and Microsoft BizTalk Server 2009.This hotfix enables support for Health Insurance Exchange (HIX) EDI transactions 005010x306 (820) and 005010x220 (834). This hotfix includes HIPAA 5010 compliant schemas that you can build and deploy in your BizTalk Electronic Data Interchange (EDI) application.
Resolution
Cumulative update information
This issue was first fixed in the following cumulative update of BizTalk Server:
More Information
An older version of the transaction 820 schema was previously included with BizTalk Server. Therefore, this hotfix has the same doctype as the existing transaction 820 schema. The new and old transaction 820 schemas cannot coexist in a BizTalk deployment. If both of the transaction 820 schemas have to be used in a BizTalk deployment, you must change the namespace of the new schema so that both schemas can be uniquely identified and used in the BizTalk application. In this case, the new schema has to be treated as a custom schema.
Status
Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
ANTLR Section Index
Is there a way to match a grammar fragment exactly n times similar to FLEX's (...)?{1,n} construct?
ANTLR currently does not have a {1,n} operator just ?, +, *. It is somewhat of a pain, but I usually just check the count during the semantic analysis or translation phase. In other words, use (...more
How do I make ANTLR generate c++ code in a (nested) namespace?
Just specify the complete 'path' to the namespace in the namespace option. delimited by '::'. e.g. namespace = "foo::bar"; would generate nested namespace calls for a namespace 'foo' an...more
How do I make ANTLR build a lexer that uses my own token class?
You can set the factory used to generate Tokens with a call to setTokenObjectFactory. Check out the java/C++ file for CharScanner. Or, tell the lexer what class you want to use: public static v...more
Where can I find examples of C++, JScript, VBScript, C, XML, Java Class grammars for ANTLR? Is there any repository out there in the internet?
Is it possible to build a (valid or invalid) sentence generator from a grammar?
The general answer to getting the set of valid sentences is "no" because there are infinitely long strings, but that only matters in theory. If you limit the input to k-token length sen...more
How does one report a bug?
For now, please post potential bugs to the ANTLR forum.
How can I force ANTLR to match the tokens in the "tokens {...}" section case-insensitive?
When you're extending the Lexer, set the options caseSensitive=false; and caseSensitiveLiterals=false;. You must also put all token literals in lowercase. E.g.: tokens { ROSE="rose"; ...more
Why doesn't the lexer match any of the literals from my "tokens" section?
The most common problem is that your lexer does not have a rule that can match a pattern that includes your literals. For example, there is no rule that can match the text for "blort", ...more
Why does ANTLR sometimes go into an infinite loop when constructing AST trees?
Turn off default AST construction for that rule eg rule was assignmentExpression : lhs:lhsExpression ASSIGN! ex:expression {#assignmentExpression = #(#[ASSIGN_EXPR,"ASSIGN"],#l...more
Can I use French (UNICODE) characters in my ANTLR grammar (versus the files I parse with it)?
As of 2.7.2, yes, you can.
Why is the (...)* loop below nondeterministic for any k (is this an example of ANTLR computing linear approximate lookahead)?
a : b S b B ;
b : A (B A)* ;
What does LL(k) means?
The first letter, L for left to right or R for right to left, denotes the order in which to scan the source text. The second letter denotes the derivation of the constructed parse tree. At each ...more
How can I throw my own exception out of a parser or tree walker?
Please see How do I signal the parser to bail out immediately upon detection of a syntax error, instead of trying to consume tokens until it resynchs? more
When should I use ANTLR instead of PERL?
When you come to your senses... ;-) It really depends on what you are trying to do. ANTLR is a parser that allows you to define vocabularies/grammars through a meta-language and then uses the gra...more
How do I distinguish keywords from identifiers and how can I have all keywords returned as KEYWORD?
T. Parr: Just reference strings in your parser and the lexer will add them to a hashtable which it checks upon each completed token. If the token matches a "keyword" it returns that tok...more
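The keyword-hashtable technique this answer describes is easy to sketch outside of ANTLR. The class, token codes, and table below are mine, not ANTLR's generated API; ANTLR fills its literals table automatically from the strings referenced in the grammar.

```java
import java.util.HashMap;
import java.util.Map;

public class KeywordLookup {
    static final int IDENT = 1;
    static final int KEYWORD = 2;

    // Literals table; ANTLR populates its equivalent from the grammar.
    static final Map<String, Integer> LITERALS = new HashMap<>();
    static {
        LITERALS.put("if", KEYWORD);
        LITERALS.put("while", KEYWORD);
    }

    // Called on each completed identifier-shaped token: if the text is in
    // the table, return the keyword token type, otherwise plain identifier.
    static int classify(String text) {
        return LITERALS.getOrDefault(text, IDENT);
    }

    public static void main(String[] args) {
        System.out.println(classify("if"));    // prints 2
        System.out.println(classify("count")); // prints 1
    }
}
```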
API Documentation
2.3
Introduction
Open Babel is a full chemical software toolbox. In addition to converting file formats, it offers a complete programming library for developing chemistry software. The library is written primarily in C++ and also offers interfaces to other languages (e.g., Perl, Python, Ruby, and Java) using essentially the same API.
This documentation outlines the Open Babel programming interface, providing information on all public classes, methods, and data. In particular, it strives to provide as much (or as little) detail as needed. More information can also be found on the main website and through the openbabel-discuss mailing list.
- Getting Started
(where to begin, example code, using Open Babel in real life, ...)
- Classes Overview
(overview the most important classes ordered by category)
- What's New in Version 2.3
(Changes since 2.2 releases)
- What's New in Version 2.2
(Changes since 2.1 releases)
- What's New in Version 2.1
(Changes since 2.0 releases)
- All Classes
(all classes with brief descriptions)
Problem
You have a class that represents some kind of text field or document, and as text is appended to it, you want to correct automatically misspelled words the way Microsoft Word's Autocorrect feature does.
Solution
Using a map (defined in <map>), strings, and a variety of standard library features, you can implement this with relatively little code. Example 4-31 shows how to do it.
Example 4-31. Autocorrect text
#include <iostream>
#include <string>
#include <map>
#include <cctype>

using namespace std;

typedef map<string, string> StrStrMap;

// Class for holding text fields
class TextAutoField {
public:
   TextAutoField(StrStrMap* const p) : pDict_(p) {}
   ~TextAutoField( ) {}

   void append(char c);
   void getText(string& s) {s = buf_;}

private:
   TextAutoField( );
   string buf_;
   StrStrMap* const pDict_;
};

// Append with autocorrect
void TextAutoField::append(char c) {

   if ((isspace(c) || ispunct(c)) &&         // Only do the auto-
       buf_.length( ) > 0 &&                 // correct when ws or
       !isspace(buf_[buf_.length( ) - 1])) { // punct is entered

      string::size_type i = buf_.find_last_of(" \f\n\r\t\v");
      i = (i == string::npos) ? 0 : ++i;
      string tmp = buf_.substr(i, buf_.length( ) - i);

      StrStrMap::const_iterator p = pDict_->find(tmp);
      if (p != pDict_->end( )) {              // Found it, so erase
         buf_.erase(i, buf_.length( ) - i);   // and replace
         buf_ += p->second;
      }
   }
   buf_ += c;
}

int main( ) {

   // Set up the map
   StrStrMap dict;
   TextAutoField txt(&dict);

   dict["taht"] = "that";
   dict["right"] = "wrong";
   dict["bug"] = "feature";

   string tmp = "He's right, taht's a bug.";
   cout << "Original: " << tmp << '\n';

   for (string::iterator p = tmp.begin( ); p != tmp.end( ); ++p) {
      txt.append(*p);
   }
   txt.getText(tmp);
   cout << "Corrected version is: " << tmp << '\n';
}
The output of Example 4-31 is:
Original: He's right, taht's a bug.
Corrected version is: He's wrong, that's a feature.
Discussion
strings and maps are handy for situations when you have to keep track of string associations. TextAutoField is a simple text buffer that uses a string to hold its data. What makes TextAutoField interesting is its append method, which "listens" for whitespace or punctuation, and does some processing when either one occurs.
To make this autocorrect behavior a reality, you need two things. First, you need a dictionary of sorts that contains the common misspelling of a word and the associated correct spelling. A map stores key-value pairs, where the key and value can be of any types, so it's an ideal candidate. At the top of Example 4-31, there is a typedef for a map of string pairs:
typedef map<string, string> StrStrMap;
See Recipe 4.18 for a more detailed explanation of maps. TextAutoField stores a pointer to the map, because most likely you would want a single dictionary for use by all fields.
Assuming client code puts something meaningful in the map, append just has to periodically do lookups in the map. In Example 4-31, append waits for whitespace or punctuation to do its magic. You can test a character for whitespace with isspace, or for punctuation with ispunct, both of which are defined in <cctype> for narrow characters (take a look at Table 4-3).
The code that does a lookup requires some explanation if you are not familiar with using iterators and find methods on STL containers. The string tmp contains the last chunk of text that was appended to the TextAutoField. To see if it is a commonly misspelled word, look it up in the dictionary like this:
StrStrMap::iterator p = pDict_->find(tmp); if (p != pDict_->end( )) {
The important point here is that map::find returns an iterator that points to the pair containing the matching key, if it was found. If not, an iterator pointing to one past the end of the map is returned, which is exactly what map::end returns (this is how all STL containers that support find work). If the word was found in the map, erase the old word from the buffer and replace it with the correct version:
buf_.erase(i, buf_.length( ) - i); buf_ += p->second;
Append the character that started the process (either whitespace or punctuation) and you're done.
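The dictionary-lookup idea is not C++-specific. Here is the same technique sketched with Java's HashMap (this sketch is mine, not from the book, and it corrects whole words rather than intercepting each appended character):

```java
import java.util.HashMap;
import java.util.Map;

public class AutoCorrect {
    static final Map<String, String> DICT = new HashMap<>();
    static {
        DICT.put("taht", "that");
        DICT.put("teh", "the");
    }

    // Replace every dictionary word; a word ends at any non-letter.
    static String correct(String s) {
        StringBuilder out = new StringBuilder();
        StringBuilder word = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (Character.isLetter(c)) {
                word.append(c);
            } else {
                // Flush the pending word, swapping it if it is in the map.
                out.append(DICT.getOrDefault(word.toString(), word.toString()));
                word.setLength(0);
                out.append(c);
            }
        }
        out.append(DICT.getOrDefault(word.toString(), word.toString()));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(correct("I know taht teh map works."));
        // prints "I know that the map works."
    }
}
```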
See Also
Recipe 4.17, Recipe 4.18, and Table 4-3
Sorry for the delay in getting the results out. There is a really funny story behind it with me running 7 hours of tests in debug mode then aggregating results and getting it all ready before I realized what I had done :-/
You can find the original contest here.
One of the key optimizations for the tokenizer is realizing that the order of processing is extremely important. If we know anything about the domain our code will run in, we can often make domain-specific optimizations by changing the ordering of our statements.
In English text, for instance, letters appear an order of magnitude more frequently than punctuation. To optimize for this case we should check the most frequent circumstances first. This is better shown with a simple example. The following two functions do exactly the same thing, but the second is much faster than the first.
static count CountWrongOrder(string s) {
    count ret = new count();
    foreach (char c in s) {
        if (c == ' ' || c == '\t' || c == '\n') {
            ret.spaces++;
        } else if (char.IsDigit(c)) {
            ret.numbers++;
        } else if (char.IsUpper(c)) {
            ret.uppercase++;
        } else if (char.IsLower(c)) {
            ret.lowercase++;
        }
    }
    return ret;
}

static count CountRightOrder(string s) {
    count ret = new count();
    foreach (char c in s) {
        if (char.IsLower(c)) {
            ret.lowercase++;
        } else if (c == ' ' || c == '\t' || c == '\n') {
            ret.spaces++;
        } else if (char.IsUpper(c)) {
            ret.uppercase++;
        } else if (char.IsDigit(c)) {
            ret.numbers++;
        }
    }
    return ret;
}
Right and wrong order
Run across the string "This is some normal english text. Occasionally you will also get a number such as 2" (concat'ed 10000 times), the second version is almost twice the speed of the first. In our most common case in the first example (a lower case letter) we have to fail through all of the other conditions, whereas in the second example it is our first condition. The goal should be to check the mutually exclusive conditions in an order that is correct when you statistically analyze the data.
I saw many submissions calling Data.ToCharArray() to get a character array representing the data so they could read it. There is another method on the string object that does a similar job which, although hidden from you, is much faster than ToCharArray(). Consider the following test.
static int ORCharsNewArray(string s) {
    char[] tmp = s.ToCharArray();
    int ret = 0;
    foreach (char c in tmp) {
        ret |= c;
    }
    return ret;
}
static int ORCharsSameArray(string s) {
    int ret = 0;
    foreach (char c in s) {
        ret |= c;
    }
    return ret;
}
static void Main(string[] args) {
    string foo = "123456789012345678901234567890";
    Stopwatch s = new Stopwatch();
    s.Start();
    for (int i = 0; i < 100000; i++) {
        int bar = ORCharsNewArray(foo);
    }
    s.Stop();
    Console.WriteLine(s.ElapsedTicks);
    s.Reset();
    s.Start();
    for (int i = 0; i < 100000; i++) {
        int bar = ORCharsSameArray(foo);
    }
    s.Stop();
    Console.WriteLine(s.ElapsedTicks);
}
ToCharArray vs get_Chars
In debug mode they have nearly identical performance, but in release the ORCharsSameArray function is about 3 times faster. When you use a foreach on a string (or if you index into it) a special method get_Chars is called (you can't call this on your own). We can see this occurring by looking at the IL in question.
.method private hidebysig static int32 ORCharsSameArray(string s) cil managed
{
.maxstack 2
.locals init (
[0] int32 num1,
[1] char ch1,
[2] string text1,
[3] int32 num2)
L_0000: ldc.i4.0
L_0001: stloc.0
L_0002: ldarg.0
L_0003: stloc.2
L_0004: ldc.i4.0
L_0005: stloc.3
L_0006: br.s L_0018
L_0008: ldloc.2
L_0009: ldloc.3
L_000a: callvirt instance char string::get_Chars(int32)
L_000f: stloc.1
L_0010: ldloc.0
L_0011: ldloc.1
L_0012: or
L_0013: stloc.0
L_0014: ldloc.3
L_0015: ldc.i4.1
L_0016: add
L_0017: stloc.3
L_0018: ldloc.3
L_0019: ldloc.2
L_001a: callvirt instance int32 string::get_Length()
L_001f: blt.s L_0008
L_0021: ldloc.0
L_0022: ret
}
IL of ORCharsSameArray
This method lets you view the data inside of the string as if it were an array of chars. Since the data is only being read, there is no need to generate an array. The reason that they are roughly the same speed in debug is that the method does not get inlined; in release mode the method is inlined and it is nearly as efficient as unsafe code.
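The same trap exists in other languages: materializing a copy of a read-only sequence buys nothing. Here is a rough Python analogue of the two C# functions, with list(s) playing the role of ToCharArray and direct iteration playing the role of get_Chars:

```python
def or_chars_new_array(s: str) -> int:
    tmp = list(s)        # materializes a copy, like ToCharArray()
    ret = 0
    for c in tmp:
        ret |= ord(c)
    return ret

def or_chars_same_array(s: str) -> int:
    ret = 0
    for c in s:          # reads characters in place, like get_Chars
        ret |= ord(c)
    return ret

s = "123456789012345678901234567890"
assert or_chars_new_array(s) == or_chars_same_array(s)
```

Both return the same value; only the second avoids allocating and filling a throwaway copy of the data.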
A few people tried threaded solutions; only one did it efficiently. Garmon got the algorithm right.
The problem with threading is locking. In order to maintain a hash properly between two threads one needs to set up critical sections around the hash. The big problem is that on every probe to the hash one has to worry about a resize of the hash by the other thread. To get around this one has to create a hash per thread; since each thread has its own hash, they can operate on it without locking, as the other thread will not touch it.
Once the two threads are complete one then must merge the two hashes. This will always be a O(n) operation at best generally O(hashsize) which is why threading was not much of an issue here unless you get absolutely huge data sets with lots of repetition. It takes a lot to make up for the O(n) operation on the copy, the creation of the second hash, the delay to actually start the thread, and the additional memory overhead caused by having two hashes.
The key to threads being successful is a high rate of repetition in the data!
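The per-thread hash idea can be sketched in Python with the standard library alone: each worker fills its own private Counter, so no locking is needed, and the partial counts are merged afterwards:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk: str) -> Counter:
    # each worker owns its hash -- no locks, no resize races
    c = Counter()
    for w in chunk.split():
        c[w] += 1
    return c

text = "the cat and the dog and the bird"
words = text.split()
mid = len(words) // 2
halves = [" ".join(words[:mid]), " ".join(words[mid:])]

with ThreadPoolExecutor(max_workers=2) as ex:
    partials = list(ex.map(count_words, halves))

# the merge is the unavoidable O(n) step discussed above
total = partials[0] + partials[1]
```

The merge cost, the second hash, and the thread startup delay all have to be paid back by the parallel counting, which is why the data needs to be large and repetitive for this to win.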
All of the submissions used if conditionals to tokenize the data. There were cases such as:
if ((c >= 'a') && (c <= 'z')) {
    builder.Append(char.ToUpper(c));
}
Conditional Tokenizing
Could we pre-generate this information into a table and simply do a table lookup?
enum Operations {
    EndWord = 0x0100,
    MoveNext = 0x0200
}

static string FormatEntry(int value) {
    return string.Format("0x{0:x4}", value);
}

static void MainToRun(string[] args) {
    Console.WriteLine("UInt16 [] map = {");
    for (int i = 0; i < 255; i++) {
        char c = (char)i;
        if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')) {
            Console.Write(FormatEntry(c | (int)Operations.MoveNext));
        } else if (c >= 'A' && c <= 'Z') {
            Console.Write(FormatEntry(char.ToLower(c) | (int)Operations.MoveNext));
        } else if (c == ' ' || c == '\t' || c == '\n' || c == '\r') {
            Console.Write(FormatEntry(0 | (int)Operations.EndWord));
        } else {
            Console.Write(FormatEntry(0));
        }
        if (i < 254) {
            Console.Write(", ");
        }
        if ((i + 1) % 8 == 0) {
            Console.Write("\n");
        }
    }
    Console.Write("\n}\n");
}
Code to generate map
Note that this code is doing any transformations that we may want and is also storing some additional information in the high bits of the Int16 (we only use 8 bits for the char). In particular it is storing a bit that tells us whether what was read is a word terminator, and a bit that tells us whether or not we should add to the current position of the output buffer. This code will produce output similar to the following:
static readonly UInt16 [] map = {
0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
0x0000, 0x0100, 0x0100, 0x0000, 0x0000, 0x0100, 0x0000, 0x0000,
0x0100, 0x0000, 0x0000, 0x0000, 0x0224, 0x0000, 0x0000, 0x0000,
0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x022d, 0x0000, 0x0000,
0x0230, 0x0231, 0x0232, 0x0233, 0x0234, 0x0235, 0x0236, 0x0237,
0x0238, 0x0239, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
0x0000, 0x0261, 0x0262, 0x0263, 0x0264, 0x0265, 0x0266, 0x0267,
0x0268, 0x0269, 0x026a, 0x026b, 0x026c, 0x026d, 0x026e, 0x026f,
0x0270, 0x0271, 0x0272, 0x0273, 0x0274, 0x0275, 0x0276, 0x0277,
0x0278, 0x0279, 0x027a, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000
}
Map outputted by example
Now that we have this map saved off we can write an extremely elegant tokenizer that is also faster than our prior incarnations which used conditionals.
public unsafe WordEntry[] CountWords(string _Text) {
    table.Clear();
    int loc = 0;
    int lastloc = 0;
    byte[] buf = new byte[_Text.Length * 2];
    fixed (byte* buffer = buf) {
        fixed (char* c = _Text) {
            char* current = c;
            char* stop = c + _Text.Length;
            while (current < stop) {
                UInt16 val = map[*current];
                int add = (val >> 9);
                if (add > 0) {
                    buffer[loc] = (byte)(val & 0xFF);
                    loc++;
                } else if (loc != lastloc && ((val >> 8) & 1) == 1) {
                    table.Increment(buffer + lastloc, loc - lastloc);
                    loc += 4;
                    lastloc = loc;
                }
                current++;
            }
            if (loc != lastloc) {
                table.Increment(buffer + lastloc, loc - lastloc);
            }
        }
    }
    return table.ToArray();
}
Map Based Parser
Clean, compact, and efficient … what more can you ask for?
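To make the idea concrete outside of C#, here is a minimal Python sketch of the same table-driven tokenizer. The hash-table bookkeeping (the loc += 4 count slots) is replaced by a plain dict, and the flag values mirror the Operations enum above:

```python
END_WORD = 0x0100   # bit 8: this character terminates a word
MOVE_NEXT = 0x0200  # bit 9: emit the low byte and advance the output

table = [0] * 256
for i in range(256):
    c = chr(i)
    if ('a' <= c <= 'z') or ('0' <= c <= '9'):
        table[i] = i | MOVE_NEXT
    elif 'A' <= c <= 'Z':
        table[i] = ord(c.lower()) | MOVE_NEXT  # lowercasing comes for free
    elif c in ' \t\n\r':
        table[i] = END_WORD

def count_words(text: str) -> dict:
    counts = {}
    word = []
    for b in text.encode('ascii', 'replace'):
        val = table[b]
        if val & MOVE_NEXT:
            word.append(val & 0xFF)
        elif word and (val & END_WORD):
            w = bytes(word).decode('ascii')
            counts[w] = counts.get(w, 0) + 1
            word = []
    if word:  # flush the trailing word
        w = bytes(word).decode('ascii')
        counts[w] = counts.get(w, 0) + 1
    return counts
```

As in the C# version, uppercase letters are folded by the table itself, and characters with no flags set are simply skipped without ending the current word.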
I gave myself a side challenge yesterday: could I tokenize the string two chars at a time with only a single branch in my loop? It took me a little while to come up with an answer to this problem, but one did exist. Here is the code.
Int32 chars = *current;
chars = (chars) >> 16 | ((chars & 0xFF) << 8);
Int32 processed = parsermap[chars];
Int16* tmp2 = (Int16*)tmp;
*tmp2 = (Int16)(processed & 0xFFFF);
int add = (processed >> 19); // number to add to output pointer
tmp += add;
tmp2 = (Int16*)current;
tmp2 += (processed >> 17) & 0x03; // number to add to buffer pointer (1 or 2)
current = (Int32*)tmp2;
if ((processed & 0x010000) > 0) {
    Int32* t = (Int32*)tmp;
    *t = 0;
    table.Increment(last, (int)(tmp - last));
    tmp += 4;
    last = tmp;
}
Parse 2 chars at a time with only one if statement?
In order to really look at this code you will also need to look at the code that generates a 64k map that it references (no, I will not post the 64k map here). The complete code and the map can be found in the MappedWordCounter.cs file. If people would like, I can explain further just how it works, but basically it sets high bits in the map to define how much to move forward both the buffer and the output pointers (it also stores a bit flag as to whether or not the 2 chars contained a word terminator, as did the previous tokenizing example). There is however one odd case which is worth discussing.
Consider what happens when you read two characters and the first is an ignored or termination character. In the case of an ignored character you don't want to end up with a "hole" in your array, so you must not move the output buffer. In this case one is added to the buffer pointer, putting it on an odd letter, and zero is added to the output as you don't want to include the character. This means that in this case you read and write the second character twice. A further special case occurs when you have two ignored or termination characters, except there the buffer moves two positions while the output moves zero.
In the tests where a new object is created for every call the hash table size becomes extremely important. If the table is too big it will be allocating useless memory and will need to clear that memory. If the table is too small it will have to go through numerous growth operations which are a fairly big hit. I have yet to find a “good” way of initializing the table except for using domain information such as “every 9 letters or so will on average produce a new word”.
In general it is better to err a bit on the side of caution and allocate slightly more memory to your hash table than to err in the other direction and force a growth of your table.
I made YoungCompressed use a 150,000 entry table for every run; check out its performance characteristics compared to the other entries, which were using a text.Length / 7 sized table.
No submissions used this but it can quickly speed up many of them. I am going to pick on Brandon Grossutti’s submission here but the optimization applies to all of them that use a dictionary<>. By simply changing Brandon’s code from
Dictionary<string, int> MyWords = new Dictionary<string, int>();
To
Dictionary<string, int> MyWords = new Dictionary<string, int> (1, StringComparer.Ordinal);
Took about 7% off the total running time of the submission. This is an important one to remember.
It is well known that code which is designed to be generic will often be less efficient than code tailor made for the job at hand (in other words generality comes at a loss of performance). The .NET dictionary and hash classes are both very generic. By writing your own version of these classes there is a performance gain to be had. I wrote a few varying forms of my hash table that are included with the code.
The first version of the table works in a very similar fashion to the standard table. It stores the string keys as char * and uses a modulus based system for finding the next available slot (i.e. slot = (slot + 1) % tablesize). This method is quite fast but there are a few optimizations which can be made.
One of these optimizations is to get rid of the modulus which is a slow operation on most machines (and it is used extremely often, every probe). In order to get rid of the modulus we need to make some special requirements on the hash table. To be exact you need to control the size of the hash table to only be a power of two (this can be seen in "bithash" and is in my overall submission).
/// <summary>
/// calculates the next highest power of two
/// </summary>
/// <param name="_Number"></param>
private static UInt32 CalculateNextPowerOfTwo(uint _Number) {
    for (int i = 8; i < 32; i++) {
        uint current = (uint)1 << i;
        if (current > _Number) {
            return current;
        }
    }
    return uint.MaxValue;
}
Code to find the next item that is a power of two given a number
If the table is a power of two you can use a bit mask instead of a modulus to ensure that your current value is within the bounds of your hash. Once the hash size is assured to be a power of two, one can calculate the bitmask for the hash table as hashsize - 1. All that is then required is the mask, which can be seen here.
uint slot = (uint)(Hash & m_HashMask);
The bit mask is significantly faster on most architectures; this change yielded nearly a 10% gain for the algorithm on my machine.
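The equivalence is easy to check in Python. The helper below mirrors the C# function above (including its 2^8 lower bound), and the loop shows that for power-of-two sizes the cheap mask and the slow modulus pick the same slot:

```python
def next_power_of_two(n: int) -> int:
    """Smallest power of two strictly greater than n (minimum 2**8)."""
    for i in range(8, 32):
        if (1 << i) > n:
            return 1 << i
    return 2**32 - 1

size = next_power_of_two(150000)   # 262144 == 2**18
mask = size - 1                    # 0x3FFFF

for h in (0x9E3779B9, 12345, 2**31 - 1):
    # slot via modulus and slot via bit mask are identical
    assert h % size == h & mask
```

This only holds because size is a power of two; for any other size the mask would map hash values to the wrong slots.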
Since I am using char* in my hash table, I do not need to initialize a specific string for it (and I can treat it as a uint32* when I need to do things like comparisons). The big problem you will run into here is that you need to keep the memory that the string is in pinned. This can either be done explicitly or by holding a GCHandle for the object (such as I do with the actual data in the hash table). If you do not do this you can end up with seemingly random bugs that occur when the GC compacts the heap (the string you were pointing to has moved).
In order to get around the nightmare of trying to maintain GCHandles my tokenizer puts the data into a big output buffer. This output buffer is unsafe and it is pinned for the duration of the operation (so we know it won’t move).
This methodology is not very general but it does help prevent a lot of other overhead.
One of the slowest items in the code is the equality compare of an entry in the hash table. Consider the following code
for (int j = 0; j < _Length; j++) {
    if (_Word[j] != Current->Chars[j]) {
        // failed
    }
}
Naïve Equality Compare
This code goes through character by character, comparing our string in the hash to the string we wish to use (keep in mind that this gets called at least once for every probe to the hash). A better way of doing this would be to unroll the loop a bit by using a (uint32*) to perform our comparison; that way we could compare two characters every iteration of the loop instead of just one (remember that a character is 16 bits).
We will however run into a problem comparing two characters at a time. What happens if we have a string with an odd number of characters? We only really have two choices about what to do here. The first option is to loop 1 character short and then check the last character, this is a generally good solution but it can be optimized further if you can control the data coming in.
Since there is only one client for this hash we can definitely control the data coming in. Imagine the words cat, dog, and “the” in memory as shown below.
C | A | T | D | O | G | T | H | E
In this case we would need to use the above method (where we special case the last character and read it separately). Since we control the memory layout however we can avoid this by “aligning” the memory as follows.
C | A | T | 0 | D | O | G | 0 | T | H | E | 0
By aligning our memory we have ensured that we can always use just our integer-based comparison to compare two characters at once! There is no need to special case; on a word with an odd length we simply compare the last letter together with the padding character past it. This optimization is completely safe so long as we are diligent about using the same character for padding every time.
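Here is a tiny Python sketch of the padding trick; byte strings stand in for the char buffer, and the zero pad byte is the assumed sentinel:

```python
PAD = b"\x00"

def store(word: bytes) -> bytes:
    """Pad odd-length words so every stored word is an even number of bytes."""
    return word + PAD if len(word) % 2 else word

def equal(a: bytes, b: bytes) -> bool:
    """Compare two stored words one 16-bit unit at a time -- no odd-tail case."""
    if len(a) != len(b):
        return False
    for i in range(0, len(a), 2):
        if a[i:i + 2] != b[i:i + 2]:
            return False
    return True

assert equal(store(b"cat"), store(b"cat"))      # both padded to cat + \x00
assert not equal(store(b"cat"), store(b"car"))
```

Because every stored word is padded the same way, the comparison loop never needs a special case for the last lone byte.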
As I said above a character is 16 bits, we do not however care about the high bits as we are only dealing with English text which can be treated as ASCII characters (8 bits). In order to do this we can store the data as a byte array. Since we are storing the data as bytes we can now compare four characters at a time when we have to compare our string.
Since we are using a map to handle translations for us, the conversion on the way in is pretty much free (it's just different values in our map). In general we should also save memory as we are only using 8 bits per character, but our alignment is now 4 bytes instead of the 2 it was in the char example. If we happen across an edge condition of many 1 character words we may end up actually using more memory.
There is also a downside to this optimization. When it comes time for us to take our data out of the hash table to create our return value we have to convert all of the data back from being a byte array of ASCII characters to being UTF-16 characters that .NET knows and loves. The code for this can be seen here.
UInt16* stop = (UInt16*)(ptr + Current->Length);
while (ptr < stop) {
    UInt16 chars = *ptr;
    *buf = ((chars >> 8) << 16) | (chars & 0xFF);
    ptr++;
    buf++;
}
Decompress byte data into a string 2 chars at a time
So in short, this optimization saves us time on the compare and saves us memory in most cases, but costs us time to get the string out. For English data it generally comes out to be about even, but if you are dealing with larger words and/or with high rates of repetition then this optimization can really pay off.
If you look through my code, I use a reflectored version of the string.GetHashCode() method to hash my strings (Alois does this as well). One place I sought to optimize was changing this algorithm, as the hash function gets called an insane number of times (at a minimum once per word). I tried two other algorithms: a variant of a Zobrist hash and a fairly simple hash that I came across. I am including the source of all three.
public static unsafe uint CalculateHash(char* _String, int _Length) {
    uint num1 = 0x15051505;
    uint num2 = num1;
    uint* numPtr1 = (uint*)_String;
    for (int num3 = (int)_Length; num3 > 0; num3 -= 4) {
        num1 = (((num1 << 5) + num1) + (num1 >> 0x1b)) ^ numPtr1[0];
        if (num3 <= 2) {
            break;
        }
        num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr1[1];
        numPtr1 += 2;
    }
    return (num1 + (num2 * 0x5d588b65));
}
Modified dotnet hash
public unsafe static Int32 Hash(char* str, int length) {
    UInt32* ptr = (UInt32*)str;
    UInt32* stop = (UInt32*)(str + length);
    UInt32 Hash = 0;
    while (ptr < stop) {
        Hash ^= (hashmap[ptr[0] & 0xFFFF] ^ hashmap[ptr[0] >> 16]);
        Hash ^= (hashmap[ptr[1] & 0xFFFF] ^ hashmap[ptr[1] >> 16]);
        ptr += 2;
    }
    return (Int32)Hash;
}
Zobrist variant
int i = 0;
UInt32* tmp = (UInt32*)_String;
while (_Length > 0) {
    i = (int)((i << 3) ^ (*tmp++));
    _Length -= 4;
}
return (uint)i;
Simple Hash
I pulled these into a separate application and wrote a simple performance measurement. For 10000000 identical strings I received the following results.
Algorithm        Time (in seconds)
DotNetHash       2.21
ZobristVariant   2.47
Simple           1.4
This would lead one to believe that Simple may actually be a better overall choice. Simple does, however, produce slower overall performance, as it does not do as good a job hashing, which ends up causing more collisions in the hash table. I have to say that I am fairly impressed with the .NET implementation performance-wise and continued to use it.
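The weakness of the simple hash is concrete: the 32-bit accumulator drops the top bits of earlier chunks as it shifts, so inputs that differ only in those bits collide. A Python sketch, where the & 0xFFFFFFFF emulates the C# Int32 truncation:

```python
def simple_hash(data: bytes) -> int:
    """The 'Simple' hash from above: h = (h << 3) ^ next 4-byte chunk."""
    h = 0
    for i in range(0, len(data), 4):
        chunk = int.from_bytes(data[i:i + 4].ljust(4, b"\x00"), "little")
        h = ((h << 3) ^ chunk) & 0xFFFFFFFF  # emulate 32-bit overflow
    return h

# bit 31 of the first chunk is shifted out when the second chunk arrives,
# so these two different 8-byte inputs hash identically:
w1 = b"abc\x20efgh"
w2 = b"abc\xa0efgh"
assert w1 != w2 and simple_hash(w1) == simple_hash(w2)
```

The two inputs differ only in the high bit of their fourth byte, which is exactly the bit the first shift pushes out of the 32-bit window.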
The results were extremely varied from entry to entry. Here is an explanation of the various tests (You can download them from the "attachment" at the bottom or from).
1) LotsOfWords (run 200 times): this test uses a string of 110,000 words separated by spaces. The thought behind this test is to try to catch people "cheating" with hashing algorithms etc.; it is also a worst case scenario for things like buffer management if you don't use a rolling buffer. Used with a new object every time, this test gives information as to the effectiveness of the initial hash sizing (if the hash starts off too small this test will suffer greatly due to copies).
2) FewWordLotsOfTimes (run 200 times): this test uses a small string that contains the oddities listed in the original post. It is designed to test that the hash counts properly (including for large numbers of items) and that the tokenizer is working properly. Used with a new object every time, this test also gives information as to the sizing of the hash, as if you make your hash too big initially you will suffer from creating or initializing a huge buffer on every iteration.
3) Small.txt (run 500000 times) this reads the file “small.txt” which is a short paragraph on premature optimization (scored as small)
4) Premature.txt (run 200000 times) this reads the file “premature.txt” which is a medium length document discussing premature optimization (scored as medium)
5) Warandpeace.txt (run 2000 times) this reads the file warandpeace.txt which is the full text of war and peace (scored as large)
6) Bible.txt (run 2000 times) this reads the file “bible.txt” which is the full version of the bible. This test is interesting since the bible has a higher rate of repetition than war and peace.
7) Walden.txt (run 2000 times) this reads the file “walden.txt” which is the full version of Walden by Henry Thoreau. This test is interesting because it has a lower rate of repetition than war and peace.
A few people had some slight problems with things like “$”, it was my mistake to update it after the fact … My guess was that they never saw the update after the first day so I let it slide and corrected it where I could.
Wilson was run separately from the rest. It was much slower than the rest as it used an arraylist resulting in an O(n) lookup. I do have to say though that it passed tests with flying colors off the bat (including items such as “$”).
My generalized entry in this would be YoungBitCompressed. It uses most of the optimizations listed above (power of two hash, buffer alignment, compressed buffers (as bytes), and a map based tokenizer). The other entry of interest should be YoungCompressed which I left as optimized for larger data sets (it always had a 150k hash table). This item is quite quick on the larger items (almost as fast as the power of two hash table) but it is very slow on the shorter data sets.
Now for the results (all times are in seconds) *drum roll* (when looking at these remember to multiply by 1.7 or so and have a laugh at the time I wasted running them without JIT optimizations :().
Counter                            Safe?  LotsOfWords  LotsofTimes  Small       Medium  Large    Bible    Walden
Young Mapped Compressed            No     13.71        38.09        26.92       156.96  234.4    354.78   54.84
YoungCompressed (const 150k hash)  No     14.07        29.66        1010.91     510.15  202.45   270.72   51.15
YoungBitCompressed                 No     14.96        30.07        21.03       127.92  189.3    265.54   43.58
Young                              No     15.7         36.36        26.63       148.82  223.06   303.39   50.23
Grossutti                          Yes    21.5         159.5        66.75       479.88  770.13   1296.41  174.95
Bowen                              Yes    82.28        257.28       100.6       739.53  1389.11  1842.78  362.4
Bushman                            Yes    25.63        110.78       65.35       406.65  621.41   857.29   144.66
Idzi                               Yes    15.49        136.32       53.09       381.15  698.01   1044.31  150.39
Kraus                              Yes    17.6         84.21        47.99       320.49  488.15   684.18   101.45
Garmon                             Yes    40.37        103.84       85.94       661.95  911.94   1078.82  193.9
Wilson                             Yes    34800        1200         1000 (est)  DNR     DNR      DNR      DNR
Wilson extrapolated: 10, 30, 8, 7, 23, 22, 6, 18, 5, 15
Kevin Idzi and Steve Bushman: this was a very heated race for second. Steve won the large category, but Kevin squeaked out on top of the smaller categories to win by a single point in a photo finish. Kevin takes second place and a copy of Donald Knuth's TAOCP V4F3 "Generating all Combinations and Partitions", plus the bonus for safe code only...
And of course … Alois Kraus will receive a copy of Things a Computer Scientist Rarely Talks About for winning all three categories way to go Alois!
Great job guys!
I hope everyone has had fun and learned something through this; I know I have in both regards. Let me know any questions/comments as I get ready for the next challenge a bit later in the week!
After speaking with a few people I believe that next time I will choose a slightly smaller problem (although my entry is only about 140 lines of code total). I will be posting another problem this week but would like to get people’s opinions prior to posting it as to whether
1) This problem size is ok
2) The time frame is ok, I understand that people are busy and getting 150 lines of heavily optimized code written can often be a difficult task
Please remember that my code is there for reference only, it is not a “right” answer or as optimized as it could possibly be… I am quite sure my entry could still be optimized by a factor of two. Maybe someone can come through and finish this up with some smart micro-optimizations? :)
Frans .. you could precalc those items .. of course in the case of the shift, the 1 cycle it takes to shift is less than the number of cycles to do a lookup. Let's propose that you replaced the 2 shifts and an and with 2 table lookups (or another table lookup and a split apart of the data). Let's presume that the table lookups take one cycle (which they don't). It would save 1 clock cycle ... Don't get me wrong, it's a savings, but I don't think that is the place to look for optimizations. A bigger concern of mine is that the compiler generates the following code on the read:
UInt16 val = map[*current];
000000c7 mov eax,dword ptr ds:[02277444h]
000000cc movzx edx,word ptr [edi]
000000cf cmp edx,dword ptr [eax+4]
000000d2 jae 00000161
000000d8 movzx edx,word ptr [eax+edx*2+8]
Now this is a place where some optimization can be done! I will post a version that uses an array instead of a pointer to avoid this.
Also, in going through some disassembled code, I noticed that
UInt16 val = map[*current];
if ((val >> 9) > 0) {
    buffer[loc] = (byte) (val & 0xFF);
    loc++;
} else if (loc != lastloc && ((val >> 8) & 1) == 1) {
    table.Increment(buffer + lastloc, loc - lastloc);
    loc += 4;
    lastloc = loc;
}
is slightly faster than the original code (by 1 cycle), as the add variable is no longer used in that example and does not really require an assignment (I am kind of surprised that the optimizer didn't pick up on this).
Kevin, a general rule of thumb (depending on the algorithm obviously) is about a 10-15% loss compared to a native high level language such as C++ (not including JIT time). Obviously native in, say, assembler is a whole different story ...
Most of my changes were algorithmic. I mainly used unsafe code as some of the algorithmic changes would not have done well in safe code (example: string buffering + aligning). It's funny you should mention this though .. stay tuned :)
http://codebetter.com/blogs/gregyoung/archive/2006/08/15/148292.aspx
QTreeView separate class implementation
I'm trying to implement a treeView widget as a separate class. The idea is I have a mainWindow with a pushButton that, when pressed, opens a file dialog from which the user selects a directory. The selected directory path is given to an instance of a separate TreeView class with a QFileSystemModel. The code looks something like this:
void MainWindow::openFileDialog()
{
    QString folderPath = QFileDialog::getExistingDirectory(this);
    view->openFile(folderPath);
}
The above is from MainWindow class and "view" is the instance of "TreeView" class which has only treeView widget in its UI file. Now treeView class sets up treeView in UI and gets this string path to set QFileSystemModel's root path by:
void TreeView::openFile(const QString& folderPath)
{
    model->setRootPath(folderPath);
    QModelIndex index = model->index(folderPath);
    model->setFilter(QDir::NoDotAndDotDot | QDir::AllDirs);
    ui->treeView->setRootIndex(index);
    ui->treeView->expand(index);
}
The TreeView class UI is added as a child of the MainWindow UI, so I can see the treeView widget in my mainwindow, but the treeView does not get populated. If I run everything in the TreeView class and bypass creating a new instance of the TreeView class ("view"), it works fine. Is creating a new instance causing the issue? I'm just passing the path variable to that instance and I thought the treeView should get populated. The folderPath gets passed fine, as I was able to read it in the openFile(const QString&) function. I just need some help in understanding what I'm doing wrong, thanks.
- mrjj Lifetime Qt Champion last edited by
Hi
Are you sure that the new instance of "TreeView" is the one you are looking at on the screen? The code looks fine,
and you say it works if you do it directly in "TreeView".
There should be nothing wrong with creating any number of instances.
@mrjj Hi mrjj
Thanks for quick response. I'm adding the UI of TreeView Widget in MainWindow UI file by promoted widget method.
@sogo
Hi
So in the real MainWin, you have promoted a Widget to be a "TreeView" and that works?
I'm asking because you have
view->openFile(folderPath);
and "view" here does not seem to be in ui->, so is that the new instance which does not want to work?
Can you show how you create and insert the "view" into the real MainWin?
@mrjj
Sorry for late response, I was trying something and now getting this error. So this is how I am setting my UI widget:
In MainWindow UI:
In TreeView UI:
This is what MainWindow looks like:
Code for MainWindow.cxx:
MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent)
    , ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    view = new TreeView;
    this->connect(ui->pushButton, SIGNAL(clicked()), this, SLOT(openFileDialog()));
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::openFileDialog()
{
    QString folderPath = QFileDialog::getExistingDirectory(this);
    view->openFile(folderPath);
}
code for TreeView.cxx:
#include "TreeView.h" #include "ui_TreeView.h" TreeView::TreeView(QWidget *parent) : QWidget(parent), ui(new Ui::TreeView) { ui->setupUi(this); model = new QFileSystemModel; } TreeView::~TreeView() { delete ui; } void TreeView::openFile(const QString& folderPath) { model->setRootPath(folderPath); QModelIndex index = model->index(folderPath); model->setFilter(QDir::NoDotAndDotDot | QDir::AllDirs); ui->treeView->setModel(model); ui->treeView->setRootIndex(index); ui->treeView -> expand(index); }
Edited:
The error was due to the model not being set on the treeView; sorry, it was a mistake in the code.
Hi
Before we look at the error
Why do you both promote it and then create a new one?
view = new TreeView; // this is a new one
this->connect(ui->pushButton, SIGNAL(clicked()), this, SLOT(openFileDialog()));
If you mean the one you have on mainWindow via promoting that would be
ui->widget
(you can rename it.. it's just called widget by default)
So do you really mean to create that new one, which you don't insert into MainWindow?
Just checking, as it's perfectly fine if you want it to be a popup that opens in a window over mainwin
and you are ok with it not being the one in MainWindow we look at.
I see, so I should not create a new instance, and even if I do, I should insert that instance into MainWindow instead of adding it manually?
If I add the TreeView UI manually as ui->widget, I can get that in my MainWindow, but how can I get the corresponding TreeView public function that sets the treeView to a specific model, "openFile(const QString&)"? Wouldn't I require an instance of that class in MainWindow?
@sogo
Hi
The promoted widget is of the right type.
That is the cool thing about promotion.
So
ui->widget->openFile(xxx) will work.
(Just rename it in Designer to call it something better and do a full recompile.)
So no, if you don't mean to have 2 of your TreeView then you should not create a new one.
Oh I see, I misunderstood the promote thing: it is actually taking the header file, so it is using the complete TreeView class. I mistook the UI for a separate class, and that's why I started creating a new instance of the TreeView class. Thanks, I think now I get it. Closing this issue.
https://forum.qt.io/topic/117602/qtreeview-separate-class-implementation/4
the underlying ECDSA algorithm – in fact it is obtained by multiplying the generator point on the curve by our private key. As any point on the curve, it therefore has an x-coordinate and a y-coordinate, both being 32-byte unsigned integers. So one way to encode the public key would be as follows.
- take the x-coordinate as a point, represented by an integer smaller than p
- convert this into a 32 byte hexadecimal string, using for instance big endian encoding
- do the same for the y-coordinate
- and concatenate these two strings to obtain a single 64 byte hexadecimal string
This encoding is simple, but it has a drawback. Remember that we encode not just a random pair of integers, but a point on the curve, so the x-coordinate and y-coordinate are related by the curve equation (for SECP256k1):
y^2 = x^3 + 7 modulo p
Thus given x, we almost know y – we know the square of y modulo p, and there can be at most two different roots of this equation. So we could reconstruct y if we have x and an additional bit that tells us which of the two solutions we need.
Let us now assume that p is odd. If y is a solution of the equation for a given value of x, then p – y (which is -y modulo p) is the second solution. As p is odd, exactly one of the two numbers y and p – y is even. We can therefore use an additional bit that is equal to y modulo 2 to distinguish the two solutions. It is convention to store this bit in a full additional byte, using the value 2 if y is even and the value 3 if y is odd, so that we obtain a representation of the public key (and in fact any other point on the curve) in at most 33 bytes: at most 32 bytes for the value of the x-coordinate and the additional byte containing the value of y modulo 2. This representation is called the compressed representation (see for instance the publication of the SECG, section 2.3).
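The recovery of y from x can be spelled out with nothing but Python integers. The sketch below uses the SECP256k1 parameters; because p is congruent to 3 modulo 4, a square root of a quadratic residue is obtained by raising it to (p+1)/4, and the parity then picks the right root. The test point is the well-known generator of the curve:

```python
# secp256k1 domain parameters
p = 2**256 - 2**32 - 977          # field prime, p % 4 == 3
b = 7                             # curve: y^2 = x^3 + 7

def decompress(prefix: int, x: int):
    """Recover the point (x, y) from prefix (2 = even y, 3 = odd y) and x."""
    y_squared = (pow(x, 3, p) + b) % p
    y = pow(y_squared, (p + 1) // 4, p)   # a square root modulo p
    if y % 2 != prefix - 2:               # wrong parity -> take p - y
        y = p - y
    return x, y

# generator point of secp256k1 (its y-coordinate is even -> prefix 0x02)
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert decompress(0x02, Gx) == (Gx, Gy)
```

This is exactly the computation a parser of compressed keys performs before it can do any arithmetic with the point.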
If there is a compressed representation, you might expect that there is also an uncompressed representation. This is simply the representation that we have described above, i.e. storing both x and y, with an additional twist: to be able to distinguish this from a compressed representation that always starts with 0x02 or 0x03, a leading byte with value 0x04 is added so that the total length of an uncompressed representation is at most 65 bytes. Since version 0.6.0, the bitcoin reference implementation defaults to using compressed keys (see the function
CWallet::GenerateNewKey).
Let us summarize what we have learned so far in a short Python code snippet that will take a private key (stored as integer in the variable d), calculate the corresponding point on the elliptic curve SECP256K1 using the ECDSA library and create a compressed representation of the result.
#
# Determine the public key from the
# secret d
#
import ecdsa
import binascii

curve = ecdsa.curves.SECP256k1
Q = d * curve.generator
#
# and assemble the compressed representation
#
x = Q.x()
y = Q.y()
pubKey = x.to_bytes(length=32, byteorder="big")
pubKey = binascii.hexlify(pubKey).decode('ascii')
if 1 == (y % 2):
    pubKey = "03" + pubKey
else:
    pubKey = "02" + pubKey
print("Compressed key: ", pubKey)
This way of encoding a public key is in fact not specific to the bitcoin network, but a standard that is used whenever a point on an elliptic curve needs to be encoded – see for instance RFC5480 by the IETF which is part of the X.509 standard for certificates.
However, this is still a bit confusing. If you know the nuts and bolts of the bitcoin protocol a bit, you will have seen that participants publish something that is called an address, which is a string similar to
mx5zVKcjohqsu4G8KJ83esVxN52XiMvGTY
That does not look at all like a compressed or uncompressed public key. We are missing something.
The answer is that an address is in fact not a public key, but it is derived from a public key. More precisely, it is an encoded version of a hash value of the public key. So given the address, it is easy to verify that this address belongs to a certain public key, but it is very hard to reconstruct the public key given the address.
To understand the relation between a public key and an address better, it is again time to take a look at the source code of the reference client. A good starting point is the RPC method
getnewaddress. This method is defined in the file
wallet/rpcwallet.cpp and creates an instance of the class
CBitcoinAddress which in turn is – surprise – derived from our old friend
CBase58Data. The comments are quite helpful, and it is not difficult to figure out that a bitcoin address is obtained as follows from a public key.
- create a hexadecimal compressed representation of the public key
- apply a double hash to turn this into a sequence of 20 bytes – first apply the hash algorithm SHA256, then RIPEMD160 (this is called a Hash160 in the bitcoin terminology as the output will have 160 bits)
- add a prefix to mark this as a public key address – the prefix is again defined in
chainparams.cpp and is zero for the main network and 111 for the test networks
- compute the hash256 checksum of the result and append its first four bytes
- apply Base58 encoding to the result
This is already very similar to what we have seen before and can be done in a few lines of Python code.
import hashlib

# hash256 and btc.utils.base58Encode are taken from the
# previous posts in this series
def hash160(s):
    _sha256 = hashlib.sha256(s).digest()
    return hashlib.new("ripemd160", _sha256).digest()

#
# Apply hash160
#
keyId = hash160(bytes.fromhex(pubKey))
#
# Append prefix for regtest network
#
address = bytes([111]) + keyId
#
# Add checksum
#
chk = hash256(address)[:4]
#
# and encode
#
address = btc.utils.base58Encode(address + chk)
print("Address: ", address)
Heureka! If we run this, we get exactly the address
mx5zVKcjohqsu4G8KJ83esVxN52XiMvGTY that the bitcoin client returned when we started our little journey at the beginning of the post on private keys.
As always, the full source code is also available in the GitHub repository. If you want to run the code, simply enter
$ git clone
$ cd bitcoin
$ python Keys.py
That was it for today. We have now covered the basics of what constitutes participants in the bitcoin network. In the next few posts in this series, we will look at the second main object in the bitcoin world – transactions. We will learn how to interpret transactions, and will eventually be able to manually create a transaction to instruct a payment, sign it, hand it over to our test network and see how it is processed.
2 thoughts on “Keys in the bitcoin network: the public key”
https://leftasexercise.com/2018/03/08/keys-in-the-bitcoin-network-the-public-key/
In the previous article we introduced Redux for managing the user interface state and added some rudimentary animation.
In this addition we will refactor the application models and introduce some testing.
To follow along with this article checkout the commit b01c12d.
$ git clone
$ npm install
$ git checkout b01c12d
Fixing the Model
At this point in the implementation it is time to start thinking about the logic of Tetris and how we will implement things like collision detection, boundaries and collapsing rows. This is stuff that I don’t want in my React components because it is application logic that should be a core part of my application.
I have made a mistake with my domain modelling by representing the different Tetrominos as different types, when in fact they only need to be different configurations of the same type (
Shape).
Also, I have come to the realization that a Tetromino is not a set of points with a standard orientation that can be rotated. It is better to think of a Tetromino as a set of rotations of a shape (I call the rotations N, S, E and W).
The ‘Shape’ Type
A ‘Shape’ is a kind of Tetromino. It has a name and a function that returns the shape’s points given a rotation.
// a tetromino
export class Shape {
  constructor(name, rotator) {
    this.name = name;
    this.rotator = rotator;
  }
  pointsRotated(rotation) {
    return this.rotator(rotation);
  }
}
With the
Shape type we can now fully define all possible rotations of all possible Tetrominos. I put them in a dictionary so that I can easily access the one I want (e.g.
shapes['Z']):
// dictionary of shape type to square offsets
export var shapes = {
  'O': new Shape('O', rotation =>
    [new Point(1,1), new Point(1,2), new Point(2,1), new Point(2,2)]),
  'I': new Shape('I', rotation => {
    switch (rotation) {
      case 'N': return [new Point(1,1), new Point(2,1), new Point(3,1), new Point(4,1)];
      case 'E': return [new Point(2,1), new Point(2,2), new Point(2,3), new Point(2,4)];
      case 'S': return [new Point(1,1), new Point(2,1), new Point(3,1), new Point(4,1)];
      case 'W': return [new Point(2,1), new Point(2,2), new Point(2,3), new Point(2,4)];
    }
  }),
  'T': new Shape('T', rotation => {
    switch (rotation) {
      case 'N': return [new Point(1,1), new Point(1,2), new Point(2,2), new Point(1,3)];
      case 'E': return [new Point(1,2), new Point(2,2), new Point(3,2), new Point(2,1)];
      case 'S': return [new Point(1,2), new Point(2,1), new Point(2,2), new Point(2,3)];
      case 'W': return [new Point(1,1), new Point(2,1), new Point(3,1), new Point(2,2)];
    }
  }),
  'L': new Shape('L', rotation => {
    switch (rotation) {
      case 'N': return [new Point(1,1), new Point(2,1), new Point(1,2), new Point(1,3)];
      case 'E': return [new Point(1,1), new Point(1,2), new Point(2,2), new Point(3,2)];
      case 'S': return [new Point(1,3), new Point(2,1), new Point(2,2), new Point(2,3)];
      case 'W': return [new Point(1,1), new Point(2,1), new Point(3,1), new Point(3,2)];
    }
  }),
  'Z': new Shape('Z', rotation => {
    switch (rotation) {
      case 'N': return [new Point(1,1), new Point(1,2), new Point(2,2), new Point(2,3)];
      case 'E': return [new Point(1,2), new Point(2,2), new Point(2,1), new Point(3,1)];
      case 'S': return [new Point(1,1), new Point(1,2), new Point(2,2), new Point(2,3)];
      case 'W': return [new Point(1,2), new Point(2,2), new Point(2,1), new Point(3,1)];
    }
  })
};
The ‘Piece’ Type
To be able to work with instances of shapes on the game board we need a type to represent an instance of a shape with an offset and rotation. That type is
Piece. Note that the rotation defaults to ‘N’, and the offset defaults to row 1 column 10 (top centre).
// an instance of a tetromino on the board
export class Piece {
  constructor(shape, offset = new Point(1,10)) {
    this.shape = shape;
    this.offset = offset;
    this.rotation = 'N';
  }
  points() {
    return this.shape.pointsRotated(this.rotation)
      .map((point, ix) => point.add(this.offset));
  }
  static rotations() {
    return ['N','E','S','W'];
  }
}
The set of possible rotations is given by the static function
rotations(). As a static function it is accessed of the type, not an instance of the type, e.g.
Piece.rotations().
The
points() function returns the squares (points) that need to be drawn for the piece. This is done by starting with the correct rotation of the piece's shape and then adding the Piece's offset to each point.
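The Point model referenced throughout is not shown in the article. A minimal sketch consistent with its usage (row/col fields, add for applying offsets, fallOne for Game.tick()) could look like the following; treating offsets as 1-based positions is an assumption inferred from the tests further down, and the export keyword is omitted here:

```javascript
// Sketch of the Point model (an assumption, not the article's listing)
class Point {
  constructor (row, col) {
    this.row = row
    this.col = col
  }
  // combine a shape point with a piece offset; offsets appear to be
  // 1-based positions, so an offset of (1,1) leaves a point unchanged
  add (other) {
    return new Point(this.row + other.row - 1, this.col + other.col - 1)
  }
  // move one row down, used when the piece falls
  fallOne () {
    return new Point(this.row + 1, this.col)
  }
  equals (other) {
    return this.row === other.row && this.col === other.col
  }
}
```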
The ‘Game’ Type
To coordinate the game of Tetris we introduce a new type
Game. Game keeps track of the currently falling piece (there can be only one) and the rubble left by all pieces that have fallen previously. Once a piece has finished falling it can no longer be stored as a piece because Tetris allows full rows to collapse, so the
Game converts fallen pieces to rubble which is just a collection of points.
export class Game {
  constructor() {
    this.rows = 15;
    this.cols = 20;
    this.rubble = [];
    this.startAPiece();
  }
  tick() {
    this.fallingPiece.offset = this.fallingPiece.offset.fallOne();
    if (this.fallingPiece.maxRow() >= this.rows) {
      this.convertToRubble();
    }
    return this;
  }
  convertToRubble() {
    this.rubble = this.rubble.concat(this.fallingPiece.points());
    this.startAPiece();
  }
  startAPiece() {
    this.fallingPiece = new Piece(shapes.selectRandom());
  }
  rotate() {
    this.fallingPiece.rotate();
    return this;
  }
}
startAPiece() is a method that initializes a new falling
Piece. To choose the
Shape for the new piece it uses the
selectRandom() method, which randomly chooses one of the five possible Tetromino shapes:
shapes.selectRandom = function() {
  var index = Math.floor(Math.random()*1000000%5);
  return shapes[Object.keys(shapes)[index]];
}
tick() is the method that advances the game by one time unit, by moving the falling piece down by one position. This is done by changing the falling piece’s offset. If the falling piece hits the lower boundary (
this.fallingPiece.maxRow() >= this.rows) then the piece is converted to rubble.
convertToRubble() adds the points from the current falling piece to the existing collection of rubble. It then delegates to
startAPiece() to create a new falling piece.
The React Components
Much of the refactoring in this episode has been to move the application logic into a domain model and away from the React components. The React components can now be very simple, which is good.
From the outside in we start with
GameView. This is a React component responsible for rendering the entire game.
export var GameView = React.createClass({
  render: function () {
    return <div className="border"
                style={{width: this.props.game.cols*25, height: this.props.game.rows*25}}>
      <PieceView piece={this.props.game.fallingPiece} />
      <RubbleView rubble={this.props.game.rubble} />
    </div>;
  }
});
The game is rendered as a div with 25 pixels for each row and column. Within the
GameView there is a
PieceView, which renders the current (falling) piece, and a
RubbleView which renders the rubble of all pieces that have fallen previously.
PieceView has a single prop called
piece. This is how we pass data into the
PieceView.
piece is expected to be an instance of the
Piece model type.
export var PieceView = React.createClass({
  render: function () {
    return <div>
      {this.props.piece.points().map(sq =>
        <Square key={count++} row={sq.row} col={sq.col} />)}
    </div>;
  }
});
PieceView is very simple. It extracts the points from the piece and converts each one to a
Square element.
Square is a component responsible for rendering a single point.
export var Square = React.createClass({
  render: function() {
    var s = {
      left: (this.props.col-1) * 25 + 'px',
      top: ((this.props.row-1) * 25) + 'px'
    };
    return <div className="square" style={s}></div>;
  }
});
The ‘App’ Component
App (app.js) has been kept simple by moving a lot of the logic into models. It has not changed much from the previous edition.
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import * as Components from './components';
import * as Model from './model';
import {createStore} from 'redux';

function reducer(state = new Model.Game(), action) {
  switch (action.type) {
    case 'TICK':
      return state.tick();
    default:
      return state;
  }
}

let store = createStore(reducer);

store.subscribe(() => {
  ReactDOM.render(<Components.GameView game={store.getState()} />,
    document.getElementById('container'));
});

setInterval(() => store.dispatch({ type: 'TICK' }), 500);
The result is that we now have pieces falling one after another and stopping at the boundary of the game area.
Testing
In JavaScript at least, if you haven’t tested it you don’t know it works. I want to add some unit tests and I want to be able to verify them without a browser, therefore I choose mocha as my test runner.
babel-register is an adapter that gives mocha the ability to work with ES2015 via babel.
npm install --save mocha babel-register
Mocha runs in the browser or in node.js and has a hierarchical test format that I like.
To be able to run my tests via npm, and without a global install of mocha, I add the following script to
package.json.
"scripts": { "build": "browserify -t [ babelify --presets [ react es2015 ] ] app.js -o bundle.js", "test": "node_modules/mocha/bin/mocha --compilers js:babel-register" }
Now I can run my tests with:
npm run test
but first I will need some tests. Within the
test directory (mocha convention) I write my model tests in a file called
modelTests.js. To start I need to import an assertion library and the module I want to test:
var assert = require('assert');
var Models = require('../model');
The first tests I add are to check that shapes have the points that I expect them to:
function pieceHasPoints(piece, points) {
  return points.every(item => piece.hasPoint(item));
}

describe('models', function () {
  describe('Piece', ()=> {
    describe('hasPoint', ()=> {
      var piece = new Models.Piece(Models.shapes.I, new Models.Point(1,1));
      it('should have (1,1)', ()=> assert(piece.hasPoint(new Models.Point(1,1))));
      it('should have (2,1)', ()=> assert(piece.hasPoint(new Models.Point(2,1))));
      it('should have (3,1)', ()=> assert(piece.hasPoint(new Models.Point(3,1))));
      it('should have (4,1)', ()=> assert(piece.hasPoint(new Models.Point(4,1))));
      it('should not have (2,2)', ()=> assert(!piece.hasPoint(new Models.Point(2,2))));
      it('should not have (1,2)', ()=> assert(!piece.hasPoint(new Models.Point(1,2))));
      it('should not have (3,2)', ()=> assert(!piece.hasPoint(new Models.Point(3,2))));
      it('should not have (3,3)', ()=> assert(!piece.hasPoint(new Models.Point(3,3))));
    });
  });
});
Next I check that rotation works correctly:
describe('rotation', ()=> {
  describe('general rotation', ()=> {
    it('should rotate clockwise indefinitely', ()=> {
      var piece = new Models.Piece(Models.shapes.I);
      assert.equal(piece.rotation, 'N');
      piece.rotate();
      assert.equal(piece.rotation, 'E');
      piece.rotate();
      assert.equal(piece.rotation, 'S');
      piece.rotate();
      assert.equal(piece.rotation, 'W');
      piece.rotate();
      assert.equal(piece.rotation, 'N');
      piece.rotate();
      assert.equal(piece.rotation, 'E');
      piece.rotate();
      assert.equal(piece.rotation, 'S');
      piece.rotate();
      assert.equal(piece.rotation, 'W');
      piece.rotate();
      assert.equal(piece.rotation, 'N');
    });
  });
  describe('rotating an I', ()=> {
    var piece = new Models.Piece(Models.shapes.I, new Models.Point(1,1));
    it('should have the expected points to start with', ()=>
      assert(pieceHasPoints(piece, [new Models.Point(1,1), new Models.Point(2,1),
                                    new Models.Point(3,1), new Models.Point(4,1)])));
    it('should rotate to the correct E position', ()=> {
      piece.rotate();
      assert(pieceHasPoints(piece, [new Models.Point(2,1), new Models.Point(2,2),
                                    new Models.Point(2,3), new Models.Point(2,4)]));
    });
    it('should rotate to the correct S position', ()=> {
      piece.rotate();
      assert(pieceHasPoints(piece, [new Models.Point(1,1), new Models.Point(2,1),
                                    new Models.Point(3,1), new Models.Point(4,1)]));
    });
    it('should rotate to the correct W position', ()=> {
      piece.rotate();
      assert(pieceHasPoints(piece, [new Models.Point(2,1), new Models.Point(2,2),
                                    new Models.Point(2,3), new Models.Point(2,4)]));
    });
    it('should rotate back to N', ()=> {
      piece.rotate();
      assert(pieceHasPoints(piece, [new Models.Point(1,1), new Models.Point(2,1),
                                    new Models.Point(3,1), new Models.Point(4,1)]));
    });
  });
});
Running these tests as described produces the output:
> node_modules/mocha/bin/mocha --compilers js:babel-register

  models
    Piece
      hasPoint
        ✓ should have (1,1)
        ✓ should have (2,1)
        ✓ should have (3,1)
        ✓ should have (4,1)
        ✓ should not have (2,2)
        ✓ should not have (1,2)
        ✓ should not have (3,2)
        ✓ should not have (3,3)
    rotation
      general rotation
        ✓ should rotate clockwise indefinitely
      rotating an I
        ✓ should have the expected points to start with
        ✓ should rotate to the correct E position
        ✓ should rotate to the correct S position
        ✓ should rotate to the correct W position
        ✓ should rotate back to N

  14 passing (12ms)
Next Time…
The next installment of the series will look at handling user input so that we can allow the user to position pieces (move left and right) and trigger rotation. There is work to be done to add collision detection and improve the development workflow.
https://www.withouttheloop.com/articles/2016-01-04-tetris4/
I have set up nested routing for a project I’m working on. The routing
config file looks like this:
resources :projects do
resources :key_questions
end
I have this working for other aspects of my site just fine, and
actually have no problems at all except for when I’m trying to destroy
a key_question entry.
Here is my destroy method of the key_questions controller:
def destroy
  @key_question = KeyQuestion.find(params[:id])
  @key_question.destroy

  respond_to do |format|
    format.html { redirect_to( project_key_questions_path(session[:project_id]) ) }
    format.xml  { head :ok }
  end
end
I use the same redirection for other links on the site and they work
fine, but when calling the destroy action I get the error:
No route matches “/projects/53/key_questions”
If I then move my cursor to the URL bar and push enter, it loads that
same page just fine without a routing error. What is going on here???
I should also note that the database entry is not being removed.
My Config:
Rails 3.0, Ruby 1.9.2
Thank you so much to anyone who can offer some advice!
- Chris
https://www.ruby-forum.com/t/strange-routing-error/197830
[ Split from original support question here: ]
Setting up firejail is relatively easy, and the included default profiles thoroughly enhance security for the programs they are for. You can configure firejail for further needs and for additional programs. How complex you want to make it is up to you. I'll provide an overview below first of how to install it and how to use the default profiles. Then if you want it you can look in to fine-tuning the default profiles or writing your own profile files.
While there are a lot of options for fine-tuning and writing your own profiles, I'll try and show you foremost the possibilities that I think will be of common interest to those that want to take firejail further. But again, the default profiles already boost your security so there is no need to go here unless you want to.
The website best summarizes what firejail does.
Installation
Whether you are using Linux Mint 17.x or LMDE 2, the installation of firejail is as easy as:
- Download and save to disk the firejail .deb file for your architecture (32 bit or 64 bit) from the website: ... /firejail/. If you also want the GUI firetools program find that here (firetools will give you a menu window from which to launch applications for which it has a profile): ... firetools/.
- Double-click the downloaded file in your file manager to launch the installer. It should install without problems.
Usage
Firejail comes with a bunch of default profiles for common programs that are either Internet connected or run untrusted code on your computer. You can find the default profiles in /etc/firejail. To start a program using one of these profiles just prefix the command with "firejail". So for example to start Firefox with the default firejail profile run the command "firejail firefox" (close running Firefox first). Even if you start a program with firejail for which there is no profile defined, it will get some default confinement (see at the end of this comment for the defaults).
Now this isn't very convenient so you'll want to customize the menu launcher for applications you want to run with firejail. AFAIK on all Linux Mint editions you can right-click on the menu button and from one of the options in the context menu go to the menu editor. There you can edit the command associated with a menu launcher. Just prefix the command with "firejail ".
You can also manually copy the .desktop file for the application you want to run with firejail from /usr/share/applications to ~/.local/share/applications and edit the copied file (this is what the menu editor also does). Replace the "Exec=" line to start with "Exec=firejail ". We can also do this in one go for all installed applications for which there is a default firejail profile with this one command:
Code: Select all
mkdir -p ~/.local/share/applications; for profile in $(basename -s .profile /etc/firejail/*.profile); do if [[ -f /usr/share/applications/$profile.desktop ]]; then sed -r 's/^(Exec=)/\1firejail /' /usr/share/applications/$profile.desktop > ~/.local/share/applications/$profile.desktop; echo $profile configured to run in firejail; fi; done
You can see which applications are running in a sandbox provided by firejail with this command (in case you want to check you set things up correctly):
Code: Select all
firejail --list
A more verbose listing, showing also sub-processes running in the sandbox, is with:
Code: Select all
firejail --tree
Fine-tuning default profiles
I would recommend you don't edit the profiles in /etc/firejail as these will be overwritten when you install another version of firejail. If you have one or two options you want to add for a program you can just add them as command line parameters to firejail. So for example say you want to blacklist your /backups directory, you would start firefox as: "firejail --blacklist=/backups -- firefox". (The -- before the firefox command signals the end of options for firejail.) This would use the default firefox profile but with this additional parameter.
You can find parameters you can use in the firejail manpage ("man firejail"). You don't need to add parameters for your program to already benefit from additional security. If you have certain additional needs this can be a quick and easy way to tailor the default profiles.
Some common parameters you might have a need for to add:
- --blacklist=dirname_or_filename — makes the directory or file inaccessible
- --cpu=cpu-number,cpu-number,cpu-number — sets which CPU cores the program will be able to use
- --net=none — deny the program network access
- --private — gives the program a private copy of your home directory that is discarded after the program closes
- --private=directory — use the given directory as the home directory for the program, it is not discarded after the program closes
- --tmpfs=dirname — gives the program an empty directory for the given directory that is discarded after the program closes
Custom profiles / understanding default profiles
You might want to write your own profiles for further customization of the default profiles or to add profiles for other applications. Custom profiles you can store in ~/.config/firejail. You can find information on the available settings in the firejail-profile manpage ("man firejail-profile").
If you want to understand the default profiles that information is also very useful.
Let's look at Firefox's default profile as an example (/etc/firejail/firefox.profile):
Code: Select all
# Firejail profile for Mozilla Firefox (Iceweasel in Debian)
noblacklist ${HOME}/.mozilla
include /etc/firejail/disable-mgmt.inc
include /etc/firejail/disable-secret.inc
include /etc/firejail/disable-common.inc
include /etc/firejail/disable-devel.inc
caps.drop all
seccomp
protocol unix,inet,inet6,netlink
netfilter
tracelog
noroot
whitelist ${DOWNLOADS}
whitelist ~/.mozilla
whitelist ~/.cache/mozilla/firefox
whitelist ~/dwhelper
whitelist ~/.zotero
whitelist ~/.lastpass
whitelist ~/.vimperatorrc
whitelist ~/.vimperator
whitelist ~/.pentadactylrc
whitelist ~/.pentadactyl
whitelist ~/.keysnail.js
whitelist ~/.config/gnome-mplayer
whitelist ~/.cache/gnome-mplayer/plugin
include /etc/firejail/whitelist-common.inc
You see lines starting with a hash (#) are comments. At first this profile includes four other files, and at the end it includes another file. You can look into these on your own but to summarize:
- disable-mgmt.inc — makes inaccessible system management commands (/sbin and /usr/sbin directories, and a couple of commands)
- disable-secret.inc — makes inaccessible secret files in your home directory (SSH keys, Gnome and KDE keyrings, GPG keys, etc.)
- disable-common.inc — makes inaccessible files from other browsers, with the above "noblacklist ${HOME}/.mozilla" line ensuring the files for Firefox aren't made inaccessible (=blacklisted).
- disable-devel.inc — makes inaccessible development commands (like compilers, debug tools, scripting tools, and so on)
- whitelist-common.inc — makes accessible common files and directories that most graphical programs will need
The "seccomp" line enables a filter for which system calls the program can make. Better explained on the firejail blog: ... omp-guide/. The "protocol" line further tailors the system call filter for networking.
The "netfilter" line is there so a default network filter is enabled for if you set up a new network namespace.
The "tracelog" line makes it so any violations where the program tries to access blacklisted files or directories will be logged in /var/log/syslog.
The "noroot" line disables the root user in the sandbox.
The "whitelist" lines that follow make accessible files and directories that would be used by Firefox. The modifications to whitelisted files and directories are persistent, everything else written to your home directory is discarded when the sandbox is closed.
On top of this also the defaults apply:
The sandbox consists of a chroot filesystem build in a new mount namespace, and new PID [can't see processes running outside the sandbox] and UTS [can have its own hostname] namespaces. The default Firejail filesystem is based on the host filesystem with the main directories mounted read-only. Only the /home and /tmp directories are writeable [unless overruled with whitelist, blacklist, tmpfs, or private settings].
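Putting the profile options above together, here is a sketch (my own, not from the original post) that creates a small custom profile for a hypothetical program called myapp, reusing the include files and options seen in the Firefox profile:

```shell
# Sketch only: custom profile for a hypothetical program "myapp".
# Profiles in ~/.config/firejail take precedence over /etc/firejail.
mkdir -p ~/.config/firejail
cat > ~/.config/firejail/myapp.profile <<'EOF'
# reuse the stock blacklists
include /etc/firejail/disable-mgmt.inc
include /etc/firejail/disable-secret.inc
include /etc/firejail/disable-common.inc
# drop capabilities, filter syscalls, disable root inside the sandbox
caps.drop all
seccomp
noroot
# this program gets no network access at all
net none
EOF
# "firejail myapp" should now pick the profile up automatically
```

Since firejail matches profiles by program name, no extra option should be needed; --profile=/path/to/file can force a specific profile file if you name it differently.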
https://forums.linuxmint.com/viewtopic.php?p=1067011&sid=66d038608062f6d29c4f633ddd83c7ba
mona is a Javascript library for easily writing reusable, composable parsers.
It makes parsing complex grammars easy and fun!
With
mona, you simply write some Javascript functions that parse small pieces
of text and return any Javascript value, and then you glue them together into
big, intricate parsers using
combinators to... combine them! No custom syntax
or separate files or separate command line tools to run: you can integrate this
into your regular JS app.
It even makes it really really easy to give excellent error messages, including line and column numbers, and messages with what was expected, with little to no effort.
New parsers are hella easy to write -- give it a shot! And if you're familiar with Parsec, then you've come to the right place. :)
parse
parseAsync
value
bind
fail
label
token
eof
delay
log
map
tag
lookAhead
is
isNot
and
or
maybe
not
unless
sequence
join
followedBy
split
splitEnd
collect
exactly
between
skip
range
stringOf
oneOf
noneOf
string
alphaUpper
alphaLower
alpha
digit
alphanum
space
spaces
text
trim
trimLeft
trimRight
eol
natural
integer
real
float
cardinal
ordinal
shortOrdinal
$ npm install mona
You can directly require
mona through your module loader of choice, or you can
use the prebuilt UMD versions found in the
browser/ directory:
var mona = require('mona')
import mona from 'mona'
define(['node_modules/mona/browser/mona'], function (mona) { ... })
<script src="node_modules/mona/browser/mona.min.js"></script>
var mona = require('mona')

// a parser for comma-separated lists of integers
function parseIntList (str) {
  return mona.parse(mona.split(mona.integer(), mona.string(',')), str)
}

parseIntList('1,2,3,49829,49,139')
// => [1, 2, 3, 49829, 49, 139]
// a small csv parser: lines separated by newlines, fields separated by
// commas, with support for double-quoted fields containing escaped quotes
function csv () {
  return mona.splitEnd(line(), mona.eol())
}
function line () {
  return mona.split(cell(), mona.string(','))
}
function cell () {
  return mona.or(quotedCell(), mona.text(mona.noneOf(',\n')))
}
function quotedCell () {
  return mona.between(mona.string('"'),
                      mona.string('"'),
                      mona.text(quotedChar()))
}
function quotedChar () {
  return mona.or(mona.noneOf('"'),
                 mona.and(mona.string('""'), mona.value('"')))
}

mona.parse(csv(), 'foo,bar\n"b""az",quux\n')
// => [['foo', 'bar'], ['b"az', 'quux']]
mona is a package composed of multiple other packages, re-exported through a
single module. You have the option of installing
mona from npm directly, or
installing any of the subpackages and using those independently.
This API section is organized such that each parser or function is listed under the subpackage it belongs to, along with the name of the npm package you can find it in.
@mona/parse
This module or one of its siblings is needed in order to actually execute defined parsers. Currently, it exports only a single function: a synchronous parser runner.
> parse(parser, string[, opts]) -> T
Synchronously executes a parser on a given string, and returns the resulting value.
{Parser<T>} parser- The parser to execute.
{String} string- String to parse.
{Opts} [opts]- Options object.
{Boolean} [opts.throwOnError=true]- If truthy, throws a ParserError if the parser fails; if falsy, the failed ParserState is returned instead of a value.
{String} [opts.fileName]- filename to use for error messages.
mona.parse(mona.token(), 'a')
// => 'a'

mona.parse(mona.integer(), '123')
// => 123
@mona/parse-async
This module exports only a single function: an asynchronous parser runner. You need this module or something similar in order to actually execute your parsers.
> parseAsync(parser, callback[, opts]) -> Handle
Executes a parser asynchronously, returning an object that can be used to manage the parser state.
You can feed new data into the parsing process by calling the returned handle's
#data() method. Unless the parser given tries to match
eof(), parsing will
continue until the handle's
#done() method is called.
{Function} parser- The parser to execute.
{AsyncParserCallback} callback- node-style 2-arg callback executed once per successful application of
parser.
{Object} [opts]- Options object.
{String} [opts.fileName]- filename to use for error messages.
var handle = mona.parseAsync(mona.token(), function (err, token) {
  if (err) { throw err }
  console.log('Got a token:', token)
})
handle.data('foo')
// logs:
// > Got a token: f
// > Got a token: o
// > Got a token: o
@mona/core
The core parser package contains essential and dev-utility parsers that are
intended to be the core of the rest of the parser libraries. Some of these are
very low level, such as
bind(). Others are not necessarily meant to be used in
production, but can help with debugging, such as
log().
> value(val) -> Parser<T>
Always succeeds with
val as its value, without consuming any input.
{T} val- value to use as this parser's value.
mona.parse(mona.value('foo'), '')
// => 'foo'
> bind(parser, fun) -> Parser<U>
Calls
fun on the value from
parser. Fails without executing
fun if
parser fails.
{Parser<T>} parser - The parser to execute.
{Function(T) -> Parser<U>} fun - Function called with the resulting value of parser.
mona.parse(mona.bind(mona.token(), function (x) {
  return mona.value(x + '!')
}), 'a')
// => 'a!'
> fail([msg[, type]]) -> Parser<Fail>
Always fails without consuming input. Automatically includes the line and column
positions in the final
ParserError.
{String} [msg='parser error']- Message to report with the failure.
{String} [type='failure']- A type to apply to the ParserError.
> label(parser, msg) -> Parser<T>
Labels a parser failure by replacing its error messages with msg.
{Parser<T>} parser - Parser whose errors to replace.
{String} msg - Error message to replace errors with.

// => unexpected eof
// => expected thing
> token([count]) -> Parser<String>
Consumes a single item from the input, or fails with an unexpected eof error if there is no input left.
{Integer} [count=1] - Number of tokens to consume. Must be > 0.
// => 'a'
> eof() -> Parser<true>
Succeeds with a value of true if there is no more input to consume.
// => true
> delay(constructor, ...args) -> Parser<T>
Delays calling of a parser constructor function until parse-time. Useful for recursive parsers that would otherwise blow the stack at construction time.
{Function(...T) -> Parser<T>} constructor - A function that returns a Parser.
{...T} args - Arguments to apply to the constructor.

// The following would usually result in an infinite loop:
function recursive () { return or(token(), recursive()) }
// But you can use delay() to remedy this...
function recursive () { return or(token(), delay(recursive)) }
> log(parser, label[, level]) -> Parser<T>
Logs the ParserState resulting from parser with a label.
{Parser<T>} parser - Parser to wrap.
{String} tag - Tag to use when logging messages.
{String} [level='log'] - One of 'log', 'info', 'debug', 'warn', or 'error'.
> map(fun, parser) -> Parser<T>
Transforms the resulting value of a successful application of its given parser. This function is a lot like bind, except it always succeeds if its parser succeeds, and is expected to return a transformed value instead of another parser.
{Function(U) -> T} transformer - Function called on parser's value. Its return value will be used as the map parser's value.
{Parser<U>} parser - Parser that will yield the input value.
// => 1234.5
> tag(parser, tag) -> Parser<Object<T>>
Results in an object with a single key whose value is the result of the given parser. This can be useful when you want to build ASTs or some other tagged tree structure.
{Parser<T>} parser - Parser whose value will be tagged.
{String} tag - String to use as the object's key.
// => {myToken: 'a'}
> lookAhead(parser) -> Parser<T>
Runs a given parser without consuming input, while still returning success or failure.
{Parser<T>} parser - Parser to execute.
// => 'a'
> is(predicate[, parser]) -> Parser<T>
Succeeds if predicate returns a truthy value when called on parser's result.
{Function(T) -> Boolean} predicate - Tests a parser's result.
{Parser<T>} [parser=token()] - Parser to run.
// => 'a'
> isNot(predicate[, parser]) -> Parser<T>
Succeeds if predicate returns a falsy value when called on parser's result.
{Function(T) -> Boolean} predicate - Tests a parser's result.
{Parser<T>} [parser=token()] - Parser to run.
// => 'b'
@mona/combinators
Parser combinators are at the very core of what makes something like mona shine: they are themselves parsers, but they are intended to accept other parsers as arguments, which they then use to do whatever job they're doing.
Combinators do just that: they combine parsers. They act as the glue that lets you take all those individual parsers you wrote and combine them into increasingly intricate parsers.
This package contains things like collect(), split(), and the or()/and() pair.
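The combinator idea itself is language-agnostic. As a rough sketch (in Python rather than mona's JavaScript — the names char, and_, and or_ below are made up for illustration and are not part of mona's API), a parser can be modelled as a function from input to a (value, remaining-input) pair or None, and combinators simply wire such functions together:

```python
def char(c):
    """Parser that matches exactly one given character."""
    def parser(s):
        return (c, s[1:]) if s.startswith(c) else None
    return parser

def and_(*parsers):
    """Succeeds only if every parser succeeds, keeping the last value."""
    def parser(s):
        value = None
        for p in parsers:
            res = p(s)
            if res is None:      # any failure fails the whole chain
                return None
            value, s = res       # thread the remaining input along
        return (value, s)
    return parser

def or_(*parsers):
    """Returns the result of the first parser that succeeds."""
    def parser(s):
        for p in parsers:
            res = p(s)
            if res is not None:
                return res
        return None
    return parser

print(and_(char('a'), char('b'))('ab'))  # ('b', '') -- like mona's and()
print(or_(char('a'), char('b'))('ba'))   # ('b', 'a') -- like mona's or()
```

Real combinator libraries add error reporting, backtracking control, and state threading on top, but the shape — parsers in, parser out — is the same.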
> and(...parsers, lastParser) -> Parser<T>
Succeeds if all the parsers given to it succeed, using the value of the last executed parser as its return value.
{...Parser<*>} parsers - Parsers to execute.
{Parser<T>} lastParser - Parser whose result is returned.
// => 'b'
> or(...parsers[, label]) -> Parser<T>
Succeeds if one of the parsers given to it succeeds, using the value of the first successful parser as its result.
{...Parser<T,*>} parsers - Parsers to execute.
{String} [label] - Label to replace the full message with.
// => 'bar'
> maybe(parser) -> Parser<T> | Parser<undefined>
Returns the result of parser if it succeeds; otherwise succeeds with a value of undefined without consuming any input.
{Parser<T>} parser - Parser to try.

// => 'a'
// => undefined
> not(parser) -> Parser<undefined>
Succeeds if parser fails. Does not consume input.
{Parser<*>} parser - Parser to test.
// => 'b'
> unless(notParser, ...moreParsers, lastParser) -> Parser<T>
Works like and, but fails if the first parser given to it succeeds. Like and, it returns the value of the last successful parser.
{Parser<*>} notParser - If this parser succeeds, unless will fail.
{...Parser} moreParsers - Rest of the parsers to test.
{Parser<T>} lastParser - Parser whose value to return.
// => 'b'
> sequence(fun) -> Parser<T>
Put simply, this parser provides a way to write complex parsers while letting your code look like regular procedural code. You just wrap your parsers with s(), and the rest of your code can be sequential. If the description seems confusing, see the example.
This parser executes fun while handling the parserState internally, allowing the body of fun to be written sequentially. The purpose of this parser is to simulate do notation and prevent the need for heavily nested bind calls.
The fun callback will receive a function s which should be called with each parser that will be executed; each call updates the internal parserState. The return value of the callback must be a parser.
If any of the parsers fail, sequence will exit immediately, and the entire sequence will fail with that parser's reason.
{Function -> Parser<T>} fun - A sequence callback function to execute.
> join(...parsers) -> Parser<Array<T>>
Succeeds if all the parsers given to it succeed, and results in an array of all the resulting values, in order.
{...Parser<T>} parsers - One or more parsers to execute.
// => ['a', 1]
> followedBy(parser, ...moreParsers) -> Parser<T>
Returns the result of its first parser if it succeeds, but fails if any of the following parsers fail.
{Parser<T>} parser - The value of this parser is returned if it succeeds.
{...Parser<*>} moreParsers - These parsers must succeed in order for followedBy to succeed.

// => 'a'
// => expected {a}
> split(parser, separator[, opts]) -> Parser<Array<T>>
Results in an array of successful results of parser, divided by the separator parser.
{Parser<T>} parser - Parser for matching and collecting results.
{Parser<U>} separator - Parser for the separator.
{Opts} [opts] - Optional options for controlling min/max.
{Integer} [opts.min=0] - Minimum length of the resulting array.
{Integer} [opts.max=Infinity] - Maximum length of the resulting array.
// => ['a','b','c','d']
> splitEnd(parser, separator[, opts]) -> Parser<Array<T>>
Results in an array of results that have been successfully parsed by parser, separated and ended by separator.
{Parser<T>} parser - Parser for matching and collecting results.
{Parser<U>} separator - Parser for the separator.
{Integer} [opts.enforceEnd=true] - If true, separator must be at the end of the parse.
{Integer} [opts.min=0] - Minimum length of the resulting array.
{Integer} [opts.max=Infinity] - Maximum length of the resulting array.
// => ['a', 'b', 'c']
> collect(parser[, opts]) -> Parser<Array<T>>
Results in an array of min to max matches of parser.
{Parser<T>} parser - Parser to match.
{Integer} [opts.min=0] - Minimum number of matches.
{Integer} [opts.max=Infinity] - Maximum number of matches.
// => ['a', 'b', 'c', 'd']
> exactly(parser, n) -> Parser<Array<T>>
Results in an array of exactly n results from parser.
{Parser<T>} parser - The parser to collect results for.
{Integer} n - Exact number of results to collect.
// => ['a', 'b', 'c', 'd']
> between(open, close, parser) -> Parser<V>
Results in a value between an opening and closing parser.
{Parser<T>} open - Opening parser.
{Parser<U>} close - Closing parser.
{Parser<V>} parser - Parser to return the value of.
// => 'a'
> skip(parser) -> Parser<undefined>
Skips input until parser stops matching.
{Parser<T>} parser - Determines whether to continue skipping.
// => 'b'
> range(start, end[, parser[, predicate]]) -> Parser<T>
Accepts a parser if its result is within the range of start and end.
{*} start - Lower bound of the range to accept.
{*} end - Higher bound of the range to accept.
{Parser<T>} [parser=token()] - Parser whose results to test.
{Function(T) -> Boolean} [predicate=function(x,y){return x<=y}] - Tests the range.
// => 'd'
@mona/strings
This package is intended as a collection of string-related parsers. That is, parsers that specifically return string-related data or somehow match and manipulate strings themselves.
Here, you'll find the likes of string() (the exact-string matching parser), spaces(), and trim().
> stringOf(parser) -> Parser<String>
Results in a string containing the concatenated results of applying parser. parser must be a combinator that returns an array of string parse results.
{Parser<Array<String>>} parser - Parser whose results to concatenate.
// => 'aaa'
> oneOf(matches[, caseSensitive]) -> Parser<String>
Succeeds if the next token or string matches one of the given inputs.
{String|Array<String>} matches - Characters or strings to match. If this argument is a string, it will be treated as if matches.split('') were passed in.
{Boolean} [caseSensitive=true] - Whether to match character case exactly.

// => 'c'
// => 'bar'
> noneOf(matches[, caseSensitive[, other]]) -> Parser<T>
Fails if the next token or string matches one of the given inputs. If the third parser argument is given, that parser will be used to collect the actual value of noneOf.
{String|Array} matches - Characters or strings to match. If this argument is a string, it will be treated as if matches.split('') were passed in.
{Boolean} [caseSensitive=true] - Whether to match character case exactly.
{Parser<T>} [other=token()] - What to actually parse if none of the given matches succeed.

// => 'd'
// => 'f'
// => 'frob'
> string(str[, caseSensitive]) -> Parser<String>
Succeeds if str matches the next str.length inputs, consuming the string and returning it as a value.
{String} str - String to match against.
{Boolean} [caseSensitive=true] - Whether to match character case exactly.
// => 'foo'
> alphaUpper() -> Parser<String>
Matches a single non-unicode uppercase alphabetical character.
// => 'D'
> alphaLower() -> Parser<String>
Matches a single non-unicode lowercase alphabetical character.
// => 'd'
> alpha() -> Parser<String>
Matches a single non-unicode alphabetical character.
// => 'd'
// => 'D'
> digit(base) -> Parser<String>
Parses a single digit character token from the input.
{Integer} [base=10] - Optional base for the digit.
// => '5'
> alphanum(base) -> Parser<String>
Matches an alphanumeric character.
{Integer} [base=10] - Optional base for numeric parsing.

// => '1'
// => 'a'
// => 'A'
> space() -> Parser<String>
Matches one whitespace character.
// => '\r'
> spaces() -> Parser<String>
Matches one or more whitespace characters. Returns a single space character as its result, regardless of which whitespace characters and how many were matched.
// => ' '
> text([parser[, opts]]) -> Parser<String>
Collects between min and max matches of parser. The result is returned as a single string. This parser is essentially collect() for strings.
{Parser<String>} [parser=token()] - Parser to use to collect the results.
{Object} [opts] - Options to control the match count.
{Integer} [opts.min=0] - Minimum number of matches.
{Integer} [opts.max=Infinity] - Maximum number of matches.

// => 'abcde'
// => 'bcde'
> trim(parser) -> Parser<T>
Trims any whitespace surrounding parser, and returns parser's result.
{Parser<T>} parser - Parser to match after cleaning up whitespace.
// => 'a'
> trimLeft(parser) -> Parser<T>
Trims any leading whitespace before parser, and returns parser's result.
{Parser<T>} parser - Parser to match after cleaning up whitespace.
// => 'a'
> trimRight(parser) -> Parser<T>
Trims any trailing whitespace after parser, and returns parser's result.
{Parser} parser - Parser to match after cleaning up whitespace.
// => 'a'
> eol() -> Parser<String>
Parses the end of a line.
// => '\n'
@mona/numbers
If you ever need a parser that will take strings and turn them into the numbers you want them to be, this is the place to look. Parsers in this package include integer(), float(), and ordinal() (which parses English ordinals (first, second, third) into numbers).
> natural(base) -> Parser<Integer>
Matches a natural number. That is, a number without a positive/negative sign or decimal places, and returns a positive integer.
{Integer} [base=10] - Base to use when parsing the number.
// => 1234
> integer(base) -> Parser<Integer>
Matches an integer, with an optional + or - sign.
{Integer} [base=10] - Base to use when parsing the integer.
// => -1234
> real() -> Parser<Float>
Parses a floating point number.
// => -1.234e-7
> cardinal() -> Parser<Integer>
Parses English cardinal numbers into their numerical counterparts.
// => 2000
> ordinal() -> Parser<Integer>
Parses English ordinal numbers into their numerical counterparts.
// 100005
> shortOrdinal() -> Parser<Integer>
Parses shorthand English ordinal numbers into their numerical counterparts. Optionally allows you to disable the correct-suffix checks and let any apparent ordinal through.
{Boolean} [strict=true] - Whether to accept only appropriate suffixes for each number (if false, 2th parses to 2).
// => 5
Simply creating a parser is not enough to execute it, though. We need to use the parse function to actually execute the parser on an input string:

mona // => 'foo'
mona // => throws an exception
mona // => 'a'
mona // => error, unexpected eof.
This post has (errr, “These posts have”) been a long time coming. They are based off the Mix 11 and TechEd 2011 sessions I gave a loooooong time ago, but despite my best efforts I’ve not been able to post anything to my blog for some time. Partly it was due to a family vacation in Yellowstone National Park (go there! it’s incredible!) and partly it’s due to being rather busy getting the finishing touches on Mango and preparing for what’s next. But mostly it’s due to the fact that every time I sit down to blog about it, I add some new feature to the code or make it more extensible… this is almost (almost!) a complete app at this stage, but it’s not quite good enough to put in the marketplace.
This series of three posts will demonstrate the basics of background agents in Mango, and will throw in a couple of helper libraries as well (that is what parts 2 and 3 are about). The sample app, as seen in the aforementioned Mix / TechEd demos, is a simple Twitter viewer that uses a background agent to periodically show a toast and update the application’s tile if there are new tweets since the last time the app or the agent ran.
The foreground app
The foreground app for this particular demo isn’t very interesting. It shows a list of recent tweets for a given search term (the hard-coded default is, of course, wp7dev) and if you tap on any of the items it shows the tweet in full, complete with the user’s background image and the ability to follow any links in the tweet. It’s not smart enough to link-ify #hashtags or @usernames, etc. so it doesn’t suffice as a real twitter client; it’s just enough for someone to casually keep up with a search term.
The app looks like this:
Nothing too special. The only interesting bit on the app is the “enable agent” check box at the bottom-left of the screen, which is where the magic happens, as they say.
When you click the “enable agent” button, it runs this code:
/// <summary>
/// Starts the search agent and runs it for 1 day
/// </summary>
private void StartSearchAgent()
{
PeriodicTask task = new PeriodicTask("twitter");
task.Description = "Periodically checks for new tweets and updates the tile on the start screen." +
" Will also show a toast during normal 'waking' hours (won't wake you up at night!)";
task.ExpirationTime = DateTime.Now.AddDays(1);
try
{
ScheduledActionService.Add(task);
}
catch (InvalidOperationException)
{
MessageBox.Show("Can't schedule agent; either there are too many other agents scheduled or you have disabled this agent in Settings.");
}
}
The code is pretty straightforward:
- Create a PeriodicTask, which is the main class you use to describe the behaviour of the background agent. A periodic task runs every 30 minutes for about 25 seconds, give or take.
- If you’d prefer to run for much longer – but only when the phone is plugged in and on WiFi – you can create a ResourceIntensiveTask instead
- Give the task a Description, which will show up in the system UI under settings -> applications -> background tasks. This text will explain to the user what your agent is doing and will help them determine whether or not to disable the agent at some future point in time.
- Set the ExpirationTime for 1 day from now. You can specify an expiration date up to 14 days in the future, but in order to be kind to the battery this app defaults to 1 day.
- Try to add the task to the ScheduledActionService. This might fail if the user has disabled the agent in settings or if there are already too many other agents running on the phone.
- There’s not really a good way to tell which it is, but the good news is that the remedy is always to tell the user to go to the settings page and enable your agent (and / or disable some other agents).
And that’s all there is to it! Now the system will happily run your agent every ~30 minutes and let it do its thing. Hooray!
The background agent
The agent is also pretty simple. As you can read about in the docs, you basically add a background agent project to your solution and then implement the work you want to do in the OnInvoke method of your class. A few notes about agents up-front:
- Your agent must derive from ScheduledTaskAgent and you must override OnInvoke
- Your agent must be correctly registered in WMAppManifest.xml of the foreground project
- Visual Studio does this for you automatically, but if you change the name of the class or of the project you will need to update the XML file
- Cheat sheet: Specifier = ScheduledTaskAgent, Source = [assembly name], Type = [fully qualified type name], Name = [whatever you want]
- Your agent project (and any projects it references) must be referenced by the foreground project, even if you never use it (this is so that it is correctly added to the XAP)
- Your foreground app never explicitly calls into the agent (they are separate processes) but if you want you can instantiate an instance of the agent class inside your foreground app in order to share code (using a shared library might be a better approach, though)
Just in case you missed it (and for search-engine-friendliness):
- If you change your agent’s project name, namespace, or class name, you must update WMAppManifest.xml for the new metadata
- If you reference another assembly from the agent, you must also reference it from the foreground app to ensure it makes it into the XAP
If you fail to do this, your agent will not load at all. If you run it under the debugger with the break on all exceptions turned on, you will get a FileNotFoundException stating that it can’t find your assembly or one of the assemblies it relies on.
Anyway, back to our story. The agent’s work looks like this (we’ll get into the guts of it later):
/// <summary>
/// Called by the system when there is work to be done
/// </summary>
/// <param name="Task">The task representing the work to be done</param>
protected override void OnInvoke(ScheduledTask Task)
{
// Read the search term from IsoStore settings; in a more complex scenario you might
// have multiple tiles and read each tile's term from a different setting
IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;
TweetSearchTerm = settings["searchTerm"].ToString();
// Get the tweets from the library
TwitterHelper.GetFirstTweet(TweetSearchTerm, tweet =>
{
// Don't do any work if this is invalid or the same as the most recently-seen tweet
if (tweet.Id != Tweet.InvalidId && tweet.Id != settings["lastTweetId"].ToString())
{
settings["lastTweetId"] = tweet.Id;
settings.Save();
// Show the toast
ToastHelper.ShowToast(tweet.AuthorAlias, tweet.TweetText, new Uri("/DetailsPage.xaml?id=" + tweet.Id, UriKind.Relative));
// Update the tile
TileHelper.UpdateTile(new ExtendedTileData
{
FrontText = "latest tweet",
FrontTitle = TweetSearchTerm,
BackText = tweet.TweetText,
BackTitle = tweet.AuthorAlias,
BackImage = tweet.AvatarUri,
});
}
// Done!
NotifyComplete();
});
}
What’s going on here?
- First we get the search term from the ApplicationSettings (this is set in the foreground app)
- As noted above, this is currently set to wp7dev
- Then we call a handy-dandy method GetFirstTweet to retrieve the first tweet that matches our search term
- This is an asynchronous call, of course, since it goes out to the web
- When the call returns, we check if the tweet is valid and whether it is different from the last tweet we saw (which is also retrieved from ApplicationSettings)
- If the tweet is invalid or not new, we skip to the end
- The current tweet is saved as the last-seen tweet, so we don’t show it again next time
- We use a helper method to show a toast to the user with the tweet’s author and text. We also use a deep link into the application so that tapping on the toast will launch directly into the details page for that toast
- The same technique as used in the alarms and reminders example is used to show a “home” button in the UI in this case
- We use another helper method to update the primary tile with the tweet information, including the author’s avatar (this will appear on the back of the tile)
- Finally, we call the all-important NotifyComplete method to let the system know we completed successfully.
The importance of calling NotifyComplete at the right time cannot be overstated!
- Failure to call NotifyComplete at all will cause the system to think you timed-out, and then in the foreground if you query LastExitReason you will get a failure code ExecutionTimeExceeded
- Calling NotifyComplete too early will immediately terminate your process, leaving any remaining work on background threads incomplete (although the system will happily report that you Completed successfully!)
Luckily there is a simple way to deal with this, as we shall see in Part 2 of the post.
A handy tip for agent debugging
One of the problems with debugging agents is that the very act of using the debugger changes the way your agent executes. In particular, when the debugger is attached both the runtime quota and the memory quota are ignored, leaving you with infinite time and memory to (ab)use. This is necessary for the debugger to work correctly (imagine trying to complete a debug session in only 25 seconds!) but introduces issues if the thing you’re trying to debug is a memory and / or execution time issue.
Now back in the olden days – when we used to have to walk to school uphill in both directions – we didn’t have fancy-schmancy graphical debuggers. We had printf! Now printf (or its modern debugging equivalent, Debug.WriteLine) doesn’t really help with an agent if you can’t have the debugger attached, so there’s not much you can do. Obviously you can write out logs to a log file and then read them off the device with the Isolated Storage Explorer tool, or if you’re brave enough you can enable console spew from the emulator, but if you just want to display a tiny bit of text – like, say, the amount of memory you’re currently using or the amount of time you’ve been executing… why not use a toast?
The agent includes a method WriteDebugStats that is used to write some memory statistics to a toast (and to a tile for good measure!). Because toasts “stack up” in the shell and are displayed for several seconds before being replaced by the next toast, you can actually queue up several messages inside toasts that can be used to convey debug information while not under the debugger.
The method looks like this, using the same helper methods as before to show the toast and a tile update (the memory values are all based on calls to ApplicationPeakMemoryUsage API):
/// <summary>
/// Writes out debug stats to a toast and a secondary tile (if it exists)
/// </summary>
void WriteDebugStats()
{
const double ONE_MEGABYTE = 1024 * 1024;
double initial = (double)initialMemory / ONE_MEGABYTE;
double beforeTile = (double)beforeTileMemory / ONE_MEGABYTE;
double final = (double)finalMemory / ONE_MEGABYTE;
TimeSpan duration = DateTime.Now - startTime;
// Show a toast
ToastHelper.ShowToast("Mem / time", string.Format("{0:#.#}-{1:#.#}-{2:#.#}MB / {3:#.#}s", initial, beforeTile, final, duration.TotalSeconds), null);
// Update the debug tile (if it is pinned)
TileHelper.UpdateTile("DEBUG_TILE", "debug info", "debug info", string.Format(TweetSearchTerm + ": {0:#.#}MB, {1:#.#}s", final, duration.TotalSeconds));
}
If the app is running in debug mode, it will display a “debug” button on the main page that will pin a secondary tile to start that is used to display the debug info.
Another handy hint – use the new LaunchForTest API to launch your agent whenever you want – this replaces the old (Beta 1) behaviour of launching the agent whenever Add or Find was called and the debugger was attached (it was a rather annoying “feature”). You can even call LaunchForTest from the background agent itself, letting it run in perpetuity (but, of course, only on side-loaded dev projects; a shipping marketplace app can’t call this method). If you run the project in “Debug” mode you will see a button that lets you run the agent immediately (there is a short delay, giving you enough time to exit the app so that the toast will appear).
That’s it for Part 1 – the project is zipped up below, and we’ll discuss more of the project in parts 2 and 3.
Great posts, Peter!
I have some issues with your sample though: first, it says "Microsoft.Unsupported" namespace doesn't exist, and I had to comment out the following line from App.InitializePhoneApplication: TiltEffect.SetIsTiltEnabled(RootFrame, true); as it was missing (I found another post in your blog about "TiltEffect", I'll read that one as well).
Anyways, thanks for great articles!
Thanks Iurii – yes the HintPath was relying on the tilt assembly being somewhere else. I updated the ZIP to include the DLL directly.
Hi peter, nice article as usual!
I noticed that you update the IsolatedStorageSettings in your bgAgent code.
I do something similar in my app, but sometimes after the bgAgent completes and my app starts, IsolatedStorageSettings is empty.
What can cause this? Maybe if the bgAgent is forced to stop while settings.save is in progress?
It might be due to that, yes. One option would be to use a different file (not AppSettings) and then do a copy / re-name operation to avoid partial updates.
Hi Peter,
Great guide but i just have one question. I thought i heard somewhere that the longest an agent can run for is 2 weeks. I need to be able to run a task once every n days but that could be more than 14 days. It could also be yearly. For example an app that updates a tile once or twice a year to remind you of something.
Is it possible to do this?
Hi nitro52, an agent can only run for 14 days at a time, but you can renew your 14 day subscription every time the foreground application is run. If you want to remind people of something, I suggest you look into the Alarms and Reminders feature, which I discuss here: blogs.msdn.com/…/alarms-and-notifications-in-mango.aspx
hi,
Just a quick question. I wrote a short code for generating 0, 1 randomly in order to simulate a coin-tossing game.
I compiled and executed the program, and it just gave out 1 result all the time!!!

Code:
//This program will simulate 100 times
//of coin-tossings. Then it will calculate
//the frequency how many times it's HEAD or TAIL
#include <iostream>
#include <iomanip>
#include <cstdlib>
using namespace std;

int flip(void); //return 0 for T(tail=0) and H(head=1)

int main()
{
    int headCount=0;
    int tailCount=0;
    for (int i=1; i<=100; i++) //simulate flipping 100 times
    {
        switch(flip())
        {
        case (1):
            cout << " " << "H";
            headCount++;
            break;
        case (0):
            cout << " " << "T";
            tailCount++;
            break;
        }
        if (i%10==0)
            cout << endl;
    }
    cout << "Head " << headCount << endl;
    cout << "Tail " << tailCount << endl;
    return 0;
}

//Function that does flipping the coin
int flip(void)
{
    int outcome;
    outcome=rand()%2;
    return outcome;
}
I'm wondering probably the way I use rand() is not good enough?
thanks!
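The usual culprit in cases like this is seeding: C's rand() behaves as if srand(1) had been called at startup, so without a call such as srand(time(0)) at the top of main, every run replays the identical H/T sequence. The same determinism is easy to demonstrate in Python, whose generator behaves the same way once seeded (this is a general illustration, not C code):

```python
import random

random.seed(1)                                    # fixed seed, like C's implicit srand(1)
first = [random.randrange(2) for _ in range(10)]  # ten "coin flips"

random.seed(1)                                    # re-seed with the same value...
second = [random.randrange(2) for _ in range(10)]

print(first == second)  # True -- the same sequence is replayed every "run"

random.seed()  # seed from system entropy instead: now each run differs
```

In the C++ program above, the equivalent fix is one line — seed once with a varying value (e.g. the current time) before the loop, never inside it.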
A couple of years ago the BBC distributed nearly a million BBC Micro:bits to schools in the UK as part of the BBC’s Make it Digital initiative. It was generally considered a resounding success, as described for example in this article from the BBC website.
You can program the BBC micro:bit in several ways, including using Swift and a block editor. In this article I want to show how to get started using Python with your BBC micro:bit.
You can purchase a starter kit containing everything you need to use a BBC micro:bit with your desktop or laptop computer here:
What follows is an introduction to getting started using Python to program your BBC micro:bit.
Setting up your BBC micro:bit
It’s basically a matter of plug and play to get set up with your BBC micro:bit. There is a Quick Start Guide here. In a nutshell, you just need to:
- attach the battery pack to the BBC micro:bit
- connect it to your PC/Mac with the USB cable
- this will create a microbit “drive” on your machine
Running your first Program
Now we can run programs by sending files to the microbit drive just created above
- go to the online Python editor
- download the sample program you find there and save it somewhere convenient with a name ending in .hex. E.g. download it to a folder called microbit projects on your desktop and name it hello.hex.
- On Windows, right click this file and select send to -> microbit.
- On Mac, drag and drop the file into the microbit “drive”.
- That’s it – after a few seconds the hello world program should run on the BBC micro:bit.
Each time you create a new program in the editor, you should give it a new name when you download it, and build up a collection of files for later use as you go.
Python for the BBC micro:bit
So now some examples to give you a feel for what’s possible and how Python works on the BBC micro:bit.
The Hello world program you just ran looks like this:

from microbit import *

while True:
    display.scroll('Hello, World!')
    display.show(Image.HEART)
    sleep(2000)
You are going to need to frequently consult the reference docs as you learn, so keep this page open. Please note that the intro mentions the mu editor, which you will probably want to download and use eventually, but this article refers to the online editor, as my aim is to get you up and running with the minimum amount of setup (aka yak shaving in the trade).
So why does the above code work?
- while True is the way we tell Python “just keep on doing this,” so everything indented within the while block repeats indefinitely.
- display.scroll('Hello, World!') scrolls “Hello, World” across the display.
- display.show(Image.HEART) displays the pre-existing image called “HEART”
- sleep(2000) waits for 2000 milliseconds (two seconds)
- and the whole thing loops “forever”…
Images
Let’s step back a bit and just display a simple image. You can find a list of available images here.
from microbit import *

display.show(Image.HAPPY)
Easy huh?
Try some others for yourself.
What about alternating two images?
Well, this requires a loop, so we bring back while True:

from microbit import *

while True:
    display.show(Image.HAPPY)
    sleep(1000)
    display.show(Image.SAD)
    sleep(1000)
(Don’t forget to name your downloaded code with the .hex extension before sending it to the BBC micro:bit)

BTW, why do we need the second sleep(1000) instruction? If you can’t see why, try removing it and seeing what happens.
DIY images
If you want to make your own images, you can do so with the following syntax. The numbers are the brightness values (from 0-9) for each LED in the 5x5 grid; 0 means off.
from microbit import *

my_diagonal = Image("90000:"
                    "09000:"
                    "00900:"
                    "00090:"
                    "00009")
display.show(my_diagonal)
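To see what the image string encodes without needing the hardware, here is a plain-Python sketch (the parse_image helper is my own illustration, not part of the micro:bit API) that turns the colon-separated spec into a 5x5 grid of brightness values:

```python
# Plain-Python illustration (no micro:bit needed): each
# colon-separated group is one row of the display, and each
# digit is one LED's brightness, from 0 (off) to 9 (full).
def parse_image(spec):
    return [[int(ch) for ch in row] for row in spec.split(":")]

grid = parse_image("90000:09000:00900:00090:00009")
for i, row in enumerate(grid):
    assert row[i] == 9  # the lit diagonal runs top-left to bottom-right
```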
OK, so there is FAR more to explore on the BBC micro:bit. In future articles we will cover event loops, sensors and more, but for now I hope this was a helpful introduction to the fun and creativity that can be experienced with the BBC micro:bit.
https://compucademy.net/bbc-microbit-introduction/
XSL Jumpstart: Creating Style Sheets
XSL Jumpstart
In this chapter
- XSL Processing
- Creating the Style Sheet
- Templates and Template Rules
- Understanding Patterns
- Creating Text
- Getting the Content of an Element
- Outputting the Results
- Applying Style Sheets Dynamically
- Retrieving Attributes
- Adding New Template Rules
- In Practice
- Troubleshooting
XSL Processing
This chapter is designed to give you a quick start into creating XSL style sheets. Therefore, a minimum of theory will be presented. However, before you can create even your first style sheet, it is important to understand the basics of style sheet processing. As with the rest of this book, there is an emphasis on creating XSL transformations.
When an XML document is loaded, the parser takes the document and scans all of its components, which may include
- Elements
- Attributes
- Entities
- CDATA sections
- Processing instructions
As each markup component is scanned, it is placed in a hierarchical tree structure in memory. Once the entire document is scanned, the document tree can be accessed through Application Program Interfaces (APIs) like the Document Object Model (DOM).
In the case of XSL (both formatting objects and transformations), you can write style sheets that also access this in-memory tree. From an XSL perspective, this is called the source tree because it represents the source document. The goal in XSL processing is to create a second tree that contains the output you desire. This second tree is called the result tree. To create the result tree, you use rules in your XSL style sheet (called templates) to walk through the source tree, select components of the tree you wish to process, and transform them. The result of applying a style-sheet template is placed in the result tree. In the case of formatting objects, the result tree will contain a formatted version of your XML document. In the case of a transformation, the result tree will contain the transformed XML document.
To clearly understand how this process works, consider the XML document in Listing 2.1.
Listing 2.1 A Typical Invoice Record Represented as an XML Document
<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="invoice.xsl"?>
<invoice>
  <clientName>ACME Programming Company</clientName>
  <contact>Kris Butler</contact>
  <address>
    <streetAddress>123 Fourth Street</streetAddress>
    <city>Sometown</city>
    <state>CA</state>
    <zip>12345</zip>
    <province />
    <country>USA</country>
  </address>
  <descriptionOfServices>
    XML Training
  </descriptionOfServices>
  <costOfServices>1000</costOfServices>
</invoice>
This XML document, which may have been the result of some database operation, represents a typical invoice containing client information, a description of services, cost of services, and so on. Although in practice this document might or might not be stored as a physical file, you may give it the filename invoice.xml for the purposes of running this example.
For this first example, you would like to transform this document into HTML so that you can display the information in a browser.
Conceptually, the source tree looks like Figure 2.1.
Figure 2.1 This conceptual view of the source tree shows how an XML document is broken down into its constituent parts.
Now you would like to walk this tree and create the result tree shown in Figure 2.2.
Notice that the result tree in Figure 2.2 does not contain XML elements. Rather it contains HTML elements.
How the result tree gets streamed into a document depends on how the style sheet is applied. Recall from Chapter 1, "The Essence of XSL," that the style sheet may be part of a static reference in the XML document instance. In this case, the output is handled by the XML parser. On the other hand, the style sheet may be applied dynamically by an application program. In this case, it is up to your program to stream the results back out to a file, a browser, or some other device.
Figure 2.2 The output from the XSLT processor is a result tree. In this case, the result tree represents an HTML document.
Creating the Style Sheet
Let's look at a typical style sheet that might be used to transform the XML document in Listing 2.1 into HTML. Listing 2.2 shows the style sheet.
Listing 2.2 This Transformation (invoice.xsl) Takes Listing 2.1 and Converts It into HTML for Viewing in a Browser
<?xml version="1.0" ?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="html" />
<!-- Root template rule -->
<xsl:template match="/">
  <HTML>
    <HEAD>
      <TITLE>First XSLT Example</TITLE>
    </HEAD>
    <BODY>
      <P><B>Company: </B>
        <xsl:value-of select="invoice/clientName" />
      </P>
      <P><B>Contact: </B>
        <xsl:value-of select="invoice/contact" />
      </P>
      <P><B>Services Rendered: </B>
        <xsl:value-of select="invoice/descriptionOfServices" />
      </P>
      <P><B>Total Due: </B>
        $<xsl:value-of select="invoice/costOfServices" />
      </P>
    </BODY>
  </HTML>
</xsl:template>
</xsl:stylesheet>
For simplicity, the goal for this style sheet is to transform just four elements from the source document: clientName, contact, descriptionOfServices, and costOfServices. This also brings up a good point: You only have to transform those parts of a document you wish. Therefore, this transformation represents a departure from the structure of the original source document.
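To make the "walk the source tree, build a result tree" idea concrete, here is an illustrative sketch in Python (my choice for illustration, not part of the chapter) that performs the same four-element transformation by hand with the standard-library ElementTree module:

```python
# Illustration only (not XSLT): parse the invoice into a source
# tree, select the four elements of interest, and build an HTML
# "result" string, mirroring what the templates in Listing 2.2 do.
import xml.etree.ElementTree as ET

INVOICE = """<invoice>
  <clientName>ACME Programming Company</clientName>
  <contact>Kris Butler</contact>
  <descriptionOfServices>XML Training</descriptionOfServices>
  <costOfServices>1000</costOfServices>
</invoice>"""

source_tree = ET.fromstring(INVOICE)
result = "".join(
    "<P><B>%s: </B>%s</P>" % (label, source_tree.findtext(tag).strip())
    for label, tag in [("Company", "clientName"),
                       ("Contact", "contact"),
                       ("Services Rendered", "descriptionOfServices"),
                       ("Total Due", "costOfServices")]
)
print(result)
```

An XSLT processor does essentially this walk for you, driven by the template rules rather than hand-written selection code.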
The first thing you'll notice about this XSLT style sheet is that the first line is an XML declaration indicating that it's an XML document. That means this style sheet is a well-formed XML document that must validate against an XSL DTD or schema. Where does it reference the schema? In most XML documents, a DOCTYPE declaration is used to reference the schema. However, in XSL, a namespace attribute in the <stylesheet> element refers to the schema.
A Word on Namespaces
The namespaces mechanism allows you to uniquely identify element types that you create. For example, imagine that you have created an XML document describing a book chapter. You might create element types such as <chapterTitle>, <subHead1>, <subhead2>, <chapterText>, <codeListing>, <sidebar>, <footer>, and so on. Now imagine that you want to merge the content from this document with a document taken from a training manual. That document might also use element type names such as <chapterText> or <sidebar>, but define a completely different structure. Ultimately, you wind up with name collisions between your document and the document you're attempting to merge.
From the perspective of the document author, a namespace is a prefix you can add to your elements that uniquely identify them. Typically, a namespace corresponds to a Uniform Resource Identifier (URI) of an organization, such as your company's Web address, or that of a specification document. Because these URIs can contain long path names, namespace declarations allow you to create an alias that is a shorthand notation for the fully qualified namespace. For example, I might create a document that sets up the following
xmlns:myNS="http://www.example.com/myNS"
The xmlns portion of the statement says, "I'm creating an XML namespace." The :myNS is optional and is user defined. When included, this sets up the alias for the longer URI. The portion after the equals sign is the fully qualified URI. So, this statement creates the namespace and assigns it to the alias myNS.
The following shows how the namespace is used:
<myNS:chapter>
  <myNS:chapterTitle> ... </myNS:chapterTitle>
  <myNS:chapterText> ... </myNS:chapterText>
</myNS:chapter>
As you can see, prefixing elements with myNS helps to create a unique name for the elements in this document.
In XSL, the <stylesheet> element requires that you set up the XSL namespace that points to a URI. The declaration tells the XML processor that this is an XSL style sheet, not just another XML document. The URI that the namespace points to varies depending on the version of XSL you're using. The current XSL specification requires conforming XSLT style sheets to point to http://www.w3.org/1999/XSL/Transform.
TIP
Note in Listing 2.2 that an alias, xsl, is established. Because the alias is user defined, you can choose any alias name you wish. However, xsl is the de facto name used by virtually all style sheet developers.

Also, because the alias is optional, it is not necessary to include it at all. Omitting the alias means you can also omit the xsl: prefix on all XSL element type names. This can save you some typing and eliminate a few hundred bytes from the size of your document. However, be aware that either the source document or your transformation may contain element type names that conflict with XSL's naming conventions. Therefore, it is always prudent to include the xsl alias in your style sheets.
CAUTION
Before XSL became a W3C Recommendation in November 1999, processors were forced to use non-standard URIs in their namespace declarations. If you run into an error when using the current namespace, check the version of the XSL processor you are using and consider the following alternative namespaces.
XSL processors that follow the December 1998 working draft use the following namespace definition:
xmlns:xsl = ""
Interim processors (such as MSXML 1) use the following:
xmlns:xsl = ""
The November 1999 (current) specification requires the following:
xmlns:xsl = "http://www.w3.org/1999/XSL/Transform"
Returning to Listing 2.2, the <stylesheet> element is the root element of the document and is therefore the container for the rest of the style sheet. You will learn about all of the elements that <stylesheet> supports in Chapter 4, "The XSL Transformation Language." However, one important element type is <output>, which allows style sheet authors to specify how they wish the result tree to be output. Currently, you can specify the result tree to be output as XML, HTML, or as text. Listing 2.2 instructs the processor to output the result tree as HTML.
http://www.informit.com/articles/article.aspx?p=26312&seqNum=6
LogGenerator is used in older versions for generating statistics on how many page views you have per page on your website. Normally you don't use it...
You don't need it for using log4net no...
To get statistics on usage, use GA (Google Analytics) instead...free, and a thousand times better. There is a nice addon to give Editors information about statistics directly in edit mode and on dashboard.
Hope that helps!
But when I am migrating it from 6.1 to 7.0, I get the below-mentioned error in code.
The type or namespace name 'LogGenerator' does not exist in the namespace 'EPiServer.Web.WebControls' (are you missing an assembly reference?)
https://world.optimizely.com/forum/legacy-forums/Episerver-7-CMS/Thread-Container/2014/6/Episerver-Migration-from-52R2-to-70-/
log.rofl(‘Fun with Groovy metaprogramming’)
Recently I saw a post by someone (I think it was @jbarnette, but it was retweeted to me) suggesting that there should be some alternate log levels, like fyi, omg, or even wtf. I thought that was pretty funny, but then it occurred to me I could probably implement them using Groovy metaprogramming.
As a first attempt, consider the following simple example that adds the fyi and omg methods to java.util.logging.Logger:

import java.util.logging.Logger

Logger.metaClass.fyi = { msg -> delegate.info msg }
Logger.metaClass.omg = { msg -> delegate.severe msg }
For those who haven’t used Groovy much, the metaClass property is associated with every class in Groovy, and allows you to add methods and properties to the class. Here the fyi method is defined by assigning it to a one-argument closure whose implementation is to invoke the (existing) info method in Logger, with the msg argument. Likewise, omg is assigned to the severe method. Therefore, an invocation like:

Logger log = Logger.getLogger(this.class.name)
log.fyi 'for your information'
log.omg 'oh my goodness'
results in
Dec 12, 2011 10:09:02 PM java_util_logging_Logger$info call
INFO: for your information
Dec 12, 2011 10:09:02 PM java_util_logging_Logger$severe call
SEVERE: oh my goodness
The methods work, but the output isn’t really what I want. The messages get passed through, but the output shows INFO and SEVERE rather than FYI and OMG.
It turns out it takes a bit of work to define a custom log level. Levels are defined using the java.util.logging.Level class, which predefines levels like Level.INFO, Level.WARNING, and Level.SEVERE. The Level class has a protected constructor which can be used to make new levels. I therefore added a class called CustomLevel, as follows:

import java.util.logging.Level

class CustomLevel extends Level {
    CustomLevel(String name, int val) {
        super(name, val)
    }
}
Each level gets an integer value. On my Windows 7 system (sorry) using JDK 1.6, the actual values of some of the defined levels are:

import java.util.logging.Level

println "$Level.INFO: ${Level.INFO.intValue()}"
println "$Level.WARNING: ${Level.WARNING.intValue()}"
println "$Level.SEVERE: ${Level.SEVERE.intValue()}"
INFO: 800
WARNING: 900
SEVERE: 1000
My second attempt was then to define a Groovy category, so that I could replace a couple of the existing levels in a controlled fashion.
import java.util.logging.Level
import java.util.logging.Logger

class SlangCategory {
    static String fyi(Logger self, String msg) {
        return self.log(new CustomLevel('FYI', Level.INFO.intValue()), msg)
    }
    static String lol(Logger self, String msg) {
        return self.log(new CustomLevel('LOL', Level.WARNING.intValue()), msg)
    }
}

Logger log = Logger.getLogger(this.class.name)
use(SlangCategory) {
    log.fyi 'this seems okay'
    log.lol('snicker')
}
Each of the logging methods in the Logger class (like info() or warning()) delegates to the log() method, which takes two arguments: an instance of Level, and a message String. I therefore used the category to replace the INFO and WARNING levels with FYI and LOL. The output is now:
Dec 12, 2011 10:20:29 PM sun.reflect.NativeMethodAccessorImpl invoke0
FYI: this seems okay
Dec 12, 2011 10:20:29 PM sun.reflect.NativeMethodAccessorImpl invoke0
LOL: snicker
Once again, this is just replacing existing levels, though it does at least have the new level name in the output string.
To really do this right, though, I wanted to be able to define new levels arbitrarily without having to hardwire them. That meant overriding the methodMissing method in the metaClass, using what Jeff Brown describes as the “intercept, cache, invoke” pattern for metaprogramming. Here’s the result, which I’ll explain after the code.

import java.util.logging.*

Logger.metaClass.methodMissing = { String name, args ->
    def impl = { Object... varArgs ->
        int val = Level.WARNING.intValue() +
            (Level.SEVERE.intValue() - Level.WARNING.intValue()) * Math.random()
        def level = new CustomLevel(name.toUpperCase(), val)
        delegate.log(level, varArgs[0])
    }
    Logger.metaClass."$name" = impl
    impl(args)
}

Logger log = Logger.getLogger(this.class.name)
log.wtf 'no effin way'
log.whoa 'dude, seriously'
log.rofl "you're kidding, right?"
The methodMissing method of the metaClass takes two arguments: the name of the method, and the arguments passed to it. Whenever you invoke a method that doesn’t exist, methodMissing gets invoked. That’s the “intercept” part.
The implementation is to define a closure that takes any number of arguments. Inside the closure, I computed a random value between Level.WARNING and Level.SEVERE, and then used that value to instantiate a custom level. The custom level and the message were passed to the log method on the closure’s delegate property (in this case, the logger) to log the message at the new level.
Next, the impl closure is assigned to the "$name" method on the metaClass (the GString evaluates the name variable; otherwise the method added would just be called name). That’s the “cache” part. Finally, the implementation is called, which is the “invoke” part.
Now I can use a log level with whatever name I want. The output of this script is
Dec 12, 2011 10:31:35 PM sun.reflect.NativeMethodAccessorImpl invoke0
WTF: no effin way
Dec 12, 2011 10:31:35 PM sun.reflect.NativeMethodAccessorImpl invoke0
WHOA: dude, seriously
Dec 12, 2011 10:31:35 PM sun.reflect.NativeMethodAccessorImpl invoke0
ROFL: you're kidding, right?
The demos are nice, but this really ought to be tested. I’m reasonably comfortable with this level (snicker) of metaprogramming, but I’d feel a lot better if I had a real test for it.
That took a fair amount of digging. It turns out that the inimitable Dierk Koenig (lead author of Groovy in Action, known as #regina on Twitter; second edition available now through the Manning Early Access Program) wrote a class called groovy.lang.GroovyLogTestCase. That class has a static method called stringLog. The GroovyDocs say:
“Execute the given Closure with the according level for the Logger that is qualified by the qualifier and return the log output as a String. Qualifiers are usually package or class names. Existing log level and handlers are restored after execution.”
It took me a while to figure out how to use that, but eventually I got it to work. It automatically captures the console appender output, so the resulting test looks like this:

import java.util.logging.Level
import java.util.logging.Logger

class LoggingTests extends GroovyLogTestCase {
    String baseDir = 'src/main/groovy/metaprogramming'

    void testWithoutCustomLevel() {
        def result = stringLog(Level.INFO, without_custom_levels.class.name) {
            GroovyShell shell = new GroovyShell()
            shell.evaluate(new File("$baseDir/without_custom_levels.groovy"))
        }
        assert result.contains('INFO: for your information')
        assert result.contains('SEVERE: oh my goodness')
    }

    void testSlangCategory() {
        def result = stringLog(Level.INFO, use_slang_category.class.name) {
            GroovyShell shell = new GroovyShell()
            shell.evaluate(new File("$baseDir/use_slang_category.groovy"))
        }
        assert result.contains('FYI: this seems okay')
        assert result.contains('LOL: snicker')
    }

    void testEMC() {
        def result = stringLog(Level.INFO, use_emc.class.name) {
            GroovyShell shell = new GroovyShell()
            shell.evaluate(new File("$baseDir/use_emc.groovy"))
        }
        assert result.contains('WTF: no effin way')
        assert result.contains('WHOA: dude, seriously')
        assert result.contains("ROFL: you're kidding, right?")
    }
}
I should mention a couple of minor points. First, the class associated with a script has the same name as the file containing it, so the class names in the stringLog calls are the script names. Second, in case you were wondering, the emc part of the script name stands for ExpandoMetaClass.
I spent a very pleasant evening working on this, and learned a few things:
1. Groovy metaprogramming is fun,
2. Now I know how to use
GroovyLogTestCase, which is not at all well documented, and
3. I’ll go to all sorts of trouble to avoid working on what I’m supposed to be working on, especially if it involves a joke. 🙂
I added all this to my book’s source code. What book, you ask? Why, Making Java Groovy, available through the Manning Early Access Program (MEAP) at. I don’t know, however, if I’ll have room in the text to include all this.
log.marketing('Please forgive the mandatory advertising for my book')
https://kousenit.org/2011/12/
The final keyword prevents a variable, method, or class from having its original value or definition modified or replaced.
The final keyword can be used in the following three contexts:
If a variable is declared final, it can be assigned a value only once. The value of a final variable cannot be modified once it has been set.
A variable declaration includes the declaration of a local variable, a formal parameter of a method/constructor, an instance variable, and a class variable.
To declare a variable as final, use the final keyword in the variable's declaration.
final int YES = 1;
We can set the value of a final variable only once.
There are two ways to initialize a final variable: at the time of its declaration, or later (a so-called blank final variable). However, we must initialize the final variable before it is read for the first time.
You can declare a local variable final. If you declare a local variable as a blank final variable, you must initialize it before using it.
We can declare a parameter final. A parameter is initialized automatically with the value of the actual parameter when the method or the constructor is invoked.
Therefore, you cannot change the value of a final formal parameter inside the method's or the constructor's body.
We can declare an instance variable final and blank final.
A blank final instance variable must be initialized once and only once when any of the constructors of the class is invoked.
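As a sketch (the class and field names here are my own illustration), the two initialization styles for final fields look like this:

```java
// Circle has one final class variable initialized at declaration
// and one blank final instance variable assigned in the constructor.
public class Circle {
    static final double PI = 3.14159;  // initialized at declaration
    final double radius;               // blank final instance variable

    Circle(double radius) {
        this.radius = radius;          // assigned exactly once, in the constructor
    }

    double area() {
        return PI * radius * radius;
    }

    public static void main(String[] args) {
        System.out.println(new Circle(2.0).area());  // prints 12.56636
    }
}
```

Every constructor of the class must assign the blank final field exactly once, or the code does not compile.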
We can declare a class variable final and blank final. We must initialize a blank final class variable in one of the static initializers.
A reference variable stores the reference of an object. A final reference variable means that once it references an object (or null), it cannot be modified to reference another object.
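A minimal sketch of that distinction (the class name is mine): final fixes which object the variable references, not the state of that object.

```java
public class FinalRefDemo {
    public static void main(String[] args) {
        final StringBuilder sb = new StringBuilder("a");
        sb.append("b");                  // OK: mutates the referenced object
        // sb = new StringBuilder("x");  // compile-time error: final reference
        System.out.println(sb);          // prints "ab"
    }
}
```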
The following code shows the final formal parameter x for the test2() method:

public void test2(final int x) {
    // x cannot be reassigned inside this method
}
If we have more than one static initializer for a class, we must initialize all the blank final class variables only once in one of the static initializers.
public class Main {
    public static final int YES = 1;
    public static final int NO = 2;
    public static final String MSG;

    static {
        MSG = "final static variable";
    }
}
If a class is declared final, it cannot be extended (or subclassed).
If a method is declared final, it cannot be redefined (overridden or hidden) in the subclasses of the class that contains the method.
http://www.java2s.com/Tutorials/Java/Java_Object_Oriented_Design/0095__Java_final_Keyword.htm
The following program prints 7, not 5 as it would if the defines were replaced with the parenthesized versions shown after it:
#include <iostream>
using namespace std;

#define max(a,b) a>b?a:b
#define min(a,b) a<b?a:b

int main()
{
    int a=7, b=5, c=3;
    cout << (min(a,max(b,c))) << endl;
    return 0;
}
#define max(a,b) (a>b?a:b)
#define min(a,b) (a<b?a:b)
If you compile g++ with the -E option, you can see the output of the preprocessor:
cout << (a<b>c?b:c?a:b>c?b:c) << endl;
with values:
cout << (7<5>3?5:3?7:5>3?5:3) << endl;
The < and > associate left to right and have a higher precedence than ?: so this becomes:
cout << (0>3?5:3?7:1?5:3) << endl;
cout << (0?5:3?7:1?5:3) << endl;
The ?: associates right to left:
cout << (0?5:3?7:5) << endl;
cout << (0?5:7) << endl;
cout << (7) << endl;
https://onlinejudge.org/board/viewtopic.php?f=14&t=71441&p=208068
These two example programs should work without any changes on a Linux or FreeBSD system. For other operating systems, minor changes are needed, mostly with file paths. These examples are designed to give enough details for you to understand the problem, without the clutter that is a necessary part of a real application. The first example is very straightforward. The second example is a little more advanced with some error checking. The first is followed by a command-line entry for compiling the program. The second is followed by a GNUmake file that may be used for compiling instead.
Example 1
test1_libmysqld.c
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include "mysql.h"

MYSQL *mysql;
MYSQL_RES *results;
MYSQL_ROW record;

static char *server_options[] = \
    { "mysql_test", "--defaults-file=my.cnf", NULL };
int num_elements = (sizeof(server_options) / sizeof(char *)) - 1;

static char *server_groups[] = { "libmysqld_server",
                                 "libmysqld_client", NULL };

int main(void)
{
    mysql_library_init(num_elements, server_options, server_groups);
    mysql = mysql_init(NULL);
    mysql_options(mysql, MYSQL_READ_DEFAULT_GROUP, "libmysqld_client");
    mysql_options(mysql, MYSQL_OPT_USE_EMBEDDED_CONNECTION, NULL);

    mysql_real_connect(mysql, NULL, NULL, NULL, "database1", 0, NULL, 0);

    mysql_query(mysql, "SELECT column1, column2 FROM table1");

    results = mysql_store_result(mysql);
    while ((record = mysql_fetch_row(results))) {
        printf("%s - %s \n", record[0], record[1]);
    }

    mysql_free_result(results);
    mysql_close(mysql);
    mysql_library_end();

    return 0;
}
Here is the command line for compiling the above program:
gcc test1_libmysqld.c -o test1_libmysqld \ `/usr/local/mysql/bin/mysql_config --include --libmysqld-libs`
Example 2
To try the example, create a test2_libmysqld directory at the same level as the MySQL source directory. Save the test2_libmysqld.c source and the GNUmakefile in the directory, and run GNU make from inside the test2_libmysqld directory.
test2_libmysqld.c
/*
 * A simple example client, using the embedded MySQL server library
 */

#include <mysql.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

MYSQL *db_connect(const char *dbname);
void db_disconnect(MYSQL *db);
void db_do_query(MYSQL *db, const char *query);

const char *server_groups[] = {
    "test2_libmysqld_SERVER", "embedded", "server", NULL
};

int
main(int argc, char **argv)
{
    MYSQL *one, *two;

    /* mysql_library_init() must be called before any other mysql
     * functions.
     *
     * You can use mysql_library_init(0, NULL, NULL), and it
     * initializes the server using groups = {
     *   "server", "embedded", NULL
     * }.
     *
     * In your $HOME/.my.cnf file, you probably want to put:

[test2_libmysqld_SERVER]
language = /path/to/source/of/mysql/sql/share/english

     * You could, of course, modify argc and argv before passing
     * them to this function. Or you could create new ones in any
     * way you like. But all of the arguments in argv (except for
     * argv[0], which is the program name) should be valid options
     * for the MySQL server.
     *
     * If you link this client against the normal mysqlclient
     * library, this function is just a stub that does nothing.
     */
    mysql_library_init(argc, argv, (char **)server_groups);

    one = db_connect("test");
    two = db_connect(NULL);

    db_do_query(one, "SHOW TABLE STATUS");
    db_do_query(two, "SHOW DATABASES");

    mysql_close(two);
    mysql_close(one);

    /* This must be called after all other mysql functions */
    mysql_library_end();

    exit(EXIT_SUCCESS);
}

static void
die(MYSQL *db, char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    (void)putc('\n', stderr);
    if (db)
        db_disconnect(db);
    exit(EXIT_FAILURE);
}

MYSQL *
db_connect(const char *dbname)
{
    MYSQL *db = mysql_init(NULL);
    if (!db)
        die(db, "mysql_init failed: no memory");
    /*
     * Notice that the client and server use separate group names.
     * This is critical, because the server does not accept the
     * client's options, and vice versa.
     */
    mysql_options(db, MYSQL_READ_DEFAULT_GROUP, "test2_libmysqld_CLIENT");
    if (!mysql_real_connect(db, NULL, NULL, NULL, dbname, 0, NULL, 0))
        die(db, "mysql_real_connect failed: %s", mysql_error(db));
    return db;
}

void
db_disconnect(MYSQL *db)
{
    mysql_close(db);
}

void
db_do_query(MYSQL *db, const char *query)
{
    if (mysql_query(db, query) != 0)
        goto err;

    if (mysql_field_count(db) > 0) {
        MYSQL_RES *res;
        MYSQL_ROW row, end_row;
        int num_fields;

        if (!(res = mysql_store_result(db)))
            goto err;
        num_fields = mysql_num_fields(res);
        while ((row = mysql_fetch_row(res))) {
            (void)fputs(">> ", stdout);
            for (end_row = row + num_fields; row < end_row; ++row)
                (void)printf("%s\t", row ? (char*)*row : "NULL");
            (void)fputc('\n', stdout);
        }
        (void)fputc('\n', stdout);
        mysql_free_result(res);
    }
    else
        (void)printf("Affected rows: %lld\n", mysql_affected_rows(db));

    return;

err:
    die(db, "db_do_query failed: %s [%s]", mysql_error(db), query);
}
GNUmakefile
# This assumes the MySQL software is installed in /usr/local/mysql
inc      := /usr/local/mysql/include/mysql
lib      := /usr/local/mysql/lib

# If you have not installed the MySQL software yet, try this instead
#inc      := $(HOME)/mysql-5.7/include
#lib      := $(HOME)/mysql-5.7/libmysqld

CC       := gcc
CPPFLAGS := -I$(inc) -D_THREAD_SAFE -D_REENTRANT
CFLAGS   := -g -W -Wall
LDFLAGS  := -static

# You can change -lmysqld to -lmysqlclient to use the
# client/server library
LDLIBS    = -L$(lib) -lmysqld -lm -ldl -lcrypt

ifneq (,$(shell grep FreeBSD /COPYRIGHT 2>/dev/null))
# FreeBSD
LDFLAGS += -pthread
else
# Assume Linux
LDLIBS  += -lpthread
endif

# This works for simple one-file test programs
sources := $(wildcard *.c)
objects := $(patsubst %c,%o,$(sources))
targets := $(basename $(sources))

all: $(targets)

clean:
	rm -f $(targets) $(objects) *.core
I'm currently working on embedding this into a dll that is compiled with Visual C++. I found the following useful to make this compile.
It is necessary to ensure winsock.h is included before mysql.h. This is because SOCKET is defined in winsock.h and is used by mysql_com.h and mysql.h includes mysql_com.h.
I noticed that winsock.h is included in mysql.h but is wrapped inside an #ifdef. I suppose another alternative would be to figure out why this isn’t being entered and resolve it.
Example,
#ifdef __LCC__
#include <winsock.h> /* For windows */
#endif
As we can see, if __LCC__ were defined there would be no reason to include winsock.h. If this is a bug someone should report it.
Also, when compiling this for an MFC application, I found it necessary to turn off/disable precompiled headers. For some reason, including winsock.h wasn’t being accepted when called after #include “stdafx.h”. After disabling precompiled headers the code compiled; hmmm… maybe that is another bug.
One reason for my post is a search on the internet suggested others were unable to compile this for the same reason I was unable.
For the simple embedded server example code in the documentation, there is a bug in the Makefile. You have to add "-lstdc++" to the gcc command line to solve the problem, or after you type make, you'll get a lot of "undefined reference" errors.
On Windows, I had trouble getting the mysql embedded server to get past the mysql_server_init() function. After many hours of fiddling with the settings, I discovered that if you have the mysql service running in the background (usually mysqld-nt.exe), the call to mysql_server_init() will fail! I have disabled the service from starting up... I don't know if this is a bug or a sharing violation with the mysql service (seeing as that's also an embedded server!?) or I need to take some extra steps before running my exe; either way, killing the mysql service isn't the ideal fix.
I can't speak for later releases yet, but for 4.1 here are some hard-won tips:

The embedded server still reads c:\my.cnf and must find enough initialization there under the keys you've specified (ie [server]) to make it happy. It will need to find at least the datadir and language dir. This default behavior makes it hard to develop a really standalone embedded application - it really wants to parasitize a regular mysql installation.

The parser for my.cnf requires path names with forward slashes even on windows.

Any failures in the initialization process cause a silent and immediate exit. I only figured out what was going on by downloading the source and building a complete debugging environment.
After much trial and error, I managed to embed MySQL with a scripting language (). You can find the source code at. Here are a few points that I think may be helpful for those having problems:
1) Use the same compile flags for library and program.

I had problems linking the distributed library with VC6++, so I compiled static libraries according to the flags in the distributed project files. For libmysqld, I used
WIN32,NDEBUG,_WINDOWS,SIGNAL_WITH_VIO_CLOSE,HAVE_DLOPEN,EMBEDDED_LIBRARY,HAVE_INNOBASE_DB,DBUG_OFF,__WIN__
And for the program I used
WIN32,NDEBUG,_CONSOLE,_MBCS,_WINDOWS,SIGNAL_WITH_VIO_CLOSE,HAVE_DLOPEN,EMBEDDED_LIBRARY,HAVE_INNOBASE_DB,DBUG_OFF,__WIN__
2) Use similar arguments as follows to initialize the server
char *server_groups[] = { "client", "server", 0 };
char *server_options[] = { "mysql_embedded", "--defaults-file=e:\\my.ini"};
And in "my.ini" the [client] and [server] groups must exist, something like:
[client]
default-character-set=utf8
[server]
basedir=e:/db
datadir=e:/db/data
default-character-set=utf8
3. The contents of the distributed share directory should be copied to basedir so MySQL can find character sets, etc.
You will also need -lrt in the GNUmakefile to resolve clock_gettime in my_getsystime.c, etc.
Source: http://dev.mysql.com/doc/refman/5.7/en/libmysqld-example.html
Hi,
I need to do the following.
Suppose I have a class A and class B and C are subtypes of A.
I want to implement an ArrayList that contains subtypes of A. But I only want to implement this ArrayList using methods in class A.
I was thinking about using the bounded wildcard, but I could not get it to work.
So, I need something like:
public class MyArrayList<? extends A> extends ArrayList<? extends A> {
    /**
     * Return a subtype of A
     */
    @Override
    public <? extends A> get(int index) {
        // do something with methods only in A.
    }

    /**
     * Add a subtype of A.
     */
    @Override
    public boolean add(<? extends A> e) {
        // do something with methods only in A
    }
}
Also I want to override the methods get and add in ArrayList. So when I use MyArrayList, I expect to do the following.
MyArrayList<B> bList = new MyArrayList<B>();
MyArrayList<C> cList = new MyArrayList<C>();
bList.add(new B());
B b = bList.get(0);
cList.add(new C());
C c = cList.get(1);
I could not get this to work. Any ideas how this should be done? Or is this not a good design?
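One way this can be made to work is with a named bounded type parameter (`T extends A`) in the class declaration instead of wildcards. Below is a minimal sketch of that approach; the method `doSomething()` on `A` is a hypothetical stand-in for "methods only in A":

```java
import java.util.ArrayList;

class A { void doSomething() { } }
class B extends A { }
class C extends A { }

// T is bound to A, so inside the class only A's methods are visible,
// while callers still get the concrete subtype back from get().
public class MyArrayList<T extends A> extends ArrayList<T> {
    @Override
    public T get(int index) {
        T element = super.get(index);
        element.doSomething(); // only methods declared on A are available
        return element;
    }

    @Override
    public boolean add(T e) {
        e.doSomething(); // again, restricted to A's interface
        return super.add(e);
    }

    public static void main(String[] args) {
        MyArrayList<B> bList = new MyArrayList<B>();
        MyArrayList<C> cList = new MyArrayList<C>();
        bList.add(new B());
        B b = bList.get(0);
        cList.add(new C());
        C c = cList.get(0);
        System.out.println(b != null && c != null);
    }
}
```

With `T` bound to `A`, `get` returns and `add` accepts the concrete subtype (`B` or `C`), so the usage shown in the question compiles without casts.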
Source: http://www.javaprogrammingforums.com/collections-generics/14724-arraylist-implementation.html
How can I implement conditional compilation in Java?
Created May 4, 2012
In the general case, conditional compilation in Java is not needed where it would be needed in C/C++ because the same class files must run on all platforms. This is not always the case, as you may want either testing code included only in testing mode, or you may want certain features enabled (included in the .class files) if the user registers / buys a certain version. With that said, here are some different options available, from various community members:
According to Robert Baruch:
According to Finley McWalter:
There are at least two ways to do this...
If you already have a C compiler, you can use the preprocessor part of it by itself, and feed it Java source rather than C. Depending on which compiler you have available, this might be called cpp, or might be a mode of the C compiler.
Here, for example, is how to do it using GNU gcc:
cp TestS.java TestS.c
gcc -E -P TestS.c > Test.java
rm -f TestS.c
javac Test.java
This example above is a bit complicated because gcc will only preprocess files with a .c extension (not all preprocessors are so picky). You also have to remember Java's restriction that public class Test must be defined in file Test.java - so in the example above that's the result of the preprocessor phase, and you actually edit a file with another name (e.g. TestS.java).
This approach gives you all of the features of the C preprocessor (#include, #ifdef, #define, __FILE__, __LINE__, #undef, ## etc.).
According to Terence Parr:
Check out Doug Tidwell's cool article on such a critter. It has a discussion of why Java has no preprocessor.
According to Rob Edmondson:
Mocha Source by MochaSoft has a preprocessor that supports conditional compiling. It also has a good source code obfuscator. It processes the Java code before compilation.
According to Finley McWalter, Terence Parr, Andre van Dalen, Mikael Jakobsson, Greg Boettcher, and Robert Baruch:
The other way you can get conditional compilation is to use a feature of the Java language that's not commonly known - Java does have conditional compilation, built right in.
This works because the compiler will not generate unreachable code - and it's smart enough to recognize and handle if() statements that will always have the same result. Here's a quick example. We have two classes:
// Debug.java
public class Debug {
    public static final boolean printDebug = true;
}
and
// Test.java
public class Test {
    public static void main(String[] args) {
        if (Debug.printDebug)
            System.out.println("debugging enabled");
        else
            System.out.println("debugging disabled");
    }
}
Because Debug.printDebug is final, the compiler can know that the if statement will always be true, so it only generates the "debugging enabled" line - if you don't believe me, look at the class file with javap -c and you'll not see any bytecode for the if or for the else-clause, and the "...disabled" string isn't in the class file at all - it's been conditionally compiled away.
Now, if you change the value of printDebug to false, and recompile both source files, the else code will be present, and again there's no code in the classfile for the if.
According to Mikael Jakobsson:
The above is a solution, but I do not recommend it unless you have very, very specific needs. It is usually better (at least from an OO viewpoint) to design your code in such a manner that you do not need to preprocess the code.
It is usually possible to separate the pieces of code that are optional into separate classes and then at runtime decide which implementation to use through Reflection.
The Abstract Factory design pattern may be one approach to the problem (See the book Design Patterns, by Erich Gamma et.al for detailed info). There are certainly other design solutions as well.
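As a sketch of the runtime-selection-through-reflection idea mentioned above (the class and property names here are illustrative, not from any real framework), an implementation can be chosen by name and loaded reflectively:

```java
// Select a Logger implementation at runtime by class name instead
// of compiling one variant in. Names here are illustrative.
public class LoggerSelector {
    interface Logger { void log(String msg); }

    static class ConsoleLogger implements Logger {
        public void log(String msg) { System.out.println("LOG: " + msg); }
    }

    static class SilentLogger implements Logger {
        public void log(String msg) { /* no-op */ }
    }

    public static void main(String[] args) throws Exception {
        // The implementation name comes from a system property,
        // falling back to the console logger by default.
        String impl = System.getProperty(
            "logger.impl", "LoggerSelector$ConsoleLogger");
        Logger logger = (Logger) Class.forName(impl)
            .getDeclaredConstructor().newInstance();
        logger.log("hello");
    }
}
```

Swapping implementations then only requires changing the property value (e.g. `-Dlogger.impl=LoggerSelector$SilentLogger`), with no recompilation of the calling code.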
According to Brian O'Byrne:
If you need to do conditional compilation based on environment variables or such, your only option is to have a script which modifies your code before you pass it into the compiler. This script would set the INCLUDE_ constants based on outside criteria. Something written in awk or perl would do nicely.
Source: http://www.jguru.com/faq/view.jsp?EID=58973
Before we can write our first program (which we will do very soon), we need to know two things about development environments.
First, although our programs will be written inside .cpp files, the .cpp files themselves will be added to a project. Some IDEs call projects “workspaces” or “solutions”.
#include <iostream>

int main()
{
    using namespace std;
    cout << "Hello world!" << endl;
    return 0;
}
If you select the code from these examples with your mouse and then copy/paste it into your compiler, you will also get the line numbers, which you will have to strip out manually. Instead, click the “copy to clipboard” link at the top of the example. This will copy the code to your clipboard without the line numbers, which you can then paste into your compiler without any editing required.
Visual Studio 2005 Express
To create a new project in Visual Studio 2005 Express, go to the File menu, and select New -> Project. A dialog box will pop up that looks like this:
Remove the generated _tchar code, and then type/copy the following into your compiler:
#include "stdafx.h"
#include <iostream>

int main()
{
    std::cout << "Hello world!" << std::endl;
    return 0;
}
Important note to Visual Studio users: Visual studio programs should ALWAYS begin with the following line:
#include "stdafx.h"
Otherwise you will receive a compiler warning, such as
c:\test\test.
Code::Blocks
To create a new project in Code::Blocks, go to the File menu, and select New Project. A dialog box will pop up that looks like this:
Select Console Application and press the Create button.
You will be asked to save your project. You can save it wherever you wish, though we recommend you save it in a subdirectory off of the C drive, such as
C:\CBProjects. Name the project
HelloWorld.
You will see “Console Application” under the default workspace:
This means your compile was successful!
To run your compiled program, press ctrl-F10, or go the Build menu and choose “Run”. You will see something similar to the following:
That is the result of your program!
Using a command-line based compiler
Paste the following into a text file named HelloWorld.cpp:
#include <iostream>

int main()
{
    using namespace std;
    cout << "Hello world!" << endl;
    return 0;
}
From the command line, type:
g++ -o HelloWorld HelloWorld.cpp
This will compile and link HelloWorld.cpp. To run it, type:
HelloWorld (or possibly
.\HelloWorld), and you will see the output of your program.
Other IDEs
You will have to figure out how to do the following on your own:
1) Create a console project
2) Add a .cpp file to the project (if necessary)
3) Paste the following code into the file:
#include <iostream>

int main()
{
    using namespace std;
    cout << "Hello world!" << endl;
    return 0;
}
4) Compile the project
5) Run the project
If compiling fails
If compiling the above program fails, check to ensure that you’ve typed or pasted the code in correctly. The compiler’s error message may give you a clue as to where or what the problem is.
If you are using a much older C++ compiler, the compiler may give an error about not understanding how to include iostream. If this is the case, try the following program instead:
#include <iostream.h>

int main()
{
    cout << "Hello world!" << endl;
    return 0;
}
In this case, you should upgrade your compiler to something more compliant with recent standards.
If your program runs but the window closes immediately
This is an issue with some compilers, such as Bloodshed’s Dev-C++. We present a solution to this problem in lesson 0.7 — a few common cpp problems.
Conclusion
Congratulations, you made it through the hardest part of this tutorial (installing the IDE and compiling your first program)! You are now ready to learn C++!
There are 2 different things: here you have “using namespace std;” while in the pic example you have “std::cout”.
[ Code::Blocks does that by default, so I just left it. Either way works. -Alex ]
Very useful tutorial, especially for beginners like me! Thanks a lot!
Extremely useful and easy to understand and follow. Thank you guys for a great work.
How could I get this for Dev-Cpp? I did it, but the executable file flashes for .1 of a second and closes.
Add the following line just before the return statement in main():
Thus,
Thanks!
Thanks Brian for that comment.
Since I had no experience at all in anything, I got confused.
By the way, the end1 is endl (lowercase L).
I thought it was a 1 and I took an hour trying to figure out what was wrong lol.
Thanks. Great site you got by the way.
I’ll stick with the using statement because I’ve seen people use a lot of I/O and it looked very ugly as you said.
Maybe I’m not doing it right.
I don’t get the “using namespace std” thing coz i don’t know what’s a namespace and what’s std(standard??)
And in my book they have written using namespace std before int main()…….does that make any difference?
Your #include iostream looks fine to me.
Thanks bro, this helped me out a lot! Thanks for helping me write my first C++ program.
Hi, i am sorry i know this is very basic but i cannot find how to open Code::Blocks please somebody help me. By the way i have both Visual C++ 2005 Express Edition (which i am using for tutorial) and i have Microsoft Visual Basic 2008 Express Edition (decided to use 2005 for a better reference from tutorial) Please somebody help me.
I don’t know if anyone else has this problem but when I compile it fails. It’s “error prj0003 : error spawning ‘rc.exe’.” I’ll copy your code into my file but I just want to know why this happens.
It’s odd but I figured the problem was with my comp and the IDE clashing at some point. After a couple of reinstallations I gave up and installed the 2005 version. Everything worked.
CHEERS
[...] 2007 Prev/Next Posts « 0.4 — Introduction to development | Home | 0.6 — Compiling your first program » Monday, May 28th, 2007 at 6:18 [...]
[...] Compiling your first program [...]
Hi Alex, I am trying to compile a “.c” file using this compiler but I fail.
I can, following through this tutorial compile a “.cpp” file.
Could you help me please?
Thanks in advance.
nevermind Alex, I’ve figured it out just now.
thanks, anyway
Source: http://www.learncpp.com/cpp-tutorial/06-writing-your-first-program/
Other Software
Soft Download Site
A freeware and shareware download archive - a PAD (Portable Application Description) software archive (software repository). The engine is based on a PHP5-MySQL database with Smarty templating. Modifications were made to accommodate the features of a modern software repository. Read the rest of this entry
Directory Dominator
An easy to use directory submission software tool used to help build one way links to your website. Find niche categories within directories that accept fast website inclusions. Boosts search engine rankings and Google PageRank with desired anchor Read the rest of this entry
Comment Poster
Comment Poster is powerful and very easy to use next generation SEO tool to automate links building campaigns. Comment Poster will submit your comments with backlinks to your website on thousands of your niche related websites on the Internet automatically. Read the rest of this entry
WebKorr
WebKorr supports your copy editing practice when processing web page correction assignments: All you have to do is proofread the text and directly edit the files, because WebKorr does all the rest! At first it provides you with the web Read the rest of this entry
WebLines
Make your website more interesting and informative for your visitors. Add fresh content live 24 hours/day. Boost your search engine rankings. Weblines can do all this for you! Put live RSS newsfeeds on your website with webLines. An RSS Read the rest of this entry
WEBSmith
With WEBSmith 3 it is now possible for non-programmers to create powerful, interactive websites, without coding, or knowledge of complex web technologies and databases. With WEBSmith you simply drag and drop components to add the features you want onto your Read the rest of this entry
View-IT!, Read the rest of this entry
Trellian SEO Toolkit
The Trellian SEO Toolkit v2 is a Search Engine Optimization application that features all the Search Engine Optimization tools you will ever need to manage your web site and reach the top of the search engines!
The Trellian SEO Toolkit Read the rest of this entry
Trellian SiteSpider Read the rest of this entry
Tometa WhereIs
You can think of the Tometa WhereIs service as the Internet’s telephone directory. It gives you information on the geographical location of an IP address based on Internet infrastructure information
With a free SDK, email support and code examples in almost Read the rest of this entry
Traffic Geek
Search Engine submission software to submit your websites to more than 900,000 search engines, directories and link pages including Google, Yahoo and Dmoz. The software includes more than 20 SEO tools such as link popularity, keyword builder, html validator, Read the rest of this entry
TheDowser Professional
No longer do you need to mess around with several different keyword research tools or visit different websites for your keyword research.
EVERYTHING you need to research keywords and manage your keyword lists and sub-lists is provided within one application - Read the rest of this entry
Stunnix JavaScript Obfuscator and Encoder Read the rest of this entry
SortSite Standard
SortSite Standard allows web site builders, owners and consultants to check entire sites for standards compliance and quality issues.
It’s simple to use: just type in a URL and click Check. SortSite follows links, checking each page it finds, then produces Read the rest of this entry
Smtp.NET
Professional Email Component for ASP.NET and .NET Windows Forms which doesn?t extend the System.Web.Mail namespace but was built from the ground up to go further and offer you more. Smtp.NET was designed to be the easiest .NET email component Read the rest of this entry
SiteByter Pro
SiteByter Pro is an intelligent search engine optimization program and top ranking web site creator.
SiteByter Pro is a program which really does something instead of you in the process of search engine optimization (SEO). The complex job of creating a Read the rest of this entry
Python Code Library
Python
Pop-a-Color Value
Pop-a-Color Value is a simple graphics tool that can be used to get color values when making skins for programs and also creating websites. When looking for html color values, hex color values, or rgb color values, Pop-a-Color Value Read the rest of this entry
Perl Code Library
Oven Fresh Easy Poll Maker
Quickly and Easily Make Online Voting Polls. Preview Your Poll while You Design it. Design Editors enhance the layout styling process. View Voting Results Instantly Online. Unlimited bar graph colors. Show or hide bar graph. Read the rest of this entry
Page Generator
Page Generator is software designed for people that have a good knowledge of the search engine world. We offer you an easy way to rocket your content on the internet. If you are a content hunter and you deploy websites Read the rest of this entry
IP To Country
Free IP address to country, monitor for SEO, includes C++ source code, track and filter by country. Can read a list of IP addresses from a file, one per line and check the country they are from. Further Read the rest of this entry Read the rest of this entry
HostName Commander
A powerful tool for web developers to control the mapping between the host names and IP addresses. Can be used to block access to unwanted web sites, suppress third-party web ads while web surfing, speed up DNS queries, and more. Read the rest of this entry
Google Adsense Websites
Google Adsense Articles: Over 26,300 pages of keyword-rich content websites. AdSense is an advertising program run by Google. Website owners can enroll in this program to enable text and image advertisements on their sites. These ads are administered by Google Read the rest of this entry
Google Base Products Lister
Now you can take advantage of ALL Google Base has to offer. No more spreadsheet or ftp programs. The absolute easiest way to submit your items to Google Base. Just fill in the boxes, and click. Database driven. Will hold Read the rest of this entry
HanengCharts
HanengCharts enables you to easily add customized, dynamic, interactive charts to your Web site, intranet or Web application. No installation is required on the server, simply upload the small JAR file (80Kb) to the same folder as your Web page, Read the rest of this entry
Online Shopping Web Store Builder Design
Online store builder, shopping cart software, E-Commerce Web-site solutions provider - The most popular Ecommerce Website builder: Best to build a professional Web store - complete E-Commerce solutions. Design user friendly Web-site storefronts - online order processing - both retail Read the rest of this entry
Easy Web Buttons
Easy Web Buttons lets you point and click your way to beautiful buttons in minutes. Change colors, fonts, gradient, lighting effects and more! Exports to any format, including layered PhotoShop files. Easily add captions, dropshadows and more. By changing Read the rest of this entry
ConceptDraw WebWave Mac
ConceptDraw WebWave is an essential tool on the stage of web site/application prototyping and design, page mocking-up and site-mapping. It includes more than 4590 ready-made graphics, shapes, templates and wizards for quickly creating professional diagrams and drawings. The application runs Read the rest of this entry
Color Wheel Expert
Based on the color wheel and color harmony theory, Color Wheel Expert enables users to select a color, and then have 12 harmonious colors displayed in a circle.
With Color Wheel Expert, it’s easy to find analogous colors, triads, and Read the rest of this entry
BrowserBob Professional,?). Read the rest of this entry
ButtonGadget2
How about this for a bold claim?
Virtually ANY button you see ANYWHERE, you can copy, strip the text, then add your own icon and text to it!
Welcome to ButtonGadget2.
Thousands of copies used daily testify the original ButtonGadget is the Read the rest of this entry
Boardawy
Free open source code in Perl forum software and bulletin board system multi lingual multi theme SQL driven highly customisable, the ideal community solution for any web site. Boardawy is the web standard free open source code in Perl forum Read the rest of this entry
Best Web Hosting Review Tool
Discover the best web hosting companies on the web quickly and easily with our best web hosting charts. Compare prices, features, customer service ratings, and package sizes conveniently with this handy application. All web hosting companies are reviewed Read the rest of this entry
Elite Article Submitter
Article Submitter Pro! software will automatically submit your article for you! Article Submit saves you time and money but auto submission to hundreds of the big name article submission websites. Gain maximum exposure by submitting your article with this top Read the rest of this entry
QK BarCode Generator
QK Barcode Generator lets you make professional, ready-to-print barcode graphics easily and quickly. The powerful preview function helps you output bar codes to a printer easily. You can print barcodes on one paper with normal printer.
QK Barcode Generator also Read the rest of this entry
Real Estate Solution
In today's digital world, more than ever, real estate buyers are using the Internet to find a home. Old-fashioned off-line real estate agencies are quickly becoming obsolete and must quickly adapt to the new reality to stay competitive in the hot Read the rest of this entry
Blog Planter
An easy to use blog submitter used to help distribute your blogs to the available directories. This software is effective for blog promotion as you’ll start seeing an increase in your link popularity within a few days of using Read the rest of this entry
WireFusion Professional, Read the rest of this entry
Web Chart Creator
Web Chart Creator is your tool for fast creation of dynamic 3D charts for the Internet/Intranet projects using databases of any type. The charts are created with a simple visual editor. It has a friendly and intuitive interface designed for Read the rest of this entry
WebLight
WebLight is an automated web site testing tool that scans web sites for non-conforming HTML and broken links.
Unlike other web testing tools WebLight:
- Scans entire sites for HTML and link problems.
- Is designed for web developers, maintainers and owners, not Read the rest of this entry
WebPen
Allow your website users to sign documents online using the mouse as a pen! Now legal documents and agreements can be displayed online and signed instantly using the mouse as a pen. WebPen eliminates the traditional hassle of downloading, printing, Read the rest of this entry
Voicent VoiceXML Gateway, Read the rest of this entry
Web Button Maker Deluxe
Presentation is everything, enhance your website with elegant buttons created with Web Button Maker Deluxe! Easily create Vista and XP themed buttons, Mac and Aqua style buttons, colorful and shining web buttons, animated buttons and more! Choose between over 100 Read the rest of this entry
Source: http://m.mowser.com/web/trialsoftwarez.com%2Fcategory%2Fweb-development%2Fother-12%2F
Opened 15 months ago
Closed 15 months ago
#19668 closed New feature (wontfix)
Form enhancement: `Form.set_data` to set data and files
Description
This simply moves the logic for setting is_bound, data and files into a method, which enables setting the data after form initialization. This is backwards compatible, and makes it easier to avoid initializing a form twice in a view. It changes from this idiom:
def view(request):
    if request.method == 'POST':
        form = Form(request.POST, request.FILES)
        ...
    else:
        form = Form()
        ...
to simply this:
def view(request):
    form = Form()
    if request.method == 'POST':
        form.set_data(request.POST, request.FILES)
        ...
This reduces clutter and redundancy when there are a lot of form arguments passed.
Attachments (0)
Change History (6)
comment:1 Changed 15 months ago by bruth
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 15 months ago by bruth
- Summary changed from Form enhancement: `Form.set_data` to set data an files to Form enhancement: `Form.set_data` to set data and files
comment:3 Changed 15 months ago by anonymous
- Needs tests set
- Patch needs improvement set
- Triage Stage changed from Unreviewed to Design decision needed
I'm -0 on this change, but I'm not a core dev.
The argument against this would be that much of a form's functionality changes based on data, fields, valid/invalid, cleaned data, errors, etc. Changing the data at the wrong time could cause a lot of problems. Changing set_data to raise an error if the form is already bound may be enough to address some of those concerns. I'd suggest raising the issue on django-developers to get some opinions from core devs.
comment:4 Changed 15 months ago by anonymous coward
-1
You don't save a single line of code (unless you copy the provided example, which doesn't complete the antithesis version), are adding another method to maintain, and are creating the potential for internal problems (as mentioned above), not to mention making Form less extensible.
comment:5 Changed 15 months ago by rafales
-1, and you can make it even shorter:
form = Form(request.POST or None, request.FILES or None)
if request.method == 'POST':
    # code
comment:6 Changed 15 months ago by claudep
- Resolution set to wontfix
- Status changed from new to closed
This proposal received mostly negative opinions until now (also on django-developers). I also think that this would be an important change in the contract that Form data are set at initialization time, for few benefits but a high risk of triggering bugs in the form stack.
Anyway, thanks for taking the time to make the proposal.
Pull request here:
Source: https://code.djangoproject.com/ticket/19668
import all of the how-to articles from the pkgsrc.se wiki
This how-to describes how to install NetBSD/hpcsh on a Jornada 620LX. We will be using a 2 GB compact flash card and no swap.. Also, not having a serial cable for this (somewhat) rare Jornada I did the entire install through the in-ROM Windows CE and a CF Disk. **Contents** [[!toc]] #List of things necessary for this install * An x86 machine capable of running NetBSD * Your Jornada 620LX * A ~>1GB CF Disk * A CF Disk reader for x86 machine #The process over-simplified 1. Install NetBSD on x86 and bring it up to -current 2. Build tools/kernel/release for HPCSH on the x86 machine 3. Partition (fdisk) & DiskLabel CF Disk 4. Unpack release onto CF Disk 5. Boot Jornada into CE and run HPCBoot.exe from CF Disk 6. Enjoy NetBSD #The REAL breakdown * Install onto a spare x86 machine, I'm not going to hand-hold through this install, as a basic install is perfectly fine. * In /usr/src, build the HPCSH(-3) tools: $ cd /usr/src/ && ./build.sh -u -m hpcsh tools * Build the HPCSH(-3) kernel: $ ./build.sh -u -m hpcsh kernel=GENERIC * Build the HPCSH(-3) release: $ ./build.sh -u -m hpcsh -U release * NOTE: on building release I had it fail multiple times because I had not cleared out my /usr/src/../obj/* and my /usr/src/../tools/* and then rebuilt my tools for x86 after moving to -current. * Attach the CFDisk to the NetBSD machine. Partition it into two partitions (I used a 2GB card and partitioned into 24MB and a 1.9GB). * You can get away with using as little as a few MB, but I figured better safe than sorry with the extra space the 2GB card allots me. * Note: Delete all partitions using fdisk before creating/editing these ones! <pre><code> fdisk /dev/sd1 Do you want to change our idea of what BIOS thinks? [n] [enter] Which partition do you want to change?: [none] 0 sysid: 1 start: 0 size: 24M bootmenu [enter] The bootselect code is not installed, do you want to install it now? 
[n] [enter] Which partition do you want to change?: [none] 1 sysid: 169 start: (offset of partition 0's sectors) size: (last sectors) bootmenu [enter] The bootselect code is not installed, do you want to install it now? [n] [enter] Which partition do you want to change?: [none] [enter] Update the bootcode from /usr/mdec/mbr? [n] [enter] Should we write new partition table? [n] y </code></pre> * Now create filesystems on the two partitions: newfs_msdos sd1e && newfs sd1a (your lettering here may differ) * Mount your filesystems so we can use them: mount -o softdep /dev/sd1a /mnt && mount -o -l /dev/sd1e /mnt2 * Copy your kernel and HPCBoot.exe to the msdos partition: cd /usr/src/obj/releasedir/hpcsh/binary/kernel cp netbsd-GENERIC.gz /mnt2/netbsd.gz cd ../sets mv kern-GENERIC.tgz kern-GENERIC.tar.gz mv kern-HPW650PA.tgz kern-HPW650PA.tar.gz for tgz in *.tgz; do tar -xpvzf $tgz -C /mnt; done * This got me booting; however I hadn't set a root password anywhere! So make sure the first time to boot hpcboot.exe with the "single-user" checkbox, then mount / read-write and change the root password: mount -u / mount /dev/wd0b on / type ffs (noatime, nodevmtime, local) #References Original content from <>
Source: https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/tutorials/how_to_install_netbsd_on_hpcsh.mdwn?rev=1.1;content-type=text%2Fx-cvsweb-markup
Introduction
Behavior testing simply means that we should test how an application behaves in certain situations. Often the behavior is given to us developers by our customers. They describe the functionality of an application, and we write code to meet their specifications. Behavioral tests are a tool to formalize their requirements into tests. This leads naturally to behavior-driven development (BDD).
After completing this tutorial, you should be able to:
- Explain the benefits of behavior testing
- Explain the “given”, “when”, and “then” phases of Behave
- Write basic behavioral tests using Behave
- Write parameterized behavioral tests using Behave
Prerequisites
Before starting, make sure you have the following installed:
Setting Up Your Environment
This tutorial will walk you through writing tests for and coding a feature of a Twenty-One (or “Blackjack”) game. Specifically, we’ll be testing the logic for the dealer. To get started, create a root directory where your code will go, and then create the following directories and blank files:
.
├── features
│   ├── dealer.feature
│   └── steps
│       └── steps.py
└── twentyone.py
Here’s a brief explanation of the files:
dealer.feature: The written out tests for the dealer feature.
steps.py: The code that runs the tests in
dealer.feature.
twentyone.py: The implementation code for the dealer feature.
Writing Your First Test
Although behavioral tests do not require test-driven development, the two methodologies go hand-in-hand. We’ll approach this problem from a test-driven perspective, so instead of jumping to code, we’ll start with the tests.
Writing the Scenario
Open
dealer.feature and add the following first line:
Feature: The dealer for the game of 21
This line describes the feature. In a large application, you would have many features. Next, we’ll add a test. The first test will be simple — when the round starts, the dealer should deal itself two cards. The word Behave uses to define a test is “Scenario”, so go ahead and add the following line:
Scenario: Deal initial cards
Before we write more, we need to understand the three phases of a basic Behave test: “Given”, “When”, and “Then”. “Given” initializes a state, “When” describes an action, and “Then” states the expected outcome. For this test, our state is a new dealer object, the action is the round starting, and the expected outcome is that the dealer has two cards. Here’s how this is translated into a Behave test:
Scenario: Deal initial cards
  Given a dealer
  When the round starts
  Then the dealer gives itself two cards
Notice that the three phases read like a normal English sentence. You should strive for this when writing behavioral tests because they are easily readable by anyone working in the code base.
Now to see how Behave works, simply open a terminal in the root directory of your code and run the following command:
behave
You should see this output:
Feature: The dealer for the game of 21 # features/dealer.feature:1

  Scenario: Deal initial cards              # features/dealer.feature:3
    Given a dealer                          # None
    When the round starts                   # None
    Then the dealer gives itself two cards  # None

Failing scenarios:
  features/dealer.feature:3  Deal initial cards

0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
0 steps passed, 0 failed, 0 skipped, 3 undefined
Took 0m0.000s

You can implement step definitions for undefined steps with these snippets:
[ The rest of output removed for brevity ]
The key part here is that we have one failing scenario (and therefore a failing feature) that we need to fix. Below that, Behave suggests how to implement the steps. You can think of a step as a task for Behave to execute. Each phase ("given", "when", and "then") is implemented as a step.
Writing the Steps
The steps that Behave runs are written in Python, and they are the link between the descriptive tests in .feature files and the actual application code. Go ahead and open steps.py and add the following imports:
from behave import *
from twentyone import *
Behave steps use annotations that match the names of the phases. This is the first step as described in the scenario:
@given('a dealer')
def step_impl(context):
    context.dealer = Dealer()
It’s important to notice that the text inside of the annotation matches the scenario text exactly. If it doesn’t match, the test cannot run.
The context object is passed from step to step, and it is where we can store information to be used by other steps. Since this step is a "given", we need to initialize our state. We do that by creating a Dealer object and attaching it to the context. If you run behave again, you'll see the test fails, but now for a different reason: we haven't defined the Dealer class yet! Again, we have a failing test that is "driving" us to do work.
Now we will open twentyone.py and create a Dealer class:
class Dealer():
    pass
Run behave once again to verify that we fixed the last error we saw, but that the scenario still fails because the "when" and "then" steps are not implemented. From here on, the tutorial will not explicitly state when you should run behave. But remember the cycle: write a test, see that it fails, then write code to make the test pass.
Here are the next steps to add to steps.py:
@when('the round starts')
def step_impl(context):
    context.dealer.new_round()

@then('the dealer gives itself two cards')
def step_impl(context):
    assert (len(context.dealer.hand) == 2)
Again, the annotation text matches the text in the scenario exactly. In the "when" step, we have access to the dealer created in "given", and we can now call a method on that object. Finally, in the "then" step, we still have access to the dealer, and we assert that the dealer has two cards in its hand.
We defined two new pieces of code that need to be implemented: new_round() and hand. Switch back to twentyone.py and add the following to the Dealer class:
class Dealer():
    def __init__(self):
        self.hand = []

    def new_round(self):
        self.hand = [_next_card(), _next_card()]
The _next_card() function will be defined as a top-level function of the module, along with a definition of the cards. At the top of the file, add the following:
import random

_cards = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']

def _next_card():
    return random.choice(_cards)
Remember that random is not secure and should not be used in a real implementation of this game, but for this tutorial it will be fine.
If you run behave now, you should see that the test passes:
Feature: The dealer for the game of 21 # features/dealer.feature:1

  Scenario: Deal initial cards              # features/dealer.feature:3
    Given a dealer                          # features/steps/steps.py:5 0.000s
    When the round starts                   # features/steps/steps.py:9 0.000s
    Then the dealer gives itself two cards  # features/steps/steps.py:14 0.000s

1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
3 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.000s
Writing Tableized Tests
Often when writing tests we want to test the same behavior against many different parameters and check the results. Behave makes this easier to do by providing tools to create a tableized test instead of writing out each test separately. The next game logic to test is that the dealer knows the point value of its hand. Here is a test that checks several scenarios:
Scenario Outline: Get hand total
  Given a <hand>
  When the dealer sums the cards
  Then the <total> is correct

  Examples: Hands
    | hand  | total |
    | 5,7   | 12    |
    | 5,Q   | 15    |
    | Q,Q,A | 21    |
    | Q,A   | 21    |
    | A,A,A | 13    |
You should recognize the familiar "given, when, then" pattern, but there are several differences in this test. First, it is called a "Scenario Outline". Next, it uses parameters in angle brackets that correspond to the headers of the table. Finally, there's a table of inputs ("hand") and outputs ("total").
The steps will be similar to what we’ve seen before, but we’ll now get to use the parameterized steps feature of Behave.
Here’s how to implement the new “given” step:
@given('a {hand}')
def step_impl(context, hand):
    context.dealer = Dealer()
    context.dealer.hand = hand.split(',')
The angle brackets in the dealer.feature file are replaced with braces, and the hand parameter becomes an object that is passed to the step, along with the context.
Just like before, we create a new Dealer object, but this time we manually set the dealer's cards instead of generating them randomly. Since the hand parameter is a simple string, we split it to get a list.
Next, add the remaining steps:
@when('the dealer sums the cards')
def step_impl(context):
    context.dealer_total = context.dealer.get_hand_total()

@then('the {total:d} is correct')
def step_impl(context, total):
    assert (context.dealer_total == total)
The "when" step is nothing new, and the "then" step should look familiar. If you're wondering about the ":d" after the total parameter, that is a shortcut that tells Behave to treat the parameter as an integer. It saves us from manually casting with the int() function. Behave's documentation has a complete list of the patterns it accepts, and if you need advanced parsing, you can define your own pattern.
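To see what a user-defined pattern involves, here is a stdlib-only sketch of the parser function you would write. In Behave itself you would decorate it with parse.with_pattern and register it with register_type, then reference it in a step as a typed parameter; the "Play" type name and the pattern are hypothetical, chosen for this example:

```python
# A stdlib-only sketch of a custom parameter parser. In Behave you would
# decorate this with @parse.with_pattern(r"hit|stand"), call
# register_type(Play=parse_play), and write the step as '{play:Play}'.
import re

PLAY_PATTERN = r"hit|stand"

def parse_play(text):
    # Accept only a legal play; anything else is rejected up front,
    # so a typo in a feature file fails loudly instead of silently.
    if not re.fullmatch(PLAY_PATTERN, text):
        raise ValueError("not a play: %r" % text)
    return text
```

The benefit over an untyped {play} parameter is that the step only matches legal values, so a misspelled play in the table becomes an undefined step rather than a confusing assertion failure.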
There are many different approaches to summing the values of cards, but here's one solution to find the total of the dealer's hand. Create this as a top-level function in the twentyone.py module:
def _hand_total(hand):
    values = [None, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 10]
    value_map = {k: v for k, v in zip(_cards, values)}

    total = sum([value_map[card] for card in hand if card != 'A'])
    ace_count = hand.count('A')

    for i in range(ace_count, -1, -1):
        if i == 0:
            total = total + ace_count
        elif total + (i * 11) + (ace_count - i) <= 21:
            total = total + (i * 11) + ace_count - i
            break
    return total
In short, the function maps the card strings to point values and sums them. However, aces have to be handled separately because they can be worth either 1 or 11 points.
We also need to give the dealer the ability to total its cards. Add this method to the Dealer class:
def get_hand_total(self):
    return _hand_total(self.hand)
If you run behave now, you'll see that each example in the table runs as its own scenario. This saves a lot of space in the features file, but still gives us rigorous tests that pass or fail individually.
We’ll add one more tableized test, this time to test that the dealer plays by the rules. Traditionally, the dealer must play “hit” until he or she has 17 or more points. Add this scenario outline to test that behavior:
Scenario Outline: Dealer plays by the rules
  Given a hand <total>
  When the dealer determines a play
  Then the <play> is correct

  Examples: Hands
    | total | play  |
    | 10    | hit   |
    | 15    | hit   |
    | 16    | hit   |
    | 17    | stand |
    | 18    | stand |
    | 19    | stand |
    | 20    | stand |
    | 21    | stand |
    | 22    | stand |
Before we add the next steps, it’s important to understand that when using parameters, the order matters. Parameterized steps should be ordered from most restrictive to least restrictive. If you do not do this, the correct step may not be matched by Behave. To make this easier, group your steps by type. Here is the new given step, ordered properly:
@given('a dealer')
def step_impl(context):
    context.dealer = Dealer()

## NEW STEP
@given('a hand {total:d}')
def step_impl(context, total):
    context.dealer = Dealer()
    context.total = total

@given('a {hand}')
def step_impl(context, hand):
    context.dealer = Dealer()
    context.dealer.hand = hand.split(',')
The typed parameter {total:d} is more restrictive than the untyped {hand}, so it must come earlier in the file.
The new "when" step is not parameterized and can be placed anywhere but, for readability, should be grouped with the other "when" steps:
@when('the dealer determines a play')
def step_impl(context):
    context.dealer_play = context.dealer.determine_play(context.total)
Notice that this test expects a determine_play() method, which we can add to the Dealer class:
def determine_play(self, total):
    if total < 17:
        return 'hit'
    else:
        return 'stand'
Last, the “then” step is parameterized so it needs to also be ordered properly:
@then('the dealer gives itself two cards')
def step_impl(context):
    assert (len(context.dealer.hand) == 2)

@then('the {total:d} is correct')
def step_impl(context, total):
    assert (context.dealer_total == total)

## NEW STEP
@then('the {play} is correct')
def step_impl(context, play):
    assert (context.dealer_play == play)
Putting Everything Together
We’re going to add one final test that will tie together all of the code we’ve just written. We’ve proven to ourselves with tests that the dealer can deal itself cards, determine its hand total, and make a play separately, but there’s no code to tie this together. Since we are emphasizing test-driven development, let’s add a test for this behavior.
Scenario: A Dealer can always play
  Given a dealer
  When the round starts
  Then the dealer chooses a play
We already wrote steps for the “given” and “when” statements, but we need to add a step for “the dealer chooses a play.” Add this new step, and be sure to order it properly:
@then('the dealer gives itself two cards')
def step_impl(context):
    assert (len(context.dealer.hand) == 2)

#NEW STEP
@then('the dealer chooses a play')
def step_impl(context):
    assert (context.dealer.make_play() in ['stand', 'hit'])

@then('the {total:d} is correct')
def step_impl(context, total):
    assert (context.dealer_total == total)
This test relies on a new method, make_play(), that you should now add to the Dealer class:
def make_play(self):
    return self.determine_play(self.get_hand_total())
This method isn't critical, but it makes the Dealer class easier to use.
If you've done everything correctly, running behave should display all of the tests and give a summary similar to this:
1 feature passed, 0 failed, 0 skipped
16 scenarios passed, 0 failed, 0 skipped
48 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.007s
Conclusion
This tutorial walked you through setting up a new project with the Behave library and using test-driven development to build the code based off of behavioral tests.
If you would like to get experience writing more tests with this project, try implementing a Player class and a player.feature that plays with some basic strategy.
To learn more about BDD and why you might want to adopt it, check out our article on Behavior-Driven Development.
Wikipedia:Redirects for discussion
From Wikipedia, the free encyclopedia
Redirects for discussion (RfD) is the place where Wikipedians decide what should be done with problematic redirects. Items sent here usually stay listed for a week or so, after which they are deleted by an administrator, kept, or retargeted.
Note: If all you want to do is replace a currently existing, unprotected redirect with an actual article, you do not need to list it here. Turning redirects into fleshed-out encyclopedic articles is wholly encouraged at Wikipedia. Be bold.
Note: Redirects should not be deleted simply because they do not have any incoming links. Please do not list this as a reason to delete a redirect. Redirects that do have incoming links are sometimes deleted as well, so it's not a necessary condition either. See When should we delete a redirect?
Old discussions are archived at Wikipedia:Redirects for discussion/Log.
[edit] Before you list a redirect for discussion...
...please familiarize yourself with the following:
- Wikipedia:Redirect — our general policy on what redirects are, why they exist, and how they are used.
- Wikipedia:Criteria for speedy deletion — our policy on which pages can be deleted without discussion. The "General" and "Redirects" section apply here.
- Wikipedia:Deletion policy — our deletion policy that describes how we delete things by consensus
- Wikipedia:Guide to deletion — guidelines on discussion format and shorthands that also apply here
[edit] The guiding principles of RfD
- The purpose of a good redirect is to eliminate the possibility that an average user will wind up staring blankly at a "Search results 1-10 out of 378" search page instead of the article they were looking for. If someone could plausibly type in the redirect's name when searching for the target article, it's a good redirect.
- Redirects are cheap. Redirects take up minimal disk space and use very little bandwidth. Thus, it doesn't really hurt things much if there are a few of them scattered around. On the flip side, deleting redirects is also cheap, since the deletion coding takes up minimal disk space and uses very little bandwidth. In general, there is no harm in deleting problematic redirects that do not contribute to improving the encyclopedia.
- The default result of any RFD nomination which receives no other discussion is delete. Thus, a redirect nominated in good faith and in accordance with RfD policy will be deleted, even if there is no discussion surrounding that nomination.
- Redirects nominated in contravention of Wikipedia:Redirect will be speedily kept.
- RfD is not the place to resolve most editorial disputes. If you think a redirect should be targeted at a different article, discuss it on the talk pages of the current target article and/or the proposed target article. However, for more difficult cases, this page can be a centralized discussion place for resolving tough debates about where redirects point.
- Requests for deletion of redirects from one page's talk page to another page's talk page don't need to be listed here, as anyone can simply remove the redirect by blanking the page.
- Try to consider whether or not a redirect would be helpful to the reader when discussing.
[edit] When should we delete a redirect?
[edit] Reasons for deleting
- Old CamelCase links and old subpage links should be left alone in case there are any existing links on external pages pointing to them.
- Someone finds them useful. Hint: If someone says they find a redirect useful, they probably do. You might not find it useful, but that is not because the other person is wrong; people simply browse Wikipedia in different ways.
See also: Policy on which redirects can be deleted immediately.
[edit] Closing notes
- Details at: Administrator instructions for RfD.
Nominations should remain open, per policy, about a week before they are closed, unless they meet the general criteria for speedy deletion, the criteria for speedy deletion of a redirect, or are not valid redirect discussion requests (e.g. are actually move requests).
[edit] How to list a redirect for discussion
To list a redirect for discussion, follow this two-step process:
- Please consider using What links here to locate other redirects that may be related to the one you are nominating. After going to the redirect target page and selecting "What links here" in the toolbox on the left side of your computer screen, select both "Hide transclusions" and "Hide links" filters to display the redirects to the redirect target page.
- It is generally considered civil to notify the good-faith creator and any main contributors of the redirect that you are nominating it. To find the main contributors, look in the page history of the redirect. For convenience, the template
{{subst:RFDNote|RedirectName}}
can be placed on their talk page; it produces a notice reading: Notice of redirect discussion at [[Wikipedia:Redirects for discussion]]
Administrator instructions
[edit] Current list
[edit] July 5
[edit] WP:GRAWP
Please kill this, per WP:DENY etc. No need to have this. Redirect shortcuts are for policies, pages and other useful things, however we don't need to honor vandals with them. Are we going to go back to the troll feeding days where {{WOW}} expanded to a Wheels sock tag? Where people had boxes saying that they hate Willy? Triplestop x3 02:12, 5 July 2009 (UTC)
[edit] July 4
[edit] Template User JHVW/DrWho
Delete, unnecessary redirect created via moving a misnamed template. TheCatalyst31 Reaction•Creation 21:41, 4 July 2009 (UTC)
[edit] 4th of july
"4th of July" (capital J) exists and is quite a different page than the target. ospalh (talk) 21:39, 4 July 2009 (UTC)
[edit] Las Venturas Airport
Is this necessary? Las Venturas is a fictional place, so I doubt someone would be looking for extensive info on a fictional airport. KMFDM FAN (talk!) 17:21, 4 July 2009 (UTC)
[edit] Mick peterson
I cannot see any connection between these two subjects. A Google search shows no connection [1]. meshach (talk) 05:51, 4 July 2009 (UTC)
[edit] July 3
[edit] Template:Nobot
[edit] Template:Wtf
Inappropriate and potentially bitey redirect for {{fact}}. Jafeluv (talk) 08:44, 3 July 2009 (UTC)
{{db-g4}} per Wikipedia:Templates for deletion/Log/2006 July 5. Bazj (talk) 09:11, 4 July 2009 (UTC) (speedy declined)
- Delete 1-inappropriate; 2-unnecessary tla. Bazj (talk) 09:56, 4 July 2009 (UTC)
[edit] July 2
[edit] Rsnail
Delete. Apparently Rsnail is the username of an important admin in Club Penguin. This may have been mentioned in the past but it no longer is, the article has matured. Mangojuicetalk 14:07, 2 July 2009 (UTC)
[edit] Elisabeth II of Bohemia
- Elisabeth II of Bohemia → Elisabeth of Bohemia (1409–1442)
- Elizabeth II of Bohemia → Elisabeth of Bohemia (1409–1442)
Delete. Implausible redirect. Not a ruling monarch, so should not have a numeral. No incoming links from article space. DrKiernan (talk) 08:21, 2 July 2009 (UTC)
- How about retargetting to the dab page Elisabeth of Bohemia? As the redirect is the result of a move, somebody apparently once thought there was a "Elizabeth II of Bohemia". Kusma (talk) 15:38, 2 July 2009 (UTC)
- retarget to either Elisabeth of Bohemia, Princess Palatine who, if I understand things correctly, is daughter of Elizabeth of Bohemia (note the slightly different spelling) and thus may be incorrectly thought by some to be Elisabeth II or as previously suggested retarget to the Elisabeth of Bohemia dab page and let the user decide who they actually meant. PaulJones (talk) 21:11, 3 July 2009 (UTC)
- The redirect should be deleted, as it amounts to original research and is generally confusing because people tend to think that there have been two monarchs of Bohemia named Elisabeth, although there were none. Elisabeth I of Bohemia, Anna I of Bohemia, and Świętosława I of Bohemia also need to be deleted. They were created by someone who didn't understand the difference between queen regnant and queen consort; keeping those redirects would be as confusing as having Elizabeth I of the United Kingdom redirect to Elizabeth Bowes-Lyon. Not to mention that no serious scholar refers to any of those women by those invented names. The redirect led three people on this page alone to believe that Bohemia had a queen regnant named Elisabeth II. Wikipedia should always avoid ill-informing its readers. Surtsicna (talk) 21:22, 3 July 2009 (UTC)
[edit] Ride (Usher song)
[edit] July 1
[edit] 塔什库尔干塔吉克自治县
Per Naming conventions. Gordonrox24 | Talk 14:44, 1 July 2009 (UTC)
- Keep it this is for the Chinese users of English Wikipedia and to provide the Chinese equivalent in English Wikipedia.--Joseph Solis in Australia (talk) 14:47, 1 July 2009 (UTC)
- Keep. The naming conventions you point to describe what to name the article on a given topic. Here, the article is called Tashkurgan Tajik Autonomous County, which is clearly English. Foreign-language redirects are not forbidden; indeed, Nürnberg, delikatessen, Firenze, Franz Josef Strauß and Chomolungma all redirect to the correct English name. Those are all examples listed in the naming convention you point to. Jafeluv (talk) 15:51, 1 July 2009 (UTC)
- Keep the term occurs on the article, is the original language name for the subject, and useful considering that the term can be rendered in several different ways in English. 76.66.193.20 (talk) 21:53, 1 July 2009 (UTC)
- Keep and tag {{R from alternative language}}. Kusma (talk) 15:36, 2 July 2009 (UTC)
- Speedy Keep The naming conventions apply to articles, not redirects. EVula // talk // ☯ // 19:42, 3 July 2009 (UTC)
[edit] Template:O rly?
Delete. {{Fact}} has a lot of useful aliases, but these are unnecessarily pointy and sarcastic. I don't see why anyone would advocate tagging text with "O rly?" tags. Using either {{fact}} directly or one of its many aliases like {{citation needed}} or {{cn}} would be more appropriate. These seem like joke redirects in the first place. Jafeluv (talk) 09:46, 1 July 2009 (UTC)
- Delete - A useless, joke redirect. The case could be made that it's contrary to WP:BITE. Lәo(βǃʘʘɱ) 18:42, 1 July 2009 (UTC)
- Delete Funny, but no real purpose. GlassCobra 19:34, 2 July 2009 (UTC)
- Keep per WP:HARMLESS Why so serious? (Disclaimer: I created a couple of 'em, if I remember correctly.) –Juliancolton | Talk 19:49, 3 July 2009 (UTC)
- Keep I agree with Julian. It's not harming anything, and even if they were to be used in the main article space, their function would serve the same as a regular {{fact}} tag (the functionality of the tag is far more important than the transcluded name). EVula // talk // ☯ // 22:58, 3 July 2009 (UTC)
[edit] Jabuary 21
This is an unlikely typo with no incoming links. JIMp talk·cont 08:27, 1 July 2009 (UTC)
- Delete. Implausible typo, unlikely to be useful. Jafeluv (talk) 09:50, 1 July 2009 (UTC)
- As a typo, it is pretty likely, actually... have you never typed a b instead of an n? Kusma (talk) 14:04, 2 July 2009 (UTC)
- Delete:This isn't needed. If you type Jabuary into the search bar you will get a "Did you mean January" note.--Gordonrox24 | Talk 14:36, 2 July 2009 (UTC)
- Delete per above. Unlikely typo, taken care of by software. GlassCobra 19:35, 2 July 2009 (UTC)
- Delete per the above statements. EVula // talk // ☯ // 19:43, 3 July 2009 (UTC)
- Delete as per Jabuary 20 and Jamuary 21. Tavix | Talk 23:08, 3 July 2009 (UTC)
[edit] June 30
[edit] Kingdom of Northern Italy
Delete. A "Kingdom of Northern Italy" never existed. There was a Kingdom of Sardinia, that in 1861 renamed itself Kingdom of Italy, and never went under the name of Kingdom of Northern Italy. Candalua (talk) 19:34, 30 June 2009 (UTC)
- Maybe retarget to Kingdom of Italy (Napoleonic)... The clearest reference to "Kingdom of Northern Italy" that I could find is in this book, which refers to the Napoleonic Kingdom of Italy (in northern Italy) that existed in the early 1800s. –BLACK FALCON (TALK) 16:09, 1 July 2009 (UTC)
[edit] Who are you album
Delete. Not useful due to non-standard disambiguation and the fact that the redirect's title is actually longer than target article's title. Who Are You (album) could possibly be a useful redirect, because the parenthetical disambiguator "(album)" is quite common on Wikipedia. (Redirect creator notified using {{RFDNote}}.) –BLACK FALCON (TALK) 18:07, 30 June 2009 (UTC)
- Delete per nom. I went ahead and created the (album) redirect, since you're right, that is a likely search term. EVula // talk // ☯ // 19:41, 3 July 2009 (UTC)
[edit] Where are we?
Humorous, but not a useful redirect. No significant page history and 0 views. (Redirect creator notified using {{RFDNote}}.) –BLACK FALCON (TALK) 17:08, 30 June 2009 (UTC)
- Delete Heh, totally agreed with the nom; got a laugh out of it, but that doesn't mean it should be kept. EVula // talk // ☯ // 19:39, 3 July 2009 (UTC)
- Delete But the person who made should get a barnstar of good humor. :D KMFDM FAN (talk!) 21:00, 4 July 2009 (UTC)
- Delete Not a useful redirect, very few people, if any at all, would likely look that up in search. Marlith (Talk) 23:40, 4 July 2009 (UTC)
[edit] Slayer's Upcoming 10th Album
[edit] Slayer 10th Album
[edit] June 29
[edit] Sinuciderea fecioarelor
Delete. A new editor input an article in Romanian which turned out to be a plot summary of Jeffrey Eugenides' novel The Virgin Suicides, on which we already have an article. I redirected it, but on reflection this is a most unlikely search term on the English Wikipedia. JohnCD (talk) 18:53, 29 June 2009 (UTC)
- Keep, harmless, avoid accidental recreation, and points people from Romanian Wikipedia who change the ro.wikipedia in the URL to en.wikipedia to the right article. Nobody loses if we have this redirect. Kusma (talk) 08:47, 30 June 2009 (UTC)
- Delete Normal users of English Wikipedia will not enter this as any form of typo. If this exists on the ro.wikipedia, merely change where it points —Preceding unsigned comment added by Bwilkins (talk • contribs)
[edit] Wikipedia:Notability (uglyness)
Particularly silly and slightly misleading cross-namespace redirect. ╟─TreasuryTag►senator─╢ 17:11, 29 June 2009 (UTC)
- Strong delete first redirect, Weak delete second redirect. They are both pointless, but the first one looks more 'official', having the same naming system as the notability guidelines. Garion96 (talk) 17:42, 29 June 2009 (UTC)
- Strong keep - If you delete this, you must delete WP:HOTTIE, Wikipedia:Hotties are always notable and Wikipedia:Notability (hotness) too, all of which redirect to User:GlassCobra/Essays/Hotties are always notable. - ALLST✰R▼echo wuz here 19:38, 29 June 2009 (UTC)
- Delete all. I really don't feel like linking to OTHERCRAPEXISTS since Allstar should know better. --Der Wohltemperierte Fuchs (talk) 00:25, 30 June 2009 (UTC)
- Which itself is an essay and means nothing. My Wikipedia:Notability (uglyness)/WP:FUGLY was created as a companion to his Wikipedia:Notability (hotness)/WP:HOTTIE. You delete one, you've got to delete them all. - ALLST✰R▼echo wuz here 03:33, 30 June 2009 (UTC)
- It's an essay, yes, but it's common sense. You can't lump together anything and make some sort of conditional "you must" statement. We evaluate pages in the nom by their own merits. --Der Wohltemperierte Fuchs (talk) 11:10, 30 June 2009 (UTC)
- Again, that just simply doesn't apply in this case as they are essentially the same thing substituting one word for another: fugly for hottie. - ALLST✰R▼echo wuz here 17:13, 30 June 2009 (UTC)
- I disagree. See my !vote below. Delicious carbuncle (talk) 17:58, 30 June 2009 (UTC)
- Bleh, I have no opinion on this. My redirect has been put up for deletion before by the anti-fun squad, do with the information what you will. GlassCobra 04:43, 30 June 2009 (UTC)
- Strong don't care, both harmless and useless. Kusma (talk) 08:51, 30 June 2009 (UTC)
- Weak keep, doesn't strike me as particularly original, funny or clever - but it's not like anyone's going to use that redirect for anything more useful! I don't see the harm in allowing joke redirects as long as they aren't likely to mislead; and I really can't think of anyone who'd go looking for an actual notability standard for ugly people. ~ mazca talk 09:32, 30 June 2009 (UTC)
- Keep and ignore. Really we have more important things that are causing harm that should be deleted. And vandalism editors who need a dose of the anti-fun to curb the degrading of articles. -- Banjeboi 11:36, 30 June 2009 (UTC)
- Strongest possible who cares? Hans Adler 17:44, 30 June 2009 (UTC)
- Delete - While I can appreciate the satirical nature of both the "Hotties are always notable" and "Fuglies are not notable" essays, they are not equivalent. To be called a hottie is generally assumed to be complimentary, even if untrue. To be called ugly is generally assumed to be insulting, even if true. I'm not convinced that it reflects well on WP to have a shortcuts linking to an essay which names specific individuals as "ugly", "fugly", or, in the case of Will Ferrell, "fucking ugly, even when au natural". I understand that it is intended to be funny, but so is a good percentage of the vandalism that is added and removed every day. Delicious carbuncle (talk) 17:57, 30 June 2009 (UTC)
- Delete - ASE seems to have a desire to have redirects into his userspace. All of which have thus far been deleted by consensus. Plus the BLP concerns need to be redacted from the target page.→ ROUX ₪ 18:25, 30 June 2009 (UTC)
- As usual for Roux, the "rest of the story" is left out. These redirects have been around since February 2008 and have absolutely nothing whatsoever to do with the recent shortcuts I created for my user talk page. - ALLST✰R▼echo wuz here 18:32, 30 June 2009 (UTC)
- Delete these will never be popular enough to need a shortcut (unnecessary shortcut/redirect). If it wouldn't have a place in the project space (Wikipedia:) then it shouldn't require a shortcut (i.e. in comparison to WPCRATSTATS, which could quite easily be placed in the project space). ViridaeTalk 22:17, 2 July 2009 (UTC)
- Keep C'mon people, don't we have something better to do with our time than get upset about stuff like this? EVula // talk // ☯ // 19:38, 3 July 2009 (UTC)
[edit] Skittlebrau
I believe "skittlebrau" may have been a gag used in the episode of The Simpsons that this points to; however, the article contains no information on anything called "skittlebrau", and it doesn't seem to have any relevance to the episode beyond being a throwaway gag. We don't need redirects for every gag used in every episode of everything, and having one for this gag seems arbitrary. Unscented (talk) 14:40, 29 June 2009 (UTC)
- Checking out the old history of the redirect shows that it was in fact from that episode. [4]. However, it does not seem important enough to even mention in the episode's article. --76.66.188.176 (talk) 16:18, 29 June 2009 (UTC)
- Delete per nom. - if it's not mentioned in the article, no point redirecting to it. JohnCD (talk) 18:59, 29 June 2009 (UTC)
- Keep: It is something that I have searched for in the past, and it now serves a purpose of discouraging a separate article for the subject (which, being Simpson cruft, is likely to happen). Harmless.--Remurmur (talk) 03:50, 30 June 2009 (UTC)
- Keep, a definite potential search term; and simply redirecting to the episode is useful information - it tells you what episode that half-remembered joke came from. I also agree that having a redirect acts to prevent people creating doomed articles based on a throwaway reference. ~ mazca talk 09:22, 30 June 2009 (UTC)
- Delete - we don't need redirects for every single made-up word gag from every single Simpsons episode ever. Highly unlikely search term which is not mentioned in the target article (nor should it be, since it's trivia). "Keep it because otherwise someone will write a crufty article about it" is not a valid reason for keeping, and if the idea that someone might write such an article is really that threatening then the word can be salted. It doesn't appear anywhere else on Wikipedia, so it's not like someone's going to stumble across any redlinks for it. Otto4711 (talk) 20:36, 30 June 2009 (UTC)
[edit] June 28
[edit] Country Music Awards
Misleading redirect. There are multiple country music awards shows, and it's very likely that someone might type in "Country Music Awards" when searching for the Academy of Country Music, the CMT awards, or even country Grammys. Ten Pound Hammer, his otters and a clue-bat • (Many otters • One bat • One hammer) 15:27, 28 June 2009 (UTC)
- Delete or disambiguation page. This redirect is the result of an old page move that I did. As per the nomination it is misleading with the various articles that exist. Deletion is fine, the Wikipedia search works ok for this item. Or a disambiguation page can be created. I have no preference, but just delete it if this discussion doesn't garner a strong consensus.--Commander Keane (talk) 03:23, 29 June 2009 (UTC)
[edit] United States Census, 2005
- Comment Shouldn't this discussion be merged with the other US census discussions?--Emmette Hernandez Coleman (talk) 16:06, 3 July 2009 (UTC)
- I actually don't understand why these aren't just deleted already. There was no discussion for the 2002 and 2004 articles; I feel like these discussions are a clear waste of time. Timneu22 (talk) 17:23, 3 July 2009 (UTC)
- I meant the other discussions on this page.--Emmette Hernandez Coleman (talk) 17:57, 3 July 2009 (UTC)
[edit] United States Census, 2006
[edit] United States Census, 2007
[edit] Rollbacker
Cross-namespace redirect from article to wiki namespace. Delicious carbuncle (talk) 11:48, 28 June 2009 (UTC)
- Delete - unneeded cross-namespace redirect. --ThaddeusB (talk) 18:01, 28 June 2009 (UTC)
- Keep - The alternative is that we have to go through two other sites so that we can find the rollback page. I think that this is necessary because I doubt that people are going to type in the entire page to go directly to it. Kevin Rutherford (talk) 23:56, 28 June 2009 (UTC)
- Delete, newly created WP:XNR. The target page has plenty of shortcuts like WP:Rollbacker or WP:RBK. Why does it need another one in mainspace with all the ugly side effects, like turning up in searches, AJAX search hints, mainspace statistics, …? Amalthea 00:25, 29 June 2009 (UTC)
- Keep if the only reason for deletion would be that it's a cross-namespace redirect. What would a reader searching for "rollbacker" expect to find? If there's another prominent use of the word that I'm not aware of, the deletion may be appropriate, but I don't think there's consensus to delete all the 2,000+ cross-namespace redirects currently in use just because they go from one namespace to another. Many of them serve a purpose, and if there's no potential confusion with actual article titles, I see no reason to delete. Jafeluv (talk) 09:27, 29 June 2009 (UTC)
- I assume a reader might be looking for any of the pages listed at Rollback (disambiguation). Also, I think very few people want to delete all pages in Category:Cross namespace redirects, it's the redirects from the reader-side of Wikipedia (Mainspace, Category, Portal) to the editor-side that are problematic and should be avoided. I don't want to rehash all arguments and counter-arguments from WP:XNR, but I am convinced that in this case, the possible benefits of this highly specialized redirect don't warrant weakening namespace boundaries at all. Amalthea 11:51, 29 June 2009 (UTC)
- Okay... If that's what the reader could be looking for, why delete? Why not redirect to rollback or rollback (disambiguation)? Jafeluv (talk) 13:51, 30 June 2009 (UTC)
- Fine by me. Rollback already includes a hatnote, properly wrapped in {{selfref}}, which directs people towards WP:Rollback feature. Amalthea 14:42, 30 June 2009 (UTC)
- Delete - if the reader is not a Wikipedian, he doesn't need to know about the behind-the-scenes machinery. If he is a Wikipedian wanting to know about this feature, he wouldn't expect to find it in article space, and he has only to type "WP:ROL" in the search box to have "Wikipedia:Rollback feature" appear. JohnCD (talk) 18:43, 29 June 2009 (UTC)
- Retarget to Rollback (disambiguation). Some of the terms there can be converted into a rollbacker verb so this is a plausible redirect. It also links to the Wikipedia feature so the intent of the original editor is fulfilled without being a cross-name space redirect.--Lenticel (talk) 02:33, 3 July 2009 (UTC)
[edit] Obama Beach
"Obama Beach" isn't mentioned in the target article and thus the redirect is inappropriate ThaddeusB (talk) 03:42, 28 June 2009 (UTC)
- Delete, confusing. Or follow Gordon Brown's slip of the tongue and redirect to the more appropriate Omaha Beach. But that's still confusing. Kusma (talk) 11:11, 28 June 2009 (UTC)
- Delete, of course. I took the same view as the above posters some weeks ago and prodded it, but the prod was declined for some red tapey reason and I simply couldn't be bothered to bring it here.—S Marshall Talk/Cont 18:51, 28 June 2009 (UTC)
- Delete per nom. JohnCD (talk) 18:44, 29 June 2009 (UTC)
[edit] June 27
[edit] Articles containing fatwas by Ibn Taymiya
Implausible search term; created as a redirect from a tiny little sub-stub article back in 2005. This redirect doesn't seem to serve any useful purpose at this time; no content was merged. ~ mazca talk 13:06, 27 June 2009 (UTC)
[edit] Plox
Unlikely search term. Seems to be some kind of slang for 'please' based on google searches, but it's unlikely that anyone would use this redirect for an encyclopedic inquiry. Brianga (talk) 02:33, 27 June 2009 (UTC)
- Retarget. Were please an article on the word, this might be a marginally useful redirect (if "plox" was mentioned in the article). As it's only a disambiguation page, however, I think plox should be made a soft redirect to wiktionary like zomg and GTFO are. Jafeluv (talk) 06:50, 27 June 2009 (UTC)
- Retarget Agree with Jafeluv -LK (talk) 10:12, 27 June 2009 (UTC)
- Comment - I don't know the correct format for this as I've not visited "Redirects for discussion" before, but I think it needs to be a redirect as Plox is an area/suburb of the Somerset town of Bruton. It includes King's School, Bruton (see School web page - address bottom left) and a Grade I listed building, Bow Bridge, for which I was about to write an article at Bow Bridge, Plox (see Details of bridge at Images of England). Therefore I do think a redirect page is needed for readers interested in the place.— Rod talk 10:23, 3 July 2009 (UTC)
[edit] Iain Thersby
I'm uncertain of the relevance of this redirect. Majorly talk 01:27, 27 June 2009 (UTC)
[edit] June 25
[edit] Dried cherry history
Unlikely search term which would now redirect to Dried cherry anyway. Drawn Some (talk) 23:01, 25 June 2009 (UTC)
- Retarget to dried cherry. Seems like a harmless enough redirect and might possibly be helpful to someone, as do the rest in this group - suggest taking them all together in one discussion. SpinningSpark 23:10, 25 June 2009 (UTC)
- Delete all. These seem like pretty far fetched search terms for someone looking for dried cherry. Someone looking for the history of dried cherries will probably search for "history of dried cherries". Jafeluv (talk) 15:37, 26 June 2009 (UTC)
[edit] History dried cherries
Unlikely search term which would now redirect to Dried cherry anyway. Drawn Some (talk) 22:55, 25 June 2009 (UTC)
[edit] Dried tart cherries history
Unlikely search term which would now redirect to Dried cherry anyway. Drawn Some (talk) 22:52, 25 June 2009 (UTC)
[edit] Dried cherries history
Unlikely search term which would now redirect to Dried cherry anyway. Drawn Some (talk) 22:51, 25 June 2009 (UTC)
[edit] History dried tart cherries
Unlikely search term, would now direct to Dried cherry. Drawn Some (talk) 22:49, 25 June 2009 (UTC)
[edit] History of dried tart cherries
Unlikely search term, now would redirect to Dried cherry. Drawn Some (talk) 22:47, 25 June 2009 (UTC)
[edit] 2 redirects: "Downtown Troy" and "Downtown Hudson"
[edit] Downtown Troy
- (created recently as redirect to Central Troy Historic District, then deleted, then recently recreated as redirect to Troy, New York)
[edit] Downtown Hudson
- (created recently as redirect to Hudson Historic District (New York), then deleted, then recreated)
Request deletion of these two recently created, for several reasons:
- 1: The redirects do not aid readers who might be searching for target articles. In the absence of the Downtown Troy redirect, I believe a reader searching on "Downtown Troy" would have found their way immediately to
Central Troy Historic District, a very nice NRHP HD article. The Downtown Troy redirect was set up at first to direct there but now redirects to Troy, New York, an article which has no section on Downtown Troy and no proper noun use of the phrase. I don't know, but believe the change of redirect target may have been because the HD is named just for the general area it is in, but the HD article does not strive to describe the larger area. I see no evidence that "Downtown Troy" is a commonly used term for any specific area, actually. Thus the existence of the redirect only serves to suggest to the reader that there will be an article or section somewhere about a defined area of that name, and that does not exist. In the case of Downtown Hudson there is also, I believe, no area commonly referred to by that name, and no proper noun coverage in the target article.
- 2: The redirects were created in the midst of an ongoing discussion about NRHP HD articles and redirects and mergers/split proposals and so on, in Connecticut, see Talk:National Register of Historic Places listings in Connecticut#Extending edit warring to other states. While it may not have been assuming good faith on my part, it seemed to me that the two here might have been created simply to use in supporting arguments about CT NRHP HDs and town/villages, as if to suggest that it is usual for there to be parallel articles and/or redirects, everywhere, for NRHP HDs of format "Name Historic District" and a neighborhood/village of corresponding "Name" or "Name (Town)". In general I believe it would be unhelpful to go down the entire list of 14,000 U.S. NRHP HDs and create competing articles at "Name" or "Name (town)", in effect relying upon the notability of the NRHP HDs named "Name Historic District" that are in the same general area, but in fact not necessarily entirely overlapping very much in history or geographic area. As a general matter, then, I think it best to call in question the manufacture of piggy-back redirects like this, and in these 2 cases in NYS, to delete them. doncram (talk) 18:50, 25 June 2009 (UTC)
- Keep and modify. As the creator of the target articles, I actually think they'd be OK as long as the state was added (i.e., Downtown Troy, New York, Downtown Troy, NY and similarly for Hudson). Troy's downtown is mostly covered by its historic district, and Hudson's actually covers a huge portion of its developed area, much less downtown. Daniel Case (talk) 00:47, 26 June 2009 (UTC)
- Okay, I am afraid Daniel Case may be being deliberately unclear (perhaps to avoid "taking sides" in a parental-unit-like way, as he works with both Polaron and myself elsewhere?). By "Keep and modify", what Daniel appears to be meaning is not to have the two redirects that were nominated for deletion, but perhaps other redirects from similar names with ", New York", consistent with usual place-naming conventions, could be created, and directed in at least one of the two cases to a different target. So, Daniel, could you please clarify if this understanding is not correct, but in the present discussion I believe Daniel's view is DELETE and then so far there is a consensus of two in that direction. Further, actually, about a redirect from "Downtown Troy, New York", I don't see a natural target for it to redirect to, because there is no section or proper noun usage of "Downtown Troy" in the Troy, New York article, and because the Downtown Troy HD article does not cover the entire downtown area. Also, about a "Downtown Hudson, New York" redirect, Daniel is saying the Downtown Hudson HD article covers a huge other area, so is not an appropriate target. I further don't think it would add value for someone to revise any or all of the four existing articles ("Troy, New York", "Downtown Troy HD", "Hudson, New York", or "Downtown Hudson HD (New York)") just so that they would serve better as redirect targets. So, I am back to: it seems best to just delete the redirects, which are new and unused. doncram (talk) 17:16, 26 June 2009 (UTC)
- To anyone familiar with those cities, that seems like moving the goalposts. Downtown Troy is more or less contiguous with the historic district. And you read too much into my argument with Hudson ... the 45-block grid is downtown, more or less. It doesn't include any significant undeveloped areas save Promenade Park. Per the principle of least astonishment, incorporated in the redirect guideline, someone searching on "Downtown Hudson, New York" would not be at all astonished to end up at the Hudson Historic District article. Likewise with Troy.
Googling on Downtown+Troy+NY, I don't find many hits that would refer to a location outside the historic district. Doing this for Hudson is a little harder because hits related to New York City come up, but still the ones referring to the Columbia County seat land in that 139-acre section. Daniel Case (talk) 03:39, 27 June 2009 (UTC)
- Addendum: this is, I allow, not always true. I would support the deletion of a "Downtown Monroe, NY" redirect to Village of Monroe Historic District since that district is not at all downtown, rather a more residential area immediately to the east. Daniel Case (talk) 03:43, 27 June 2009 (UTC)
- Keep and create additional redirects per Daniel Case with the state name. Retarget to city article if it is the case that the historic district is not representative of the downtown area. From reading the historic district articles, I had gathered that these represented the main part of the downtown areas of these cities. Since there were no downtown articles, it seemed to me that you could find out more about the downtown areas from the historic district articles than from the city articles. --Polaron | Talk 16:47, 26 June 2009 (UTC)
- Keep as plausible search terms. --NE2 21:22, 26 June 2009 (UTC)
- Keep both in the fashion proposed by Daniel Case. These seem like plausible search terms and plausible destinations for a searcher. --Orlady | Talk 17:26, 27 June 2009 (UTC)
- I don't think Daniel Case has stated clearly where he thinks the two should redirect to, so I don't really understand a vote to follow the fashion proposed by him. "Likewise with Troy" means what, keep the redirect for "Downtown Troy" in place, which directs to Troy, New York? Or change it to direct to Central Troy Historic District? I still think both of these should best be deleted, as not helpful to readers who would otherwise easily find the candidate articles if they searched on those exact terms, and a bit unhelpful in fact, because each redirect seems to promise an article on exactly the given topic, which does not exist. I guess this is looking like "No consensus to delete", though, with delegation to Daniel Case to choose whichever targets for these redirects that he deems best. doncram (talk) 00:45, 28 June 2009 (UTC)
- My reading was that Daniel Case had suggested that both of these redirects should point to the historic district articles. --Orlady (talk) 02:15, 28 June 2009 (UTC)
- That was my intent, yes. Daniel Case (talk) 19:12, 28 June 2009 (UTC)
[edit] various Tolland County, Connecticut NRHP HDs
The 10 redirects to be deleted are various NRHP-listed historic districts (HDs) from National Register of Historic Places listings in Tolland County, Connecticut:
[edit] Andover Center Historic District
[edit] Bolton Green Historic District
[edit] Ellington Center Historic District
[edit] Hebron Center Historic District
[edit] Mansfield Center Historic District
[edit] Monroe Center Historic District
General reasons for deleting this redirect have been given. It was stated below by Polaron that this, along with 3 others "have had discussions about why leaving the redirect in place may be better than deletion." The only such discussion is this statement by Polaron "Delete all except Monroe Center and Naugatuck Center, which at least mention and describe the bounds of the historic district,....".
To respond specifically about Monroe Center, it is factually incorrect that the Monroe article describes the bounds of the historic district. What the Monroe article has is a section:
==On the National Register of Historic Places==
* '''Daniel Basset House''' — 1024 Monroe Turnpike (added [[September 23]], [[2002]])
* '''Monroe Center Historic District''' — CT 110 and CT 111 (added [[September 19]], [[1977]])
* '''Stevenson Dam Hydroelectric Plant''' — CT 34 (added [[October 29]], [[2000]])
* '''[[Thomas Hawley House]]''' — 514 Purdy Hill Rd. (added [[May 11]], [[1980]])
I see mention of a location, not a description of bounds. It would be appropriate, in my view, to have the mention of Monroe Center Historic District there, and in the Tolland County NRHP list, appear as a redlink, to indicate to editors that they are free to create an article about the wikipedia-notable topic of the NRHP-listed historic district. I appreciate that no one has gone to change the Monroe article just to "win" this RFD discussion, allowing this to continue to serve as an example of many others like it in the CT NRHP list. Even if it were amended to include bounds and a sentence or few more about the HD, I would still think it best to have redlink to the NRHP HD name, mainly for possible NRHP HD editors and hence future readers, and at no harm to current readers. Certainly at the current state of the Monroe, Connecticut article, I think deleting the redirect is appropriate, even obviously appropriate. Thanks. doncram (talk) 17:53, 3 July 2009 (UTC)
[edit] Naugatuck Center Historic District
- Note: Naugatuck is not in Tolland County. --Orlady (talk) 19:10, 27 June 2009 (UTC)
- Noted, and I checked the Tolland County NRHP list and find it is not listed there, so there is no error to correct. I meant to include only Tolland County ones in this batch of redirects to delete, but, yes, this one is in New Haven County. The deletion of the redirect is still requested. doncram (talk) 18:10, 3 July 2009 (UTC)
- Polaron, below, asserts that there is not consensus about this redirect and 3 others, as there has been discussion about them. The only discussion against this redirect which I can find was his statement below that he disagreed because the town article contained mention of the historic district and described its bounds. I replied to his statement explaining why i thought that did not matter, and there was no further discussion. For reference, the only mention in the town article is, in a list of NRHPs in the town, the item of "Naugatuck Center Historic District — Roughly bounded by Fairview Avenue, Hillside Avenue, Terrace Avenue, Water Street and Pleasant View Street (added 30 August 1999)". That is no more information than appears in the New Haven County NRHP list, and in fact the town article's statement of the NRHP listing date is incorrect. The NRHP list-article gives July 30, 1999 as the date, which is correct according to the National Register database which i just checked. Even if there were more information in either place, it would be appropriate in both places to show a redlink to the NRHP HD article name, conveying that an editor can open an article about the HD. doncram (talk) 18:21, 3 July 2009 (UTC)
- Glad to know that this location is not an error in NRIS. I agree that the Naugatuck article contains little information about the historic district, but it contains a long section on the history of the town, including the following statement about the town common, which presumably is in the historic district: "The town common features 11 commissions by the renowned New York architecture firm of McKim, Mead and White," it lists some other historic properties that are probably in the historic district, and it has photos of several historic buildings that appear to be in the district. If (as I naively assumed at one time) the NRHP Wikiproject was interested in giving people information about the heritage that is commemorated by National Register listings, that article is more informative than the list-article and would be a worthwhile redirect. However, if there is consensus that the most interesting aspects of National Register listings are their listing dates, metes and bounds, and the names of architectural styles, then I have to agree that this redirect is a dangerous thing that needs to be deleted. --Orlady (talk) 18:58, 3 July 2009 (UTC)
- On this occasion, I can distinguish the sarcasm in your comments, but I think it is misdirected. Perhaps you should criticise the Naugatuck, Connecticut article and remove the list of NRHP listing dates and bounds of districts from that article; i agree that the metes-and-bounds-and-listing date information there is excessive. It seems appropriate to keep it in the NRHP list-articles though. And it would seem appropriate to describe the bounds of a legal historic district in an article about it, so really i don't see where your sarcasm is appropriate, if directed towards me. I will not accept responsibility for what is in this and other CT town articles, which are largely unsourced.
- By the way it is speculation (not that you asserting otherwise) that the town common and various specific buildings mentioned in the Naugatuck article are included in the historic district. It is probably a good guess that they are included, but in too-numerous-to-list other cases in Connecticut alone, editors' speculation of what must be included in a historic district has proven incorrect. The request is to delete these 10 redirects, and then later about 300 others, and to clear the way for development of NRHP HD articles. I would hope these would develop eventually like Daniel Case's nice articles such as Central Troy Historic District, which do indeed include listing dates and "metes and bounds".
- It happens that in CT there has been a history of edit warring by one editor when any other editor started an NRHP HD article at an NRHP HD name that the one editor had redirected to a town article. Orlady, I don't believe you are as familiar with the previous history of such edit warring, but I believe you have seen some recent edit warring on a slightly higher level (involving at least some discussion of sources and facts), so you should give me some credence that, before, plenty of edit warring happened and was at an even lower level. I was affected, and one other editor has volunteered in this discussion that he/she was affected, and I believe there were others affected. I believe the edit-warring-editor has revised his practices, and at least would not now edit war in every such case. This initiative to delete these redirects is to partly to clear the air and clarify that separate articles on these wikipedia-notable NRHP HDs in fact will be allowed.
- "but in too-numerous-to-list other cases in Connecticut alone, editors' speculation of what must be included in a historic district has proven incorrect" -- can you name even one such case where it was "proven incorrect" as to what is in the historic district? --Polaron | Talk 03:10, 4 July 2009 (UTC)
[edit] Somers Historic District
[edit] Tolland Green Historic District
[edit] Willington Common Historic District
[edit] about Tolland County, CT NRHP HDs as a group
Request deletion of a first batch of 10 redirects of NRHP historic district (HD) names. Consensus in discussion at Talk:National Register of Historic Places listings in Connecticut#Moving forward, cleanup tasks and other sections on that Talk page is that these redirects are unhelpful and should be deleted. All were created in June 2008, have no useful edit history, and are unhelpful because a) they suggest, in the National Register of Historic Places listings in Tolland County, Connecticut list-article, that an article on the given NRHP HD is available. Each redirects instead to a town or CDP or village article that has no mention of the NRHP. And they are unhelpful because b) they suggest that the town article is the place to develop the NRHP HD material, such as adding NRHP infoboxes, while in fact in all cases it would be better to create a separate NRHP HD article, at least unless and until a very substantial overlap of all history and geographic area is established, which could theoretically justify a merger proposal later. However the future merger is extremely unlikely, and it is better in short term and almost certainly also in long term to have a separate article. I believe this is a fair representation of consensus view.
This is the first batch of 10 out of perhaps 300 redirect deletions needed. Each redirect has been edited to include a custom template, linking to the discussion at wt:List of RHPs in CT. The original creator of all of these redirects is participating in the discussion there and I consider this to be adequate notification. doncram (talk) 09:14, 25 June 2009 (UTC)
- I've participated in the talk somewhat, but it's gone so fast that I can't keep up, so no vote from me. Please understand that "CDP" is census-designated place. Nyttend (talk) 12:20, 26 June 2009 (UTC)
- Delete all except Monroe Center and Naugatuck Center, which at least mention and describe the bounds of the historic district,
and Mansfield Center, which is a village where merging is appropriate.--Polaron | Talk 16:52, 26 June 2009 (UTC)
- Polaron, thanks for agreeing about 7 of the 10 cases, reducing the scale of the problem. I am trying to make a solution for about 300 redirect cases. I think the deletion-of-redirect is appropriate for the Monroe Center and Naugatuck Center cases too, where only the existence of a NRHP HD is mentioned in the town/village article, within a list of other NRHPs that are included in the town/village. Just like the same information (the name and the bounds of the HDs) is included in the NRHP list-article, National Register of Historic Places listings in Connecticut. In the absence of the redirect, a wikipedia reader searching on the exact phrase "Monroe Center Historic District" would do fine in finding their way to either the list-article or to the town article. And the redlink in the list-article, and the mention in the town article (which itself could be converted to a redlink), convey properly that the NRHP HD is a wikipedia-notable topic which an editor can start an article for. If a redirect is in place, that would tend to suggest incorrectly at the NRHP list-article that there is an article on the topic already. And it would tend to suggest incorrectly to someone who clicks on it there that it is intended for NRHP HD coverage to be developed within the Town/Village article, while in fact I would prefer to welcome a new article. Seriously, isn't it okay to delete these 2 redirects? Thanks for pointing out that those town/village articles mention the NRHP HDs, but I don't understand from your statement any reason why you would oppose deleting those redirects. I believe your opinion is also that separate NRHP HD articles can/should be created for these two, eventually, by any editor. doncram (talk) 17:53, 26 June 2009 (UTC)
- About the Mansfield Center HD case, which is now a redirect to Mansfield Center, Connecticut article about a CDP, there is no coverage, not even any mention, of the Mansfield Center HD in the CDP article. Polaron, I understand from your statement here that you think, based on your own knowledge or upon sources you have which are not in the article, that ultimately it will be better to have one merged article covering the CDP and the NRHP HD. Can we please deal with that as a merger proposal later, but for now allow deletion of the redirect? If the redirect is deleted, it will show as a redlink and clarify at the NRHP list-article that there is no coverage of the NRHP HD yet, and allow for anyone to create an NRHP HD article, and to start adding pictures and descriptions of contributing properties and so on. This does not preclude a later merger with the CDP article, which can/should be handled by a regular merger proposal, which should be non-contentious later, when information about the bounds of the two areas and other information has been developed. I guess there are 20-50 cases like this in CT, which I would like to treat in the same way right now, by deleting the redirects and allowing for NRHP HD articles to be developed gradually. Polaron, since this does not preclude the ultimate merger of two Mansfield Center articles, is this not okay? It is what I have been proposing, and I think there is general consensus for it, in the RFC discussion at wt:List of RHPs in CT. I just think it is premature in this case and 20-50 similar others, to prejudge the merger decision. I would appreciate very much if you could agree to this approach. doncram (talk) 18:19, 26 June 2009 (UTC)
- Mansfield Center is first and foremost a place. It happens that the Census Bureau treats it as a CDP, where it counted 973 people in 2000, and somebody (probably the Town of Mansfield, which is the legal local government) submitted an application to list it as a National Register historic district. There also may be a state or local historic district designation. I see two options for article coverage:
- (1) Create three separate and distinct articles: One general article about the place; a second article about the demographic data for the CDP; and a third article about the National Register historic district, including its metes and bounds and the buildings that are included in it.
- (2) Create a single article about the place that includes information about the historic district designation and the demographic data. Redirect Mansfield Center Historic District to point to that single article.
- I prefer option 2, as I find option 1 to be rather silly. --Orlady (talk) 19:24, 27 June 2009 (UTC)
- I don't think it is silly to allow an NRHP HD article to be developed, even though it may eventually be merged into a CDP article later. There would be no harm done, and I think it would just advance the development of CT NRHP information in wikipedia sooner, to show a redlink and thereby to encourage anyone to start a fully sourced, focused article on the NRHP HD, without burden of relating it to a CDP that may or may not prove to be very similar in geographic area and shared history.
- My proposal was to delete the redirect as an "obvious"-type decision for now, as there was no information in the CDP article to which it redirected, and no common information developed yet that would help make any decision about the likely best ultimate article to hold NRHP HD information. This is a representative example for perhaps many more (10 to 30?) very similar redirect cases in CT. The proposal overall is to delete all the similar redirect cases, and to signal and allow for NRHP HD articles to be developed, where information could be developed about the "metes and bounds and buildings". And then later informed merger proposals could be considered and resolved more easily. Note that it has been conceded or shown that many of the 300 or so redirects set up in June 2008 were to articles that are not the appropriate final article name. I don't want to debate each of the cases like this Mansfield Center HD one, requiring us to do research about the specifics, now. I would be happy to withdraw this one part of the RFD request, i.e. not to delete the Mansfield Center HD redirect, and to discuss that separately as an exception item, if we could otherwise agree to just delete the redirect in cases like this (where there is no information available to inform a guess whether the redirect will be the best final decision later). doncram (talk) 00:25, 28 June 2009 (UTC)
- That discussion is Orlady's comment: "Here is one thought: ....Tolland, Connecticut does have some information about Tolland Green Historic District, which makes the current redirect more useful than no information at all (or could be the basis for a stub article). --Orlady (talk) 19:30, 26 June 2009 (UTC)". And the information in the Tolland, CT town article is an informal passage about the town's green, which provides no description of the boundaries or importance of the NRHP HD which may include that green.
- Does the presence of some informal, unsourced information in a town article about something that might be included in a NRHP HD article justify derailing a general solution to the RFC issue(s)? We could delete the info in the town article, but I would rather not get into that. Or we could start the Tolland Green H D article, but I would argue against including the unsourced passage from the town article, and my wish is to delete about 300 redirects, not to start 300 stub articles. Editors should be encouraged to get the NRHP application and other reliable sources, before starting a stub article. Deleting the redirect here, too, would appropriately signify to future editors that they can create the NRHP HD article, if they have sources. This isn't even a case where it is likely that a new HD article should be merged with an existing town article: from its name, I think it is highly unlikely that Tolland Green HD has substantially the same boundaries as Tolland, Connecticut. So I am back to wanting to delete the redirect here, too.
- Further about what serves wikipedia readers, note that searching on "Tolland Green" now yields, for its first 4 hits:
# Tolland Green Historic District (redirect to Tolland, Connecticut) - 67 B (3 words) - 08:56, 25 June 2009
# Tolland, Connecticut (redirect from Tolland Green Historic District) - "Tolland is a town in Tolland County, Connecticut, United States. ... The Green's features include an old-fashioned penny candy and ..." - 10 KB (1253 words) - 23:37, 5 June 2009
# National Register of Historic Places listings in Tolland County, Connecticut - "List of Registered Historic Places in Tolland County, Connecticut ... 42 | Tolland Green Historic District 100px link off | 1997 | 8 | 1 ..." - 16 KB (1172 words) - 14:30, 13 June 2009
# Tolland, Massachusetts - "Tolland is a town in Hampden County, Massachusetts, United States. ... It has been replaced with a picnic on the town green. ..." - 6 KB (728 words) - 18:52, 16 June 2009
- If the first hit, the Tolland Green HD redirect itself, were deleted, I believe the Tolland, Connecticut article that is the 2nd hit would still appear near the top of the wikipedia search, again at about the same level as the Tolland, Massachusetts article, which also mentions a town green. I think that would be fine. And then also a search on "Tolland Green Historic District" would probably yield the county NRHP list-article, which includes more specific information about the HD (namely a short description of its boundaries) than any other article, so that would be best for a reader interested in the HD per se, too. If anywhere, the redirect for Tolland Green HD should go to the NRHP list-article, but that would be a circular redirect for readers browsing the NRHP list-article. I reiterate, deleting this redirect appears the correct thing to do. doncram (talk) 14:13, 27 June 2009 (UTC)
- I am afraid that I am too dense to grasp the point you are trying to make in discussing WP search results.
- It appears to me that Tolland, Connecticut includes plenty of information about the historic district in the center of town. (Specifically, it [...]; there are also several images of the district.) A person encountering a link to Tolland Green Historic District in an NRHP list would get far more benefit from that article than from a redlink, which is what they would see if the redirect were deleted. --Orlady (talk) 02:26, 28 June 2009 (UTC)
- I wanted to consider this one as type 1 in the RFC proposal: "1: Simple redirect ones with no NRHP content--Of the HDs that are redirected, it looks like 75% or so redirect to town or CDP articles that have no mention of the NRHP HD, have no NRHP categories added, no NRHP template, no NRHP infobox." I feel the Tolland article should not be seen as covering the NRHP HD already. It does not mention the NRHP HD by name and refers only to a "national historic district". Further, the Tolland article is not even the NRHP HD article location desired by anyone. Polaron and you, I think, would agree that it is not the right location to add "metes and bounds" and detailed description of each NRHP contributing property. So I feel it does a disservice to would-be CT NRHP article developers, to suggest by the redirect that the NRHP HD information must be added only to the Tolland article. It is better to suggest by a redlink in the NRHP list-article that a separate article can be created. Readers interested in the Tolland Green can easily find those couple sentences in the Tolland article now if they search on "Tolland Green". And, if orderly development of CT NRHP articles is supported, in part by deleting this redirect and many others, there will sooner be an actual Tolland Green HD article with plenty more information.
- If you want to say this one does not meet the criteria laid out for type 1 in the RFC proposal, because there is mention of a "national historic district", then this one kicks into the type 2 grouping in the RFC proposal, for which, to settle matters, a stub NRHP article at Tolland Green HD must now be created (by the proposal). In the stub article, I will argue against unsourced statements being included, so actually a reader interested in Tolland Green may be less well served for a time, but ultimately there will be better info available. I would prefer to delete the redirect and be done with this one for now, but if you want to draw the line between Type 1 and Type 2 differently for this one I don't want to argue. Your choice. doncram (talk) 03:27, 28 June 2009 (UTC)
(unindent) Is the consensus then Yes, delete all 10 redirects? There has been no question raised for 6 of the redirects, and in my view questions about why the other 4 should be redirected have been answered. The important thing here is to get a decent consensus, to apply to 300 or so other cases too, not to decide just these 10 cases. doncram (talk) 14:48, 29 June 2009 (UTC)
- To summarize, no there is not a consensus to delete all 10. Six have had no objections (Andover Center, Bolton Green, Ellington Center, Hebron Center, Somers, Willington Common). The other four have had discussions about why leaving the redirect in place may be better than deletion. --Polaron | Talk 21:54, 1 July 2009 (UTC)
- Consensus is a matter of some judgment, but there having been some discussion does not mean a consensus of reasoned opinion is not apparent. Also, consensus in wikipedia is not the same as unanimous(sp?) voting. I have stated reasons why all 10 should be deleted. Pointing out that there has been some discussion, or a statement on the level of "I disagree" without reasons should not be allowed to derail reasonable, well-supported arguments for deletion of all of these redirects. I hope/expect a closing administrator will consider the quality of arguments given in the general and specific discussion of these redirects. doncram (talk) 18:00, 3 July 2009 (UTC)
- Which of the reasons for deleting a redirect are these supposedly under? As long as the target mentions the topic being redirected, there is no valid reason to delete the redirect. If you're unhappy with that, create the article or fix the text in the current target to make the redirect topic more obvious. --Polaron | Talk 03:18, 4 July 2009 (UTC)
- Out of the given list of reasons for redirects, reasons 7, 2, 4, 6 roughly apply. Reason #7, that "the redirect is a novel or very obscure synonym for an article name, it is unlikely to be useful. Implausible typos or misnomers are potential candidates for speedy deletion, if recently created." In none of these 10 cases and many others, is the NRHP HD a plausible typo for the town article to which it has been redirected. Reason #2, that the redirect causes confusion, is also paramount. These redirects have been used in the past by the creating editor to confuse and obfuscate others, and to serve in edit warring battles, for reasons I cannot understand or explain, to fight against the creation of wikipedia articles on the topics of the NRHP HDs. In the context, the confusion is the communication of message that an article will not be allowed at the NRHP HD name, which is dead wrong to convey because the NRHP HD is wikipedia notable in fact. Reason #4, that the redirect makes no sense, also applies in most cases. Also, the spirit of Reason #6, that "If the redirect is broken, meaning it redirects to an article that does not exist or itself, it can be deleted immediately" is also applicable.
- However, the given list of reasons for redirects doesn't fit precisely, because it is written in terms of describing when redirects to actual articles are justified. The 300 redirects in question are, instead, redirects away from valid wikipedia articles to other articles. They are not reasonable synonyms for those articles. The redirects only reflect the general impression by one editor that it is possible that the target article, a town article, could be the correct place in wikipedia to cover the topic of the NRHP HD, while in fact the NRHP HD name is the naturally correct place to cover it. If a redirect was to be made for these NRHP HD names, the best redirect target for all, when the article has not yet been created, would be the corresponding county or state NRHP list-article. At the NRHP list-article, however, the best use of NRHP HD name is to show as a redlink. Really the redirects are of negative value.
- I note that Polaron has started marching through another state NRHP list to set up such lousy redirects as well, escalating this discussion. At this point, I find his actions to be deliberately disrupting wikipedia, specifically the orderly process for NRHP article creation, given discussion in process here and in the RFC that is still open on this topic. doncram (talk) 07:50, 4 July 2009 (UTC)
- It is not uncommon for topics that are not yet currently developed enough to have a stand-alone article to be merged to an article on a larger topic where the smaller topic can be discussed. That is one of the uses of redirects. Are you proposing to undo all such cases throughout Wikipedia where a more specific but not yet developed topic redirects to a larger topic? If you really do believe these redirects are implausible typos, why didn't you speedy delete them under that criterion? #6 obviously doesn't apply so I don't know what you're going on about here. #4 is meant for nonsensical redirects, i.e. unrelated topics. #2 is the reason why the 6 that don't mention the district can be deleted as the targets do not mention the topic but they don't apply to the other 4. --Polaron | Talk 13:34, 4 July 2009 (UTC)
List of ways to skin a cat
Neither the humour nor the cat article covers cat skinning or mentions this adage. The redirects seem pointless and should be deleted. The article used to exist but is now in userspace at User:MartinHarper/cat skinning. SpinningSpark 07:50, 25 June 2009 (UTC)
- Delete. Those should probably have been deleted under R2 when the page was userified, instead of redirecting. Jafeluv (talk) 09:59, 26 June 2009 (UTC)
- Delete. The redirects have no useful value. --Orlady (talk) 02:27, 28 June 2009 (UTC)
- Retarget Both to Skinning.--Emmette Hernandez Coleman (talk) 20:34, 1 July 2009 (UTC)
Talk:He Merry Thoughts
June 24
Diet of Vorms
Vorms? I assume this is a redirect in case of a typo. Not needed. Gordonrox24 | Talk 23:56, 24 June 2009 (UTC)
- Yes, but not entirely; I did that because the German way of pronouncing W's is as V's, right? So it's pronounced the Diet of Vorms. However, it might be unnecessary. If you have the power, feel free to remove it. -Panther (talk) 00:00, 25 June 2009 (UTC)
- We have a German Wikipedia also.--Gordonrox24 | Talk 19:36, 25 June 2009 (UTC)
- This seems a perfectly reasonable redirect to me; I would imagine it's a pretty common misspelling given the pronunciation. ~ mazca talk 13:02, 27 June 2009 (UTC)
- Delete per nom. I've been trying to think of who might search for this redirect. Native German speakers won't, because they'll know that the name of the city "Worms" is spelled with a "W". Native English speakers won't, because they'll most likely have heard the event referred to with the English pronunciation. Maybe if a native English speaker heard it from a native German speaker...? –BLACK FALCON (TALK) 23:31, 29 June 2009 (UTC)
- If a native English speaker heard another native English speaker say it correctly, then the assumption would be Vorms. I am aware that most people would read, not hear, the term, but this is an odd pronunciation of the word W and otherwise some people might not be able to find the article. Can you create a Did you mean... on the search page, after someone has searched Diet of Vorms? Because that would work. -Panther (talk) 17:49, 1 July 2009 (UTC)
- I'm not sure if it's possible to confirm that while the redirect still exists, but I did a test on a similar phrase. Searching for "Parasitic vorm" brings up a search page with "Did you mean: Parasitic worm" at the top. The same is true of a search for "Computer vorm" (see search page). –BLACK FALCON (TALK) 18:18, 1 July 2009 (UTC)
- Keep, not implausible. meshach (talk) 07:03, 2 July 2009 (UTC)
Who is a Filipino?
Wikipedia is not Wolfram Alpha. This redirect is an implausible search term with no history worth preserving, no significant incoming links, and no traffic. Even if, by some small chance, the title is searched, "Filipino people" would be one of the first results—if not the first one. Delete. (Redirect creator notified using {{RFDNote}}.) –BLACK FALCON (TALK) 21:36, 24 June 2009 (UTC)
- Also included in this nomination: Who is an Arab → Arab. –BLACK FALCON (TALK) 21:38, 24 June 2009 (UTC)
- See related discussion at Wikipedia:Redirects for discussion/Log/2009 June 17#What is a legend ?. –BLACK FALCON (TALK) 23:08, 29 June 2009 (UTC)
- Keep, the RFD page explicitly says that lack of incoming links should not be used as a reason for deleting a redirect. These are implausible terms but not impossible and redirects are cheap. meshach (talk) 07:01, 2 July 2009 (UTC)
- I don't think that impossibility is a viable standard for judging redirects... Any search string consisting of characters that are supported by MediaWiki is technically possible and, unless there is a limitation on the number of characters that can be typed into the search box, the number of possible searches is infinite. "Lack of incoming links" was only one reason I offered (and I would never nominate a redirect for deletion for that reason alone), the others being "implausible search term" (even if searched, the target would be among the first results) and "no traffic". –BLACK FALCON (TALK) 16:07, 2 July 2009 (UTC)
Fledgling Jason Steed
List of terms in Xenosaga
- List of terms in Xenosaga → Xenosaga
- List of planets in Xenosaga → Xenosaga
- List of starships in Xenosaga → Xenosaga
- List of organizations in Xenosaga → Xenosaga
- List of major characters in Xenosaga → List of characters in the Xenosaga series
- List of minor characters in Xenosaga → List of characters in the Xenosaga series
Delete as unnecessary redirects. ZXCVBNM (TALK) 18:13, 24 June 2009 (UTC)
- Several of these pages had content that was transwikied, so I am not sure whether we may need to keep the page histories. (Added later: If copyright policy does not demand that we keep the page histories, or if they can be preserved elsewhere, then delete all per nom.) Delete List of major characters in Xenosaga, which was moved to List of characters in the Xenosaga series (the pagemove history is preserved in the page history of the target article). Delete List of minor characters in Xenosaga, which was deleted at AfD and currently has no significant undeleted page history. –BLACK FALCON (TALK) 20:47, 24 June 2009 (UTC)
- Also, please tag all of the nominated redirects with {{rfd}}. –BLACK FALCON (TALK) 20:48, 24 June 2009 (UTC)
- Done, except for the List of terms, which is protected for some reason.--ZXCVBNM (TALK) 22:17, 24 June 2009 (UTC)
- Thank you. I added the tag to the list of terms, which seems to have been protected to prevent edit-warring about its status. –BLACK FALCON (TALK) 22:21, 24 June 2009 (UTC)
- RE: Black Falcon - I don't see why the page history needs to be "preserved" if it's already been transwikied. All of this information is most likely in the Xenosaga Wiki, making detailed pages about it unnecessary. This is an encyclopedia, not a comprehensive game guide.--ZXCVBNM (TALK) 00:02, 25 June 2009 (UTC)
- Comment. I'm concerned we are deleting the histories, and therefore easily accessed content here. Redirects are cheap so I don't see the rush to remove them. As well, especially on fictional items, they help push would-be new articles back to the main article instead. The removal of these, in this case, may perpetuate a cycle of split, merge, redirect. -- Banjeboi 01:08, 25 June 2009 (UTC)
- If the linked articles can be kept in check, I don't think it would be at all necessary to split them. These redirects are unnecessary and I don't see anyone going into their edit history to dig up information when they could go to the Xenosaga Wiki to get more detailed info. I don't think they ever were, or could be encyclopedic. Is there a valid reason to keep them?--ZXCVBNM (TALK) 01:40, 25 June 2009 (UTC)
- Well, the reasons I just outlined; also I've never even heard of the Xenosaga Wiki - personally I don't use any outside wikis at all, so am uncomfortable having us rely on them in such a way. We may just have to agree to disagree here. -- Banjeboi 10:55, 25 June 2009 (UTC)
- Wikia wikis are widely referred to when fictional universes and such are concerned. While larger ones have their own Wiki, like Star Trek, the smaller series have Wikia wikis with all the game content. However, that does not matter here, since I don't see any reason people would use them as search terms.--ZXCVBNM (TALK) 02:46, 29 June 2009 (UTC)
- Banjeboi, I don't see what the issue is with losing the information contained in the edit histories. If the decision was made that the article wasn't wanted, doesn't it follow that the content in the article wasn't wanted (with the exception of information that may have been merged)? At any rate, the main article contains an interwiki link to wikia:Xenosaga:Xenosaga. --Philosopher Let us reason together. 01:48, 30 June 2009 (UTC)
- I may not be expressing it well. First off, interwiki links mean little to me. I don't think we should rely on other websites to serve as repositories as a default "waiting room". If other users find those helpful to guide our work here then more power to them. My understanding is that we had a large list, it was split apart and then re-merged - or some variation therein - this is quite common. If the redirect is an unlikely search term and an unlikely future article term and there are relatively no contributions besides the break-out article being started and then remerged back, then deleting the redirect may make sense. My hunch is that the same folks who thought the original split was a good idea may do it again or someone else will in their place. Hence just leaving the redirect in place may make the most sense here, so when editors try to use them they are instead redirected back to the main article where we want the content to grow sustainably and organically. Not sure if there is any perceived harm in keeping them. -- Banjeboi 12:34, 30 June 2009 (UTC)
- Keep all These redirects all have value for their existing usage, their history and their potential. The suggestion that they have no value as search terms is false as they still have significant traffic. Colonel Warden (talk) 08:28, 28 June 2009 (UTC)
Kaze no Naka no Shoujo Kinpatsu no Jeanie
June 23
Phobias
- Arsonphobia → -phobia
- Bathmophobia → -phobia
- Epistolophobia → -phobia
- Satanophobia → -phobia
- Phronemophobia → -phobia
- Politicophobia → -phobia
- Chrysophobia → -phobia
- Zelotypophobia → -phobia
- Hamartophobia → -phobia
- Teratrophobia → -phobia
- Rhabdophobia → -phobia
- Symmetrophobia → -phobia
- Anginophobia → -phobia
- Astrophobia → -phobia
- Atelophobia → -phobia
- Atephobia → -phobia
- Aurophobia → -phobia
- Acerophobia → -phobia
- Macroxenoglossophobia → -phobia
- Xenoglossophobe → -phobia
- Uranophobia → -phobia
These are phobias that are not mentioned in the main article and therefore should be deleted, as someone searching for the phobia will not find any more information on it by having it redirect to the main page. They are better as redlinks: when someone sees that a page does not exist (either as a redlink or on the phobia list), it has a better chance of being created. Tavix | Talk 21:26, 23 June 2009 (UTC)
- Question: Would they be considered notable enough for their own article though?Calaka (talk) 09:08, 24 June 2009 (UTC)
- Keep A series of search engine tests finds all of these phobias mentioned in various medical sites, dictionaries, or phobia lists that require proof of proper usage. Thus these all appear to be legitimate medical phobias as opposed to random made-up phobias. --Allen3 talk 10:16, 24 June 2009 (UTC)
- Keep. The redirects are useful for readers because 1) the target article provides useful information about phobias in general, even if the exact phobia the reader was looking for isn't listed, and 2) when a user is redirected to the list of phobias and sees that the one they were looking for is not on the list, they might be inclined to add their phobia to the list, thus improving the encyclopaedia. Of course, this only applies to existing phobias and not to made up ones, but I'm under the impression that the nominator is not questioning that these phobias exist. Jafeluv (talk) 10:40, 24 June 2009 (UTC)
- Keep these seem like {{R with possibilities}} and should be kept, perhaps one day these will be expanded upon. Carlossuarez46 (talk) 16:27, 24 June 2009 (UTC)
- Delete... please!
These are not "variations on a theme"; some of them are wildly different from each other. Look, as it happens I just got off the phone with a friend; his partner was diagnosed with one of these today. I googled it, and... here I am. I love WP, not a prolific editor but I have a strong (I think) Contribs list... but I am so pissed off right now I may never come back. Why? WE STILL DON'T KNOW WHAT IT MEANS! I clicked on the WikiLink of a term about which I needed information and I got... nothing. I expected a definition and discussion of the wikilinked term... and nowhere does the term even appear on the page! It's like domain squatting or something, or are we trying to appear more comprehensive than we are? These used to be red links, and they were great: a little advertisement for a new article & an acknowledgment that the word needs explanation, all in one. And as for this: "target article provides useful information about phobias in general, even if the exact phobia the reader was looking for isn't listed"... WHAT? These are not just words; they represent people with mental illnesses... imagine you search "Breast Cancer" and find a Wiki entry, and click on it, and you get a HUGE page that seems to cover every minute detail about cancer, and your search "Breast Cancer" doesn't appear, anywhere, ever. When you want average survival rates for a disease that's affecting you directly, general "possible viral or bacterial origins" of the entire class of diseases is so much worse than useless...
look, sorry, I did say I was mad, but... please, delete. Snozzwanger (talk) 00:09, 27 June 2009 (UTC)
- In that case, redir to Greek Language. Erudecorp ? * 22:38, 28 June 2009 (UTC)
- Delete per Snozzwanger, whose experience explains clearly why these are a problem. Any of them for which there is enough useful information can be made into a stub; the rest should go. There is absolutely no point redirecting to an article that says nothing about them. JohnCD (talk) 17:05, 1 July 2009 (UTC)
Aguero Genn
Please (english word)
Template:Crap crap crap!!)
Chuck norris mythology
Mortal Kombat 8 (Tentative Title)
WRNT-LP
Quincy_adams
June 20
Underline?
Book of Deuteronomy
Book of Exodus
Book of Leviticus
Wikipedia:SPEED
Big Asian Four
September 11 attacks
- 09112001 → September 11 attacks
- 11 9 2001 → September 11 attacks
- 11 September atrocities → September 11 attacks
- 9/11 tragedy → September 11 attacks
- 9-11-01 → September 11 attacks
- Pentagon bombing → September 11 attacks
- September 11, 2001 bombing → September 11 attacks
- September 11, 2001 bombings → September 11 attacks
- September 11, 2001 disaster → September 11 attacks
- September 11, 2001 Terrorist Attack/External news sites → September 11 attacks
- World Trade Center/Plane crash → September 11 attacks
- 9/11 (attack) → September 11 attacks
- 911 2001 → September 11 attacks
- Sept, 11th attacks → September 11 attacks
- 11/09/01 → September 11 attacks
There are 125 redirects to September 11 attacks. Of these, this assortment of 15 redirects is very much redundant - they are former sub-pages, unlikely search terms, inaccurate, or otherwise in breach of WP:NPOV. What is more, nothing links to them any more. Ohconfucius (talk) 16:03, 20 June 2009 (UTC)
- Delete all except Pentagon bombing, as I can see that one as a search term. Tavix | Talk 15:02, 21 June 2009 (UTC)
- Delete all except Pentagon bombing. Since various conspiracy theories claim that a bomb detonated inside the Pentagon on 11 September 2001, that one is a plausible search term. –BLACK FALCON (TALK) 18:12, 23 June 2009 (UTC)
- Strong Keep most, if not all. From what I can see, someone could search for most of these redirects, except for the ones that used to be pages, but both of the kinds I mentioned should be kept for the following reason: To quote Wikipedia:Redirects for discussion#When should we delete a redirect? "[I]f." Note that these links could come from people's bookmarks. Per wp:Redirect#Neutrality of redirects, neutral point of view does not apply to redirects. Clutter on the "what links here" is not a problem because it is trivial to click "Hide Redirects".--Emmette Hernandez Coleman (talk) 10:22, 26 June 2009 (UTC)
- In addition, Wikipedia:R#Reasons for not deleting states "[A]void deleting such redirects if [...] You risk breaking incoming or internal links by deleting the redirect. [...] [O]ld subpage links should be left alone in case there are any existing links on external pages pointing to them."--Emmette Hernandez Coleman (talk) 15:57, 29 June 2009 (UTC)
- In addition the result of the R.F.D. Wikipedia:Redirects_for_discussion/Log/2009_March_11#World Trade Center/Plane crash was keep. There were some good keep arguments there that apply here.--Emmette Hernandez Coleman (talk) 16:23, 29 June 2009 (UTC)
- Delete The September 11th attacks are well known and I doubt so many redirects are needed for such a well known event. Most people would just put September 11 attacks as a search term. --MicroX (talk) 01:19, 27 June 2009 (UTC)
- Delete all except Pentagon bombing and the redirs prefixed "Sept" without slash/sub-articling. And so long as the new improved search engine can churn up the appropriate article right at the top (and I can't imagine why it wouldn't, given at least "September 11" or "9/11"), delete 'em all (except Pentagon... for the conspiranuts). Franamax (talk) 05:42, 28 June 2009 (UTC)
I'm trying to find the first day of the month in Python, with one condition: if the current date is past the 25th of the month, then the first-day variable should hold the first day of the next month instead of the current month. I'm doing it like the following:
import datetime

todayDate = datetime.date.today()
if (todayDate - todayDate.replace(day=1)).days > 25:
    x = todayDate + datetime.timedelta(30)
    x.replace(day=1)
    print x
else:
    print todayDate.replace(day=1)
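One pitfall in the snippet above, independent of which delta is chosen: `datetime.date` objects are immutable, so `replace` returns a new date rather than modifying the receiver. The bare `x.replace(day=1)` line therefore does nothing, and `print x` shows the un-snapped date. A minimal demonstration (Python 3 syntax):

```python
import datetime

x = datetime.date(2017, 1, 30)
x.replace(day=1)        # returns a new date; x itself is unchanged
print(x)                # 2017-01-30

x = x.replace(day=1)    # rebind the name to keep the result
print(x)                # 2017-01-01
```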
This is a pithy solution.
import datetime

todayDate = datetime.date.today()
if todayDate.day > 25:
    todayDate = todayDate + datetime.timedelta(7)
print todayDate.replace(day=1)
One thing to note with the original code example is that using timedelta(30) will cause trouble if you are testing the last day of January. That is why I am using a 7-day delta.
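Both versions above lean on a fixed delta. An equivalent way to express the same rule without worrying about month lengths is to snap to the first of the month, overshoot by more than the longest possible month (32 days), and snap back. The sketch below wraps that in a helper; the function name `first_of_month` and the `cutoff` parameter are my own additions, "past the 25th" is taken to mean strictly greater than 25 (as in the condition above), and it uses Python 3 `print`:

```python
import datetime

def first_of_month(today=None, cutoff=25):
    """First day of today's month, or of the next month when
    `today` falls after the cutoff day of the month."""
    if today is None:
        today = datetime.date.today()
    if today.day > cutoff:
        # Day 1 plus 32 days always lands in the next month,
        # since no month is longer than 31 days.
        today = today.replace(day=1) + datetime.timedelta(days=32)
    return today.replace(day=1)

print(first_of_month(datetime.date(2017, 1, 26)))   # 2017-02-01
print(first_of_month(datetime.date(2017, 1, 10)))   # 2017-01-01
print(first_of_month(datetime.date(2016, 12, 31)))  # 2017-01-01
```

The December case comes out right for free, because date arithmetic carries into the next year.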
Department of the Treasury Internal Revenue Service
Cat. No. 15107u

Survivors, Executors, and Administrators
For use in preparing 2002 Returns

Contents
Important Changes
Important Reminders
Introduction
Personal Representative
    Duties
    Fees Received by Personal Representatives
Final Return for Decedent
    Filing Requirements
    Income To Include
    Exemptions and Deductions
    Credits, Other Taxes, and Payments
    Name, Address, and Signature
    When and Where To File
    Tax Forgiveness for Deaths Due to Military or Terrorist Actions
    Filing Reminders
Other Tax Information
    Tax Benefits for Survivors
    Income in Respect of the Decedent
    Deductions in Respect of the Decedent
    Estate Tax Deduction
    Gifts, Insurance, and Inheritances
    Other Items of Income
Income Tax Return of an Estate — Form 1041
    Filing Requirements
    Income To Include
    Exemption and Deductions
    Credits, Tax, and Payments
    Name, Address, and Signature
    When and Where To File
Distributions to Beneficiaries From an Estate
    Income That Must Be Distributed Currently
    Other Amounts Distributed
    Discharge of a Legal Obligation
    Character of Distributions
    How and When To Report
    Bequest
    Termination of Estate
Form 706
Comprehensive Example
    Final Return for Decedent
    Income Tax Return of an Estate — Form 1041
Checklist of Forms and Due Dates
Worksheet To Reconcile Amounts Reported
How To Get Tax Help
Index
Important Changes
Combat zone. Special rules apply if a member of the Armed Forces of the United States dies while in active service in a combat zone or from wounds, disease, or injury incurred in a combat zone. See Tax Forgiveness for Deaths Due to Military or Terrorist Actions, later. For other tax information for members of the Armed Forces, see Publication 3, Armed Forces’ Tax Guide.

Benefits for public safety officers’ survivors. For tax years beginning after 2001, a survivor annuity received by the spouse, former spouse, or child of a public safety officer killed in the line of duty generally will be excluded from the recipient’s income regardless of the date of the officer’s death. Survivor benefits received before 2002 were excludable only if the officer died after 1996. The provision applies to a chaplain killed in the line of duty after September 10, 2001. For more information, see Public safety officers, later.

Rollovers by surviving spouses. For distributions after 2001, an employee’s surviving spouse who receives an eligible rollover distribution may roll it over into an eligible retirement plan, including an IRA, a qualified plan, a section 403(b) annuity, or a section 457 plan.
• Certain death benefits paid by an employer to the survivor of an employee because the employee died as a result of a terrorist attack..
• Debt cancellations made after September
10, 2001, and before January 1, 2002, if the debts were cancelled because an individual died as a result of the September 11 attacks or the anthrax attacks.
• Payments from the September 11th Victim
Compensation Fund of 2001. The Act also reduces the estate tax of individuals who die as a result of a terrorist attack. 4 to 6 weeks to get an ITIN. An ITIN is for tax use only. It does not entitle the holder to social security benefits or change the holder’s employment or immigration status under U.S. law..
Useful Items
You may want to see: Publication ❏ 950 Introduction to Estate and Gift Taxes
❏ 3920 Tax Relief for Victims of Terrorist Attacks Form (and Instructions) ❏ 1040 U.S. Individual Income Tax Return ❏ 1041 U.S. Income Tax Return for Estates and Trusts ❏ 706 United States Estate (and Generation-Skipping Transfer) Tax Return
❏ 1310 Statement of Person Claiming Refund Due a Deceased Taxpayer See How To Get Tax Help near the end of this publication for information about getting publications and forms..
Important Reminders
Specified terrorist victim. The Victims of Terrorism Tax Relief Act of 2001 is explained in Publication 3920, Tax Relief for Victims of Terrorist Attacks. Under the Act, the federal income tax liability of those killed in the following attacks (specified terrorist victim) is forgiven for certain tax years.
Introduction
This publication is designed to help those in charge of the property (estate) of an individual who has died (decedent). It shows them how to complete and file federal income tax returns and points out their responsibility to pay any taxes due. A comprehensive example, using tax forms, is included near the end of this publication. Also included at the end of this publication are the following items.
• The April 19, 1995, terrorist attack on the
Alfred P. Murrah Federal Building (Oklahoma City).
• A checklist of the forms you may need and
their due dates.
• The September 11, 2001, terrorist attacks. • The terrorist attacks involving anthrax occurring after September 10, 2001, and before January 1, 2002. The Act also exempts from federal income tax the following types of income.
•. Comments and suggestions. We welcome your comments about this publication and your suggestions for future editions. You can e-mail us while visiting our web site at. You can write to us at the following address:
• Qualified disaster relief payments made
after September 10, 2001, to cover personal, family, living, or funeral expenses incurred because of a terrorist attack.
• Certain disability payments received in tax
years ending after September 10, 2001, for injuries sustained in a terrorist attack. Page 2
Duties
The primary duties of a personal representative are to collect all the decedent’s assets, pay the creditors, and distribute the remaining assets to the heirs or other beneficiaries. The personal representative also must perform the following duties.
• File any income tax return and the estate tax return when due.

Identification number. The first action you should take if you are the personal representative for the decedent is to apply for an employer identification number (EIN) for the estate. You should apply for this number as soon as possible because you need to enter it on returns, statements, and other documents that you file concerning the estate. You also must give the number to payers of interest and dividends and other payers who must file a return concerning the estate.

You must apply for the number using Form SS-4, Application for Employer Identification Number. Generally, if you apply by mail, it takes about 4 weeks to get your EIN. However, you can apply by phone and get it immediately. (You still need Form SS-4.) See the form instructions for how to apply.

Payers of interest and dividends report amounts on Forms 1099 using the identification number of the person to whom the account is payable. After a decedent's death, the number must be provided to the payer and used to report the interest on Form 1099-INT, Interest Income. If the interest is payable to a surviving joint owner, the survivor's identification number must be provided to the payer and used to report the interest.

The deceased individual's identifying number must not be used to file an individual tax return after the decedent's final tax return. It also must not be used to make estimated tax payments for a tax year after the year of death.

Penalty. If you do not include the EIN on any return, statement, or other document, you are liable for a penalty for each failure, unless you can show reasonable cause. You also are liable for a penalty if you do not give the EIN to another person, or if you do not include the taxpayer identification number of another person on a return, statement, or other document.
Notice of fiduciary relationship. The term fiduciary means any person acting for another person. It applies to persons who have positions of trust on behalf of others. A personal representative for a decedent's estate is a fiduciary.

If you are appointed to act in any fiduciary capacity for another, you must file a written notice with the IRS stating this. Form 56, Notice Concerning Fiduciary Relationship, can be used for this purpose. The instructions and other requirements are given on the back of the form. You should file the written notice (or Form 56) as soon as all of the necessary information (including the EIN) is available. It notifies the IRS that, as the fiduciary, you are assuming the powers, rights, duties, and privileges of the decedent, and allows the IRS to mail to you all tax notices concerning the person (or estate) you represent. The notice remains in effect until you notify the appropriate IRS office that your relationship to the estate has terminated.

Termination notice. When you are relieved of your responsibilities as personal representative, you must advise the IRS office where you filed the written notice (or Form 56) either that the estate has been terminated or that your successor has been appointed. Use Form 56 for the termination notice by completing the appropriate part on the form. If another person has been appointed to succeed you as the personal representative, you should give the name and address of your successor.

Form 4810. Form 4810, Request for Prompt Assessment Under Internal Revenue Code Section 6501(d), can be used for making …

Request for discharge from personal liability for tax. An executor can make a written request for discharge from personal liability for a decedent's income and gift taxes. The request must be made after the returns for those taxes are filed. It must clearly indicate that the request is for discharge from personal liability under section 6905 of the Internal Revenue Code.

CAUTION: The IRS remains able to assess tax deficiencies against the executor to the extent that he or she still has any of the decedent's property.

Insolvent estate. …
Fees Received by Personal Representatives
Final Return for Decedent

… 2002, before filing her 2001 tax return. Her personal representative must file her 2001 return by April 15, 2002. Her final tax return is due April 15, 2003.
This section explains how some types of income are reported on the final return. For more information about accounting methods, see Publication 538, Accounting Periods and Methods.
Filing Requirements
Under an Accrual Method
Generally, under an accrual method of accounting, income is reported when earned. If the decedent used an accrual method, only the income items normally accrued before death are included on the final return.

Partnership Income

If the decedent was a partner in a partnership, include on the final return the decedent's share of the partnership's items of income, loss, deduction, and credit for the following periods.

1) The partnership's tax year that ended within or with the decedent's final tax year (the year ending on the date of death).

2) The period, if any, from the end of the partnership's tax year in (1) to the decedent's date of death.

Example. Mary Smith was a partner in XYZ partnership and reported her income on a tax year ending December 31. The partnership uses a tax year ending June 30. Mary died August 31, 2002, and her estate established its tax year through August 31. The distributive share of partnership items based on the decedent's partnership interest is reported as follows.

• Final Return for the Decedent — January 1 through August 31, 2002, includes XYZ partnership items from (a) the partnership tax year ending June 30, 2002, and (b) the partnership tax year beginning July 1, 2002, and ending August 31, 2002 (the date of death).

• Income Tax Return of the Estate — September 1, 2002, through August 31, 2003, includes XYZ partnership items for the period September 1, 2002, through June 30, 2003.

Example. Mr. Green died before filing his tax return. You were appointed the personal …
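For illustration, the two-period rule in the Mary Smith example can be sketched in Python. This is a minimal sketch, not anything from the publication: the function name is mine, and it assumes a regular 12-month partnership tax year.

```python
from datetime import date, timedelta

def partnership_periods_on_final_return(death, pship_year_end):
    """Given the date of death and the last partnership tax-year end on or
    before it, return the two periods whose partnership items are reported
    on the decedent's final return (assumes a 12-month partnership year)."""
    # (1) The partnership tax year that ended within or with the final tax year.
    year1_start = pship_year_end.replace(year=pship_year_end.year - 1) + timedelta(days=1)
    period1 = (year1_start, pship_year_end)
    # (2) The stub period, if any, from the end of that year to the date of death.
    period2 = None
    if death > pship_year_end:
        period2 = (pship_year_end + timedelta(days=1), death)
    return period1, period2
```

With Mary's facts (death August 31, 2002; partnership year ending June 30, 2002) the sketch yields the year July 1, 2001 through June 30, 2002, plus the stub July 1 through August 31, 2002, matching the example.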
S Corporation Income
If the decedent was a shareholder in an S corporation, include on the final return the decedent's share of the S corporation's items of income, loss, deduction, and credit for the following periods.

1) The corporation's tax year that ended within or with the decedent's final tax year (the year ending on the date of death).

2) The period, if any, from the end of the corporation's tax year in (1) to the decedent's date of death.
Exemptions and Deductions
Exemptions
You can claim the decedent’s personal exemption on the final income tax return. If the decedent was another person’s dependent (for example, a parent’s), you cannot claim the personal exemption on the decedent’s final return.
Standard Deduction

…

Example. Richard died in 2002, after incurring $800 in medical expenses. Of that amount, $500 was incurred in 2001 and $300 was incurred in 2002. Richard itemized his deductions when he filed his 2001 income tax return. The personal representative of the estate paid the entire $800 liability in August 2002. The personal representative may file an amended return (Form 1040X) for 2001 claiming the $500 medical expense as a deduction, subject to the 7.5% limit. The $300 of expenses incurred in 2002 can be claimed on the final income tax return for 2002.
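The 7.5% limit mentioned in the example works by subtracting 7.5% of adjusted gross income from the claimed expenses; only the excess is deductible. A minimal sketch of that arithmetic (the AGI figure in the comment is hypothetical, not from the example):

```python
def deductible_medical(expenses, agi):
    """Itemized medical deduction after the 7.5%-of-AGI floor
    (the limit in effect for the years discussed here)."""
    floor = 0.075 * agi
    # Only the portion of expenses above the floor is deductible.
    return max(0.0, expenses - floor)

# With the example's $500 of 2001 expenses and a hypothetical AGI of $4,000,
# the floor is $300, leaving $200 deductible.
```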
Credits, Other Taxes, and Payments
This section includes brief discussions of some of the tax credits, types of taxes that may be owed, income tax withheld, and estimated tax payments that are reported on the final return of a decedent.
Credits
You can claim on the final income tax return any tax credits that applied to the decedent before death. Some of these credits are discussed next.

Self-employment tax. Self-employment tax must be paid on the final return if either of the following applied to the decedent in the year of death.

1) Net earnings from self-employment (excluding income described in (2)) were $400 or more.

2) Wages from services performed as a church employee were $108.28 or more.
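The two dollar thresholds above can be checked mechanically. A small sketch (the function name is mine, not the publication's):

```python
def self_employment_tax_due(net_se_earnings, church_employee_wages):
    """True if either trigger applies: net self-employment earnings of
    $400 or more, or church-employee wages of $108.28 or more."""
    return net_se_earnings >= 400 or church_employee_wages >= 108.28
```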
Payments of Tax

The income tax withheld from the decedent's salary, wages, pensions, or annuities, and the amount paid as estimated tax, for example, are credits (advance payments of tax) that you must claim on the final return.

Name, Address, and Signature

The …

Third party designee. …

When and Where To File

The final income tax return is due at the same time the decedent's return would have been due had death not occurred.

Tax Forgiveness for Deaths Due to Military or Terrorist Actions

The decedent's income tax liability may be forgiven if his or her death was due to service in a combat zone or to military or terrorist actions.

The Victims of Terrorism Tax Relief Act of 2001 provides tax relief for those injured or killed as a result of terrorist attacks, certain survivors of those killed as a result of terrorist attacks, and others who were affected by terrorist attacks. For information on that Act, see Publication 3920.

For combat zone service, the deadline for filing a claim is extended by the total of:

1) The amount of time served in the combat zone (including any period in which the individual was in missing status), plus

2) The period of continuous qualified hospitalization for injury from service in the combat zone, if any, plus

3) The next 180 days.

Qualified hospitalization means any hospitalization outside the United States and any hospitalization in the United States of not more than 5 years.

Military or Terrorist Actions

The decedent's income tax liability is forgiven if, at death, he or she was a military or civilian employee of the United States who died because of wounds or injury incurred:

• While a U.S. employee, and

• In a military or terrorist action.

For example, the income tax liability of an employee who died in 2002 because of wounds incurred while a U.S. employee in a terrorist attack that occurred in 1989 will be forgiven for 2002 and for all prior tax years in the period 1988–2001. Refunds are allowed for the tax years for which the period for filing a claim for refund has not ended, as discussed later.

Military or terrorist action defined. A military or terrorist action means the following.

• Any terrorist activity that most of the evidence indicates was directed against the United States or any of its allies.

• Any military action involving the U.S. Armed Forces and resulting from violence or aggression against the United States or any of its allies, or the threat of such violence or aggression. Military action does not include training exercises. Any multinational force in which the United States is participating is treated as an ally of the United States.

Filing a claim. Use the following procedures to file a claim.

1) …

2) If a U.S. individual income tax return has been filed, you should make a claim for refund by filing Form 1040X. You must file a separate Form 1040X for each year in question.

You must file these returns and claims at the following address for regular mail (U.S. Postal Service):

Internal Revenue Service
P.O. Box 4053
Woburn, MA 01888

If you have to attach Form 1310, you must have proof of death. The proof of death must be an authentic copy of either the death certificate or the formal notification from the appropriate government office (such as the Department of Defense) informing the next of kin of the decedent's death. Keep the proof of death with your records. Do not attach it to Form 1310.

If a joint return was filed, only the decedent's part of the joint income tax liability is eligible for forgiveness. Determine the decedent's tax liability as follows.

1) Figure the income tax for which the decedent would have been liable if a separate return had been filed.

2) Figure the income tax for which the spouse would have been liable if a separate return had been filed.

3) Multiply the joint tax liability by a fraction. The numerator of the fraction is the amount in (1), above. The denominator of the fraction is the total of (1) and (2).

The amount in (3) above is the decedent's tax liability that is eligible for forgiveness.
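Steps (1) through (3) amount to prorating the joint liability by the decedent's share of the combined separate-return taxes. A sketch with hypothetical figures (none of the dollar amounts come from the publication):

```python
def decedent_share_of_joint_tax(joint_tax, separate_tax_decedent, separate_tax_spouse):
    """Step 3: multiply the joint tax liability by the fraction
    (decedent's separate-return tax) / (sum of both separate-return taxes)."""
    fraction = separate_tax_decedent / (separate_tax_decedent + separate_tax_spouse)
    return joint_tax * fraction

# Hypothetical: joint tax $6,000; separate-return taxes $3,000 and $1,000.
# The decedent's share eligible for forgiveness is $6,000 * 3,000/4,000 = $4,500.
```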
Filing Reminders
To minimize the time needed to process the decedent's final return and issue any refund, be sure to follow these procedures.

1) Write "DECEASED," the decedent's name, and the date of death across the top of the tax return.

2) If a personal representative has been appointed, the personal representative must sign the return. If it is a joint return, the surviving spouse must also sign it.

3) If you are the decedent's spouse filing a joint return with the decedent and no personal representative has been appointed, write "Filing as surviving spouse" in the area where you sign the return.

4) If no personal representative has been appointed and if there is no surviving spouse, the person in charge of the decedent's property must file and sign the return as "personal representative."

5) To claim a refund for the decedent, do the following.

a) If you are the decedent's spouse filing a joint return with the decedent, file only the tax return to claim the refund.

b) If you are the personal representative and the return is not a joint return filed with the decedent's surviving spouse, file the return and attach a copy of the court certificate showing your appointment.

c) If you are not filing a joint return as the surviving spouse and a personal representative has not been appointed, file the return and attach Form 1310.
Other Tax Information

This section contains information about the effect of an individual's death on the income tax liability of the survivors (including widows and widowers), the beneficiaries, and the estate.

If the decedent qualified as your dependent for the part of the year before death, you can claim the exemption for the dependent on your tax return, regardless of when death occurred during the year. If the decedent was your qualifying child, you may be able to claim the child tax credit or the earned income credit.

Qualifying widow(er). You may be eligible to file as a qualifying widow(er) with dependent child if you meet all of the following requirements.

• You were entitled to file a joint return with your spouse for the year of death — whether or not you actually filed jointly.

• You did not remarry before the end of the current tax year.

• You have a child, stepchild, or foster child who qualifies as your dependent for the tax year.

• You provide more than half the cost of maintaining your home, which is the principal residence of that child for the entire year except for temporary absences.

Example. William Burns' wife died in 2000. Mr. Burns has not remarried and continued throughout 2001 and 2002 to maintain a home for himself and his dependent child. For 2000 he was entitled to file a joint return for himself and his deceased wife. For 2001 and 2002, he qualifies to file as a qualifying widow(er) with dependent child. For later years, he may qualify to file as a head of household.

Figuring your tax. Check the box on line 5 (Form 1040 or 1040A) under filing status on your tax return and enter the year of death in the parentheses. Use the Tax Rate Schedule or the column in the Tax Table for Married filing jointly, which gives you the split-income benefits.

The last year you can file jointly with, or claim an exemption for, your deceased spouse is the year of death.

Joint return filing rules. If you are the surviving spouse and a personal representative is handling the estate for the decedent, you should coordinate filing your return for the year of death with this personal representative. See Joint Return earlier under Final Return for Decedent.

Income in Respect of the Decedent

All income that the decedent would have received had death not occurred, that was not properly includible on the final return, discussed earlier, is income in respect of the decedent.

If the decedent is a specified terrorist victim (see Important Reminders), any income received after the date of death and before the end of the decedent's tax year (determined without regard to death) is excluded from the recipient's gross income. This exclusion does not apply to certain income. For more information, see Publication 3920.
How To Report
Income in respect of a decedent must be included in the income of one of the following.

• The decedent's estate, if the estate receives it.

• The beneficiary, if the right to income is passed directly to the beneficiary and the beneficiary receives it.

• Any person to whom the estate properly distributes the right to receive it.

If you have to include income in respect of the decedent in your income, you may be able to claim a deduction for the estate tax paid on that income. See Estate Tax Deduction, later.

Character of income. The character of the income you receive in respect of a decedent is the same as it would be to the decedent if he or she were alive. If the income would have been a capital gain to the decedent, it will be a capital gain to you.

Transfer of right to income. If you transfer your right to income in respect of a decedent, you must include in your income the greater of:

• The amount you receive for the right, or

• The fair market value of the right at the time of the transfer.

Transfer defined. A transfer for this purpose includes a sale, exchange, or other disposition, the satisfaction of an installment obligation at other than face value, or the cancellation of an installment obligation.

Installment obligations. If the decedent had sold property using the installment method and you collect payments on an installment obligation you …

Wages. …the decedent. The income is not reduced by any amounts withheld by the employer. If the income is $600 or more, the employer should report it in box 3 of Form 1099-MISC.

Partnership income. If the partner who died had been receiving payments representing a distributive share or guaranteed payment in liquidation of the partner's interest in a partnership, the remaining payments made to the estate or other successor in interest are income in respect of the decedent. The estate or the successor …

Example 1. …

Example 2. Assume the same facts as in Example 1, except that Frank used the accrual method of accounting. The amount accrued from the sale of the apples would be included on his final return. Neither the estate nor the widow would realize income in respect of the decedent when the money is later paid.

Example 3. On February 1, George High, a cash method taxpayer, sold his tractor for $3,000, payable March 1 of the same year. His adjusted basis in the tractor was $2,000. Mr. High died on February 15, before receiving payment. The gain to be reported as income in respect of the decedent is the $1,000 difference between the decedent's basis in the property and the sale proceeds. In other words, the income in respect of the decedent is the gain the decedent would have realized had he lived.

Example 4. …the decedent. None of the payments were includible on Cathy's final return. The estate must include in its income the two installments it received, and you must include in your income each of the three installments as you receive them.

Example 5. …

U.S. savings bonds acquired from decedent. If series EE or series I U.S. savings bonds that were owned by a cash method individual who had chosen to report the interest each year (or by an accrual method individual) … savings and loan institutions, or your nearest Federal Reserve Bank. You also can get information by writing to the following address.

Bureau of the Public Debt
P.O. Box 1328
Parkersburg, WV 26106-1328

Or, on the Internet, visit the following site.

If the bonds transferred because of death were owned by a cash method individual who had not chosen to report the interest each year and had purchased the bonds entirely with personal funds, interest earned before death must be reported in one of the following ways.

1) The person (executor, administrator, etc.) who must file the final income tax return of the decedent can elect to include in it all of the interest earned on the bonds before the decedent's death. The transferee (estate or beneficiary) then includes in its return only the interest earned after the date of death.

2) If the election in (1), above, was not made, the interest earned to the date of death is income in respect of the decedent and is not included in the decedent's final return. In this case, all of the interest earned before and after the decedent's death is income to the transferee (estate or beneficiary). A transferee who uses the cash method of accounting and who has not chosen to report the interest annually may defer reporting any of it until the bonds are cashed or the date of maturity, whichever is earlier.
In the year the interest is reported, the transferee may claim a deduction for any federal estate tax paid that arose because of the part of interest (if any) included in the decedent's estate.

Example 1. Your uncle, a cash method taxpayer, died and left you a $1,000 series EE bond. He had bought the bond for $500 and had not chosen to report the increase in value each year. At the date of death, interest of $94 had accrued on the bond, and its value of $594 at date of death was included in your uncle's estate. Your uncle's personal representative did not choose to include the $94 accrued interest.

Example 2. …

The interest accrued on U.S. Treasury bonds owned by a cash method taxpayer and redeemable for the payment of federal estate taxes that was not received as of the date of the individual's death is income in respect of the decedent. The interest, however, is taxable income and must be included in the income of the respective recipients.

Interest accrued on savings certificates. The interest accrued on savings certificates (redeemable after death without forfeiture of interest) that is for the period from the date of the last interest payment and ending with the date of the decedent's death, but not received as of that date, is income in respect of a decedent. Interest for a period after the decedent's death that becomes payable on the certificates after death is not income in respect of a decedent, but is taxable income includible in the income of the respective recipients.

Inherited IRAs. …deduction the decedent is the total amount included in income less the income earned after Greg's death. For more information on inherited IRAs, see Publication 590.

Roth IRAs. …
…comes 1 year after the decedent's date of death. An estate tax deduction, discussed later, applies to the amount included in income by a beneficiary other than the decedent's spouse or family member.

…A distribution cannot be a qualified distribution unless it is made after 2002. …70½… or can treat the Roth IRA as his or her own Roth IRA. Part of any distribution the decedent … Additional earnings are the income of the beneficiary. For more information on Roth IRAs, see Publication 590.

…Return for Decedent, earlier. The age 30 limitation does not apply if the individual for whom the account was established or the beneficiary that acquires the account is an individual with special needs. This includes an individual who, because of a physical, mental, or emotional condition (including learning disability), requires additional time to complete his or her education.

Archer MSA. The treatment of an Archer MSA or a Medicare+Choice MSA …

Deductions in Respect of the Decedent

Deductions in respect of the decedent are allowed, when paid, to:

• The estate, or

• The person who acquired an interest in the decedent's property (subject to such obligations) because of the decedent's death, if the estate was not liable for the obligation.

Similar treatment is given to the foreign tax credit. A beneficiary who must pay a foreign tax on income in respect of a decedent will be entitled to claim the foreign tax credit.

Depletion. The deduction for percentage depletion is allowable only to the person (estate or beneficiary) who receives income in respect of … any depletion deduction to which the decedent was entitled at the time of death would be allowable on the decedent's final return, and no depletion deduction in respect of the decedent would be allowed to anyone else. For more information about depletion, see chapter 10 in Publication 535, Business Expenses.

Estate Tax Deduction

The estate tax deduction can be claimed only for the same tax year in which the income in respect of the decedent must be included in the recipient's income. (This also is true for income in respect of a prior decedent.) Individuals can claim this deduction only as an itemized deduction, on line 27 of Schedule A (Form 1040). This deduction is not subject to the 2% limit on miscellaneous itemized deductions. Estates can claim the deduction on the line provided for the deduction on Form 1041. For the alternative minimum tax computation, the deduction is not included in the itemized deductions that are an adjustment to taxable income.

If the income in respect of the decedent is capital gain income, you must reduce the gain, but not below zero, by any deduction for estate tax paid on such gain. This applies in figuring the following.

• The maximum tax on net capital gain.

• The 50% exclusion for gain on small business stock.

• The limitation on capital losses.

Computation

To figure a recipient's estate tax deduction, determine —

• The estate tax that qualifies for the deduction, and

• The recipient's part of the deductible tax.
Deductible estate tax. The estate tax is the tax on the taxable estate, reduced by any credits allowed. The estate tax qualifying for the deduction is the part of the net value of all the items in the estate that represents income in respect of the decedent. Net value is the excess of the items of income in respect of the decedent over the items of expenses in respect of the decedent. The deductible estate tax is the difference between the actual estate tax and the estate tax determined without including net value.

Example 1. Jack Sage used the cash method of accounting. At the time of his death, he was entitled to receive $12,000 from clients for his services and he had accrued bond interest of $8,000, for a total income in respect of the decedent of $20,000. He also owed $5,000 for business expenses for which his estate is liable. The income and expenses are reported on Jack's estate tax return.
If the decedent’s spouse or other family member is the designated beneficiary of the decedent’s account, the Coverdell ESA be-). Recipient’s deductible part. Figure the recipient’s part of the deductible estate tax by dividing the estate tax value of the items of income in respect of the decedent included in the recipient’s income (the numerator) by the total value of all items included in the estate that represents income in respect of the decedent (the denominator). If the amount included in the recipient’s income is less than the estate tax value of the item, use the lesser amount in the numerator. Example 2. From an Estate, later.
A certification must have been made by a licensed health care practitioner within the previous 12 months.

Exclusion limited. If the insured was a chronically ill individual, your exclusion of accelerated death benefits is limited to the cost you incurred in providing qualified long-term care services for the insured. In determining the cost incurred, do not include amounts paid or reimbursed by insurance or otherwise. Subject to certain limits, you can exclude payments received on a periodic basis without regard to your costs.

Insurance received in installments. If you receive life insurance proceeds in installments, you can exclude part of each installment from your income. To determine the part excluded, divide the amount held by the insurance company (generally the total lump sum payable at the death of the insured person) by the number of installments to be paid. Include anything over this excluded part in your income as interest.

Specified number of installments. If you will receive a specified number of installments under the insurance contract, figure the part of each installment you can exclude by dividing the amount held by the insurance company by the number of installments to which you are entitled. A secondary beneficiary, in case you die before you receive all of the installments, is entitled to the same exclusion.

Example. As beneficiary, you choose to receive $40,000 of life insurance proceeds in 10 annual installments of $6,000. Each year, you can exclude from your income $4,000 ($40,000 ÷ 10) as a return of principal. The balance of the installment, $2,000, is taxable as interest income.

Specified amount payable. If each installment you receive under the insurance contract is a specific amount based on a guaranteed rate of interest, but the number of installments you will receive is uncertain, the part of each installment that you can exclude from income is the amount held by the insurance company divided by the number of installments necessary to use up the principal and guaranteed interest in the contract.

Example. …

…are not taxable either to the veteran or to the beneficiaries. Interest on dividends left on deposit with the Department of Veterans Affairs is not taxable.

Life insurance proceeds. Life insurance proceeds paid to you because of the death of the insured (or because the insured is a member of the U.S. uniformed services who is missing in action) are not taxable unless the policy was turned over to you for a price. This is true even if the proceeds are paid under an accident or health insurance policy or an endowment contract. If the proceeds are received in installments, see the discussion under Insurance received in installments, earlier.

Terminally ill individual. A terminally ill individual is one who has been certified by a physician as having an illness or physical condition that reasonably can be expected to result in death in 24 months or less from the date of certification.

Chronically ill individual. A chronically ill individual is one who has been certified as one of the following.
The deduction is the estate tax qualifying for the deduction multiplied by the following fraction:

Value included in your income ÷ Total value of income in respect of decedent

($12,000 ÷ $20,000) × $4,620 = $2,772

Estates. The estate tax deduction allowed an estate is figured in the same manner as just discussed. However, any income in respect of a decedent received by the estate during the tax year is reduced by any such income that is properly paid, credited, or required to be distributed by the estate to a beneficiary. The beneficiary would include such distributed income in respect of a decedent for figuring the beneficiary's deduction.

Surviving annuitants. See section 1.691(d)-1 of the regulations.
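The proration above is just a fraction applied to the estate tax attributable to the income in respect of a decedent. A minimal Python sketch (the function name is illustrative, not from the publication):

```python
def estate_tax_deduction(income_included, total_ird, attributable_estate_tax):
    """Prorate the estate tax qualifying for the deduction by the share of
    income in respect of a decedent (IRD) included in income this year."""
    return income_included / total_ird * attributable_estate_tax

# Figures from the example: $12,000 of the $20,000 total IRD was included
# in income, and $4,620 of estate tax was attributable to the IRD.
print(round(estate_tax_deduction(12_000, 20_000, 4_620)))  # 2772
```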
• An individual who, for at least 90 days, is unable to perform at least two activities of daily living without substantial assistance due to a loss of functional capacity.

• An individual who requires substantial supervision to be protected from threats to health and safety due to severe cognitive impairment.

Gifts, Insurance, and Inheritances

Property received as a gift, bequest, or inheritance is not included in your income. However, if property you receive in this manner later produces income, such as interest, dividends, or rents, that income is taxable to you.
The amount held by the insurance company for this purpose is reduced by the actuarial value of the guarantee.

Example. As beneficiary, you choose to receive the $50,000 proceeds from a life insurance contract under a life-income option.

Interest option on insurance. If an insurance company pays you interest only on proceeds from life insurance left on deposit, the interest you are paid is taxable.

Flexible premium contracts. A life insurance contract (including any qualified additional benefits) of the Internal Revenue Code.
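The installment exclusions discussed above are straightforward prorations of the amount held by the insurer. A hedged sketch of the specified-number-of-installments case (names are illustrative):

```python
def excluded_per_installment(amount_held, installments):
    """Part of each installment excluded from income as a return of
    principal: amount held by the insurer / number of installments."""
    return amount_held / installments

# From the example: $40,000 of proceeds paid in 10 annual installments
# of $6,000 each.
excluded = excluded_per_installment(40_000, 10)  # 4000.0 excluded each year
interest = 6_000 - excluded                      # 2000.0 taxable as interest
```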
Basis of Inherited Property

Your basis in property you inherit from a decedent is generally one of the following.

• The fair market value (FMV) of the property at the date of the individual's death.

• The FMV on the alternate valuation date (discussed in the instructions for Form 706), if so elected by the personal representative for the estate.

• The value under the special-use valuation method for real property used in farming or other closely held business (see Special-use valuation, later), if so elected by the personal representative.

• The decedent's adjusted basis in land to the extent of the value that is excluded from the decedent's taxable estate as a qualified conservation easement (discussed in the instructions for Form 706).

Exception for appreciated property.

Appreciated property. Appreciated property is property that had an FMV greater than its adjusted basis on the day it was transferred to the decedent.

Special-use valuation. The estate's basis (determined under the special-use valuation method) immediately before your purchase increased by Form 706.

Increased basis for special-use valuation property. Under certain conditions, some or all of the estate tax benefits obtained by using the special-use valuation will be subject to recapture. Generally, an additional estate tax must be paid by the qualified heir if, within 10 years of the decedent's death, the property is disposed of or is no longer used for a qualifying purpose. Interest accrues from the date 9 months after the decedent's death until the date you pay the recapture tax. For more information on the recapture tax, see the Instructions for Form 706-A.

S corporation stock. The basis of inherited S corporation stock must be reduced if there is income in respect of a decedent attributable to that stock.

Joint interest. Figure the surviving tenant's new basis of property that was jointly owned.

Example. Fred and Anne Maple (brother and sister)

Interest Anne bought with her own funds . . . . $15,000
Interest acquired from Fred (3/4 of $100,000) .  75,000
                                                $90,000
Minus: 1/2 of $20,000 depreciation . . . . . . . 10,000
Anne's basis . . . . . . . . . . . . . . . . .  $80,000

Qualified joint interest.

Example.

Interest Dan bought with his own funds (1/2 of $60,000) . $30,000
Interest acquired from Diane (1/2 of $100,000) . . . . . . 50,000
                                                          $80,000
Minus: 1/2 of $20,000 depreciation . . . . . . . . . . . . 10,000
Dan's basis . . . . . . . . . . . . . . . . . . . . . . . $70,000

More information. See Publication 551, Basis of Assets, for more information on basis. If you and your spouse lived in a community property state, see the discussion in that publication about figuring the basis of your community property after your spouse's death.

Depreciation. If you can depreciate property you inherited, you generally must use the modified accelerated cost recovery system (MACRS) to determine depreciation. For joint interests and qualified joint interests, you must make the following computations to figure depreciation.

• The first computation is for your original basis in the property.

• The second computation is for the inherited part of the property.

Continue depreciating your original basis under the same method you had used in previous years. Depreciate the inherited part using MACRS. MACRS consists of two depreciation systems, the General Depreciation System (GDS) and the Alternative Depreciation System (ADS). For more information on MACRS, see Publication 946, How To Depreciate Property.

Substantial valuation misstatement. If the value or adjusted basis of any property claimed on an income tax return is 200% or more of the amount determined to be the correct amount, there is a substantial valuation misstatement. If this misstatement results in an underpayment of tax of more than $5,000, an addition to tax of 20% of the underpayment can apply. The penalty increases to 40% if the value or adjusted basis is 400% or more of the amount determined to be the correct amount. If the value shown on the estate tax return is overstated and you use that value as your basis in the inherited property, you could be liable for the addition to tax. The IRS may waive all or part of the addition to tax if you have a reasonable basis for the claimed value. The fact that the adjusted basis on your income tax return is the same as the value on the estate tax return is not enough to show that you had a reasonable basis to claim the valuation.

Holding period. If you sell or dispose of inherited property that is a capital asset, you have a long-term gain or loss from property held for more than 1 year, regardless of how long you held the property.

Property distributed in kind.
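The joint interest basis examples above follow one pattern: the survivor's own cost, plus the portion included in the decedent's estate at FMV, minus the survivor's share of depreciation already allowed. A minimal sketch (the function name is illustrative, not from the publication):

```python
def survivor_basis(own_funds, from_decedent, depreciation_share):
    """New basis for the surviving joint tenant: own cost, plus the value
    acquired from the decedent, minus the survivor's depreciation share."""
    return own_funds + from_decedent - depreciation_share

# Dan's qualified joint interest: 1/2 of the $60,000 cost, plus 1/2 of the
# $100,000 FMV acquired from Diane, minus 1/2 of $20,000 depreciation.
print(survivor_basis(30_000, 50_000, 10_000))  # 70000
```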
estate.

If all or part of the repayment is waived, that amount is not included in income.

Survivor benefits.
that inherited IRA. For more information about IRAs, see Publication 590. Estate income. Estates may have to pay federal income tax. Beneficiaries may have to pay tax on their share of estate income. However, there is never a double tax. See Distributions to Beneficiaries From an Estate, later.
Income Tax Return of an Estate— Form 1041. As the personal representative, you choose the estate’s accounting period when you file its first Form 1041. The estate’s first tax year can be any period that ends on the last day of a month and does not exceed 12 months. Once you choose the tax year, you generally cannot change it without IRS approval. Also, on the first income tax return, you must choose the accounting method (cash, accrual, or other) you will use to report the estate’s income. Once you have used a method, you ordinarily cannot change it without IRS approval. For a more complete discussion of accounting periods and methods, see Publication 538, Accounting Periods and Methods.
• A specific bequest (unless it must be distributed in more than three installments).

• Real property, the title to which passes directly to you under local law.

For information on an estate's recognized gain or loss on distributions in kind, see Income To Include under Income Tax Return of an Estate — Form 1041, later.

The death benefit exclusion does not apply if any of the following statements are true.

• The death was caused by the intentional misconduct of the officer or by the officer's intention to cause such death.

• The officer was voluntarily intoxicated at the time of death.

• The officer was performing his or her duties in a grossly negligent manner at the time of death.

Salary or wages. Salary or wages paid after the employee's death are usually taxable income to the beneficiary. See Wages, earlier, under Specific Types of Income in Respect of a Decedent.

Rollover distributions. An employee's surviving spouse who receives an eligible rollover distribution may roll it over tax free into an IRA, a qualified plan, a section 403 annuity, or a section 457 plan. A distribution to a beneficiary other than the employee's surviving spouse is not an eligible rollover distribution and is subject to tax. If the decedent was born before 1936, the beneficiary may be able to use optional methods to figure the tax on the distribution. For more information, see Publication 575, Pension and Annuity Income.

Pensions and annuities. For beneficiaries who receive pensions and annuities, see Publication 575. For beneficiaries of federal Civil Service employees or retirees, see Publication 721, Tax Guide to U.S. Civil Service Retirement Benefits.

Inherited IRAs. See Income in Respect of the Decedent, earlier. The inherited IRA cannot be rolled over into, or receive a rollover from, another IRA. No deduction is allowed for amounts paid into
Other Items of Income
Some other items of income that you, as a survivor or beneficiary, may receive are discussed below. Lump-sum payments you receive as the surviving spouse or beneficiary of a deceased employee may represent accrued salary payments; distributions from employee profit-sharing, pension, annuity, and stock bonus plans; or other items that should be treated separately for tax purposes. The treatment of these lump-sum payments depends on what the payments represent.

If the decedent is a specified terrorist victim (see Important Reminders), certain income received by the beneficiary or the estate is not included in income. See Publication 3920.
Filing Requirements
Every domestic estate with gross income of $600 or more during a tax year must file a Form 1041. If one or more of the beneficiaries of the domestic estate are nonresident alien individuals, a Form 1041 must be filed even if the gross income is less than $600.
Public safety officers. Special rules apply to certain amounts received because of the death of a public safety officer (law enforcement officers, fire fighters, chaplains, ambulance crews, and rescue squads). The provisions apply to a chaplain killed in the line of duty after September 10, 2001. The chaplain must have been responding to a fire, rescue, or police emergency as a member or employee of a fire or police department.
Death benefits. The death benefit payable to eligible survivors of public safety officers who die as a result of traumatic injuries sustained in the line of duty is not included in either the beneficiaries' income or the decedent's gross estate.
Schedule K-1 (Form 1041)

As personal representative, you must file a separate Schedule K-1 (Form 1041), or an acceptable substitute (described below), for each beneficiary. File these schedules with Form 1041. You must show each beneficiary's taxpayer identification number. A $50 penalty is charged for each failure to provide the identifying number of each beneficiary unless reasonable cause is established for not providing it.

When you assume your duties as the personal representative, you must ask each beneficiary to give you a taxpayer identification number (TIN). A nonresident alien beneficiary that gives you a withholding certificate generally must provide you with a TIN (see Publication 515, Withholding of Tax on Nonresident Aliens and Foreign Entities). A TIN is not required for an executor or administrator of the estate unless that person is also a beneficiary.

As personal representative, you must also furnish a Schedule K-1 (Form 1041), or a substitute, to the beneficiary by the date on which the Form 1041 is filed. Failure to provide this payee statement can result in a penalty of $50 for each failure. This penalty also applies if you omit information or include incorrect information on the payee statement.

You do not need prior approval for a substitute Schedule K-1 (Form 1041) that is an exact copy of the official schedule or that follows the specifications in Publication 1167, Substitute Printed, Computer-Prepared, and Computer-Generated Tax Forms and Schedules. You must have prior approval for any other substitute Schedule K-1 (Form 1041).

Beneficiaries. The personal representative has a fiduciary responsibility to the ultimate recipients of the income and the property of the estate. While the courts use a number of names to designate specific types of beneficiaries or the recipients of various types of property, it is sufficient in this publication to call all of them beneficiaries.

Liability of the beneficiary.

Nonresident alien beneficiary.
As a resident or domestic fiduciary, in addition to filing Form 1041, you may have to file the income tax return (Form 1040NR) and pay the tax for a nonresident alien beneficiary. Depending upon a number of factors, you may or may not have to file Form 1040NR for that beneficiary. For information on who must file Form 1040NR, see Publication 519, U.S. Tax Guide for Aliens. You do not have to file the nonresident alien’s return and pay the tax if that beneficiary has appointed an agent in the United States to file a federal income tax return. However, you must attach to the estate’s return (Form 1041) a copy of the document that appoints the beneficiary’s agent.
Separate Forms 1041. Each representative must file a separate Form 1041. The domiciliary representative must include the estate’s entire income in the return. The ancillary representative files with the appropriate IRS office for the ancillary’s location. The ancillary representative should provide the following information on the return.
• The name and address of the domiciliary
representative.
Amended Return
If you have to file an amended Form 1041, you must file an amended Schedule K-1 (Form 1041) and give a copy to each beneficiary. Check the Amended K-1 box at the top of Schedule K-1.
• The amount of gross income received by
the ancillary representative.
• The deductions claimed against that income (including any income properly paid or credited by the ancillary representative to a beneficiary). Estate of a nonresident alien. If the estate of a nonresident alien has a nonresident alien domiciliary representative and an ancillary representative who is a citizen or resident of the United States, the ancillary representative, in addition to filing a Form 1040NR to provide the information described in the preceding paragraph, must also file the return that the domiciliary representative otherwise would have to file.
Information Returns

Even though you may not have to file an income tax return for the estate, you may have to file Form 1099-DIV, Form 1099-INT, or Form 1099-MISC if you receive the income as a nominee or middleman for another person. For more information on filing information returns, see the General Instructions for Forms 1099, 1098, 5498, and W-2G.

You will not have to file information returns for the estate if the estate is the owner of record and you file an income tax return for the estate on Form 1041 giving the name, address, and identifying number of each actual owner and furnish a completed Schedule K-1 (Form 1041) to each actual owner.

Penalty. A penalty of up to $50 may apply for each failure. See the General Instructions for Forms 1099, 1098, 5498, and W-2G for more information.

Copy of the Will

You do not have to file a copy of the decedent's will unless requested by the IRS. If requested, you must attach a statement to it indicating the provisions that, in your opinion, determine how much of the estate's income is taxable to the estate or to the beneficiaries. You should also attach a statement signed by you under penalties of perjury that the will is a true and complete copy.
Two or More Personal Representatives

If property is located outside the state in which the decedent's home was located, more than one personal representative may be designated by the will or appointed by the court. The person designated or appointed to administer the estate in the state of the decedent's permanent home is called the domiciliary representative. The person designated or appointed to administer property in a state other than that of the decedent's permanent home is called an ancillary representative.

Income To Include

The estate's taxable income generally is figured the same way as an individual's income, except as explained in the following discussions.

If the decedent is a specified terrorist victim (see Important Reminders), certain income received by the estate is not included in income. See the Form 1041 instructions and Publication 3920.

For a discussion of gains and losses from the sale of investment property, see Publication 550. For a discussion of gains and losses from the sale of other property, including business property, see Publication 544, Sales and Other Dispositions of Assets. If, as the personal representative, your duties include the operation of the decedent's business, see Publication 334. That publication provides general information about the tax laws that apply to a sole proprietorship.

Income in respect of the decedent. As the personal representative of the estate, you may receive income that the decedent would have reported had death not occurred. For an explanation of this income, see Income in Respect of the Decedent under Other Tax Information, earlier. An estate may qualify to claim a deduction for estate taxes if the estate must include in gross income for any tax year an amount of income in respect of a decedent. See Estate Tax Deduction, earlier, under Other Tax Information.

Gain (or loss) from sale of property. During the administration of the estate, you may find it necessary or desirable to sell all or part of the estate's assets to pay debts and expenses of administration, or to make proper distributions of the assets to the beneficiaries.

Redemption of stock to pay death taxes. Under certain conditions, a distribution to a shareholder (including the estate) in redemption of stock that was included in the decedent's gross estate may be allowed capital gain (or loss) treatment.

Character of asset. An estate and a beneficiary of that estate are generally treated as related persons for purposes of treating the gain on the sale of depreciable property between the parties as ordinary income. This does not apply to a sale or exchange made to satisfy a pecuniary bequest.
greater than the donor’s adjusted basis, and the proceeds of the sale of the property are distributed to the donor (or the donor’s spouse). Schedule D (Form 1041).). Installment obligations. the Decedent, earlier. See Publication 537 for information about installment sales. Gain from sale of special-use valuation property. If you). Qualified heirs.. Gain from transfer of property to a political organization. Appreciated property that is. Gain or loss on distributions in kind. An estate recognizes gain or loss on a distribution of property in kind to a beneficiary only in the following situations. 1) The distribution satisfies the beneficiary’s right to receive either — a) A specific dollar amount (whether payable in cash, in unspecified property, or in both), or b) A specific property other than the property distributed. 2) You choose choose to recognize gain or loss, the choice applies to all noncash distributions during the tax year except charitable distributions and specific bequests. To make the choice, report the transaction on Schedule D (Form 1041) attached to the estate’s Form 1041 and check the box on line 7 in the Other Information section of Form 1041. You must make the choice by the due date (including extensions) of the estate’s income tax return for the year of distribution. However, if you timely filed your return for the year without making the choice, you can still make the choice by filing an amended return within six months of the due date of the return (excluding extensions). Attach Schedule D (Form 1041) to the amended return and write “Filed pursuant to section 301.9100 – 2” on the form. File the amended return at the same address you filed the original return. You must get the consent of the IRS to revoke the choice. For more information, see Property distributed in kind under Distributions Deduction, later. Under the related persons rules, you cannot claim a loss for property distribCAUTION uted.
Holding period. An estate (or other recipient) that acquires a capital asset from a decedent and sells or otherwise disposes of it is considered to have held that asset for more than 1 year, regardless of how long the asset is held.

Basis of asset.
Contributions
An estate qualifies for a deduction for amounts of gross income paid or permanently set aside for charitable purposes, but only if the contribution is provided for by the decedent's will; a deduction is not allowed merely because all of the beneficiaries may agree to the gift. You cannot deduct any contribution from income that is not included in the estate's gross income. If the will specifically provides that the contributions are to be paid out of the estate's gross income, the contributions are fully deductible. For more information, see Publication 526, Charitable Contributions, and Publication 561, Determining the Value of Donated Property.
Losses
Generally, an estate can claim a deduction for a loss that it sustains on the sale of property. An estate and a beneficiary of that estate are generally treated as related persons for purposes of the disallowance of a loss on the sale of an asset between related persons. The disallowance does not apply to a sale or exchange made to satisfy a pecuniary bequest.

Accrued expenses.

Expenses allocable to tax-exempt income. When figuring the estate's taxable income on Form 1041, you cannot deduct administration expenses allocable to any of the estate's tax-exempt income. However, you can deduct these administration expenses when figuring the taxable estate for federal estate tax purposes on Form 706.

Interest on estate tax. Interest paid on installment payments of estate tax is not deductible for income or estate tax purposes.
take a deduction of $200 [($2,000 ÷ $3,000) × $300].
Distributions Deduction
An estate is allowed a deduction for the tax year for any income that must be distributed currently and for other amounts that are properly paid, credited, or required to be distributed to beneficiaries. The deduction is limited to the distributable net income of the estate. For special rules that apply in figuring the estate's distribution deduction, see Bequest under Distributions to Beneficiaries From an Estate, later.

Distributable net income. Distributable net income (determined on Schedule B of Form 1041) is the estate's income available for distribution. It is the estate's taxable income, with the following modifications.

Distributions to beneficiaries. Distributions to beneficiaries are not deducted.

Estate tax deduction. The deduction for estate tax on income in respect of the decedent is not allowed.

Exemption deduction. The exemption deduction is not allowed.
Capital gains. Capital gains ordinarily are not included in distributable net income. However, you include them in distributable net income if any of the following apply.
Net operating loss deduction. An estate can claim a net operating loss deduction, figured in the same way as an individual's, except that it cannot deduct any distributions to beneficiaries. For more information, see Publication 536.

Casualty and theft losses. Losses incurred for casualty and theft during the administration of the estate can be deducted only if they have not been claimed on the federal estate tax return (Form 706).

Carryover losses. Carryover losses resulting from net operating losses or capital losses sustained by the decedent before death cannot be deducted on the estate's income tax return.
• The gain is allocated to income in the accounts of the estate or by notice to the beneficiaries under the terms of the will or by local law.
• The gain is allocated to the corpus or principal of the estate and is actually distributed to the beneficiaries during the tax year.
• The gain is used, under either the terms of
the will or the practice of the personal representative, to determine the amount that is distributed or must be distributed.
• Charitable contributions are made out of
capital gains. Generally, when you determine capital gains to be included in distributable net income, the exclusion for gain from the sale or exchange of qualified small business stock is not taken into account. Capital losses. Capital losses are excluded in figuring distributable net income unless they enter into the computation of any capital gain that is distributed or must be distributed during the year. Tax-exempt interest. Tax-exempt interest, including exempt-interest dividends, though excluded from the estate’s gross income, is included in the distributable net income, but is reduced by the following items.
• The expenses that were not allowed in computing the estate's taxable income because they were attributable to tax-exempt interest (see Expenses allocable to tax-exempt income under Administration Expenses, earlier).

• The part of the tax-exempt interest deemed to have been used to make a charitable contribution. See Contributions, earlier.

The total tax-exempt interest earned by an estate must be shown in the Other Information section of Form 1041. The beneficiary's part of the tax-exempt interest is shown on Schedule K-1 (Form 1041).

Depreciation and Depletion

The allowable deductions for depreciation and depletion that accrue after the decedent's death must be apportioned between the estate and the beneficiaries, depending on the income of the estate that is allocable to each.

Example. In 2002,

Administration Expenses

Expenses of administering an estate can be deducted either from the gross estate in figuring the federal estate tax on Form 706 or from the estate's gross income in figuring the estate's income tax on Form 1041.

Separate shares rule. The separate shares rule must be used if both of the following are true.
• The estate has more than one beneficiary.

• The beneficiaries have substantially separate and independent shares.

You must use a reasonable and equitable method to make the allocations. Generally, gross income is allocated among the separate shares based on the income that each share is entitled to under the will or applicable local law. This includes gross income that is.

Example.

Income in respect of a decedent.

Example 1. Frank's will directs you, the executor, to divide the residue of his estate (valued

the decedent must be allocated only to Judy's share.

Example 2. Assume the same facts as in Example 1, except that you must fund Judy's share first with DEF Corporation stock valued at $300,000, rather than the IRA proceeds. To determine the distributable net income for each separate share, the $90,000 of income in respect of the decedent

Ann's distributable net income includes $67,500 ($450,000 ÷ $600,000 × $90,000).

Income that must be distributed currently. The distributions deduction includes any amount of income that, under the terms of the will or local law, must be distributed currently.

Support allowances. The distribution deduction includes any support allowance that, under a court order or decree or local law, the estate must pay the decedent's surviving spouse or other dependent for a limited period during administration of the estate. The allowance is deductible as income that must be distributed currently or as any other amount paid, credited, or required to be distributed, as discussed next.

Any other amount paid, credited, or required to be distributed. Any other amount paid, credited, or required to be distributed is allowed as a deduction

which is filed with the IRS office where the return would have been filed. The election is irrevocable for the tax year and is only effective for the year of the election.

Alimony and separate maintenance.
Alimony and separate maintenance payments that must be included in the spouse’s or former spouse’s income may be deducted as income that must be distributed currently if they are paid, credited, or distributed out of the income of the estate for the tax year. That spouse or former spouse is treated as a beneficiary. Payment of beneficiary’s obligations. Any payment made by the estate to satisfy a legal obligation of any person is deductible as income that must be distributed currently or as any other amount paid, credited, or required to be distributed. This includes a payment made to satisfy the person’s obligation under local law to support another person, such as the person’s minor child. The person whose obligation is satisfied is treated as a beneficiary of the estate. This does not apply to a payment made to satisfy a person’s obligation to pay alimony or separate maintenance. Interest in real estate. The value of an interest in real estate owned by a decedent, title to which passes directly to the beneficiaries under local law, is not included as any other amount paid, credited, or required to be distributed. Property distributed in kind..
• A specific bequest (unless it must be distributed in more than three installments).
• Real property, the title to which passes
directly to the beneficiary under local law.

Character of amounts distributed. If the decedent's will or local law does not provide for the allocation of different classes of income, you must treat the amount deductible for distributions to beneficiaries as consisting of the same proportion of each class of items entering into the computation of distributable net income as the total of each class bears to the total distributable net income. For more information about the character of distributions, see Character of Distributions under Distributions to Beneficiaries From an Estate, later.

Example. An estate has distributable net income of $2,000, consisting of $1,000 of taxable interest and $1,000 of rental income. Distributions to the beneficiary total $1,500. The distribution deduction consists of $750 of taxable interest and $750 of rental income, unless the will or local law provides a different allocation.

Limit on deduction for distributions. You cannot deduct any amount of distributable net income not included in the estate's gross income.

Example. An estate has distributable net income of $2,000, consisting of $1,000 of dividends and $1,000 of tax-exempt interest. Distributions to the beneficiary total $1,500. Except for this rule, the distribution deduction would be $1,500 ($750 of dividends and $750 of tax-exempt interest). However, as the result of this rule, the distribution deduction is limited to $750, because no deduction is allowed for the tax-exempt interest distributed.

Denial of double deduction. A deduction cannot be claimed twice. If an amount is considered to have been distributed to a beneficiary of an estate in a preceding tax year, it cannot again be included in figuring the deduction for the year of the actual distribution.

Example. The will provides that the estate must distribute currently all of its income to a beneficiary. For administrative convenience, the personal representative did not make a distribution of.

Charitable contribution. The amount of a charitable contribution used as a deduction by the estate in determining taxable income cannot be claimed again as a deduction for a distribution to a beneficiary.
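The allocation and limit illustrated in the examples above can be sketched as follows; the class names and function are illustrative, not from the publication:

```python
def distribution_deduction(dni_by_class, distributions, taxable_classes):
    """Allocate distributions pro rata across classes of distributable net
    income (DNI), then deduct only the part allocated to classes that are
    included in the estate's gross income."""
    total_dni = sum(dni_by_class.values())
    deduction = 0.0
    for klass, amount in dni_by_class.items():
        allocated = distributions * amount / total_dni
        if klass in taxable_classes:
            deduction += allocated
    return deduction

# Second example: $2,000 of DNI ($1,000 dividends, $1,000 tax-exempt
# interest) and $1,500 distributed; only the dividend part is deductible.
print(distribution_deduction(
    {"dividends": 1_000, "tax_exempt": 1_000}, 1_500, {"dividends"}))  # 750.0
```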
Credits, Tax, and Payments
This section includes brief discussions of some of the tax credits, types of taxes that may be owed, and estimated tax payments that are reported on the estate’s income tax return, Form 1041.
You will not, however, have to pay estimated tax if you expect the withholding and credits to be at least:

1) 90% of the tax to be shown on the 2003 return, or

2) 100% of the tax shown on the 2002 return (assuming the return covered all 12 months).

The percentage in (2) above is 110% if the estate's 2002 adjusted gross income (AGI) was more than $150,000. To figure the estate's AGI, see the instructions for line 15b, Form 1041.

The general rule is that you must make your first estimated tax payment by April 15, 2003. You can either pay all of your estimated tax at that time or pay it in four equal amounts that are due by April 15, 2003; June 16, 2003; September 15, 2003; and January 15, 2004. For exceptions to the general rule, see the instructions for Form 1041-ES and Publication 505, Tax Withholding and Estimated Tax.

If your return is on a fiscal year basis, your due dates are the 15th day of the 4th, 6th, and 9th months of your fiscal year and the 1st month of the following fiscal year. If any of these dates fall on a Saturday, Sunday, or legal holiday, the payment must be made by the next business day.

You may be charged a penalty for not paying enough estimated tax or for not making the payment on time in the required amount (even if you have an overpayment on your tax return). You can use Form 2210, Underpayment of Estimated Tax by Individuals, Estates, and Trusts, to figure any penalty, or you can let the IRS figure the penalty. For more information, see the instructions for Form 1041-ES and Publication 505.

Return for Decedent.

Foreign tax credit. The foreign tax credit is discussed in Publication 514, Foreign Tax Credit for Individuals.

General business credit. The general business credit is available to an estate that is involved in a business. For more information, see Publication 334.
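The safe-harbor test above (90% of the current-year tax, or 100%/110% of the prior-year tax) reduces to a small computation. A sketch with illustrative figures (the $8,000 and $7,000 amounts are assumptions, not from the publication):

```python
def required_annual_payment(tax_2003, tax_2002, agi_2002):
    """Withholding and credits must cover the smaller of 90% of the current
    year's tax or 100% of the prior year's tax (110% if prior-year AGI
    exceeded $150,000) for estimated tax payments not to be required."""
    prior_pct = 1.10 if agi_2002 > 150_000 else 1.00
    return min(0.90 * tax_2003, prior_pct * tax_2002)

# An estate expecting $8,000 of 2003 tax, with $7,000 of 2002 tax and
# 2002 AGI under $150,000:
print(required_annual_payment(8_000, 7_000, 100_000))  # 7000.0
```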
Tax
An estate cannot use the Tax Table that applies to individuals. The tax rate schedule to use is in the instructions for Form 1041. Alternative minimum tax (AMT). An estate may be liable for the alternative minimum tax. To figure the alternative minimum tax, use Schedule I (Form 1041), Alternative Minimum Tax. Certain credits may be limited by any tentative minimum tax figured on line 37, Part III of Schedule I (Form 1041), even if there is no alternative minimum tax liability. If the estate takes a deduction for distributions to beneficiaries, complete Part I and Part II of Schedule I Form 1041.
Name, Address, and Signature
In the top space of the name and address area of Form 1041, enter the exact name of the estate from the Form SS – 4 used to apply for the estate’s employer identification number. In the remaining spaces, enter the name and address of the personal representative (fiduciary) of the estate. Signature. The personal representative (or its authorized officer if the personal representative is not an individual) must sign the return. An individual who prepares the return for pay must manually sign the return as preparer. You can check a box in the signature area that authorizes the IRS to contact that paid preparer for certain information. See the instructions for Form 1041 for more information.
Funeral and Medical Expenses
No deduction can be taken for funeral expenses or medical and dental expenses on the estate’s income tax return, Form 1041. Funeral expenses. Funeral expenses paid by the estate are not deductible in figuring the estate’s taxable income on Form 1041. They are deductible only for determining the taxable estate for federal estate tax purposes on Form 706. Medical and dental expenses of a decedent. Return for Decedent, earlier. 2003, use Form 1041 – ES, Estimated Income Tax for Estates and Trusts, to determine the estimated tax to be paid. Generally, you must pay estimated tax if the estate is expected to owe, after subtracting any withholding and credits, at least $1,000 in tax for 2003. You will not, however, have to pay esti-
When and Where To File
When you file Form 1041 (or Form 1040NR if it applies) depends on whether you choose a calendar year or a fiscal year as the estate’s accounting period. Where you file Form 1041 depends on where you, as the personal representative, live or have your principal office.

When to file. If you choose the calendar year as the estate’s accounting period, the Form 1041 for 2002 is due by April 15, 2003 (June 16, 2003, in the case of Form 1040NR for a nonresident alien estate that does not have an office in the United States). If you choose a fiscal year, the Form 1041 is due by the 15th day of the 4th month after the end of the tax year.

Extension of time to file. An extension of time to file Form 1041 may be granted if you have clearly described the reasons that will cause your delay in filing the return. Use Form 2758, Application for Extension of Time To File Certain Excise, Income, Information, and Other Returns, to request an extension. The extension is not automatic, so you should request it early enough for the IRS to act on the application before the regular due date of Form 1041. You should file Form 2758 in duplicate with the IRS office where you must file Form 1041. If you have not yet established an accounting period, filing Form 2758 will serve to establish the accounting period stated on that form. Changing to another accounting period requires prior approval by the IRS. Generally, an extension of time to file a return does not extend the time for payment of tax due. You must pay the total income tax estimated to be due on Form 1041 in full by the regular due date of the return. For additional information, see the instructions for Form 2758.

Where to file. As the personal representative of an estate, file the estate’s income tax return (Form 1041) with the Internal Revenue Service center for the state where you live or have your principal place of business. A list of the states and addresses that apply is in the instructions for Form 1041. You must send Form 1040NR to the Internal Revenue Service Center, Philadelphia, PA 19255.

Electronic filing. Form 1041 can be filed electronically or on magnetic media. See the instructions for Form 1041 for more information.
Distributions to Beneficiaries From an Estate

If you are the beneficiary of an estate that must distribute all its income currently, you must report your share of the distributable net income whether or not you have actually received it. If you are the beneficiary of an estate that does not have to distribute all its income currently, you must report all income that must be distributed to you (whether or not actually distributed) plus all other amounts paid, credited, or required to be distributed to you, up to your share of distributable net income. As explained earlier in Distributions Deduction under Income Tax Return of an Estate — Form 1041, for an amount to be currently distributable income, there must be a specific requirement for current distribution either under the terms of the will or by local law. If the estate has more than one beneficiary, the separate shares rule discussed earlier in Distributions Deduction may have to be used to determine the distributable net income allocable to each beneficiary. The beneficiaries in the examples shown next do not meet the requirements of the separate shares rule.

Income That Must Be Distributed Currently

Beneficiaries who are entitled to receive currently distributable income generally must include in gross income the entire amount due them. However, if the currently distributable income is more than the estate’s distributable net income figured without deducting charitable contributions, each beneficiary must include in gross income a ratable part of the distributable net income. Example.. Annuity payable out of income or corpus. An annuity that must be paid in any event (either out of income or out of corpus) is income that must be distributed currently to the extent there is income of the estate not paid, credited, or required to be distributed to other beneficiaries for the tax year. Example 1.. Example 2. Assume the same facts as in Example 1 except that the estate has an additional $1,000 of administration expenses, commissions, etc., that are..

Other Amounts Distributed

Any other amount paid, credited, or required to be distributed to the beneficiary for the tax year also must be included in the beneficiary’s gross income. Such an amount is in addition to the amounts that must be distributed currently, as discussed earlier. It includes the following amounts.
• Distributions made at the discretion of the personal representative.

• Distributions required by the terms of the will upon the happening of a specific event.

• Annuities that must be paid in any event, but only out of corpus (principal).

• Distributions of property in kind as defined earlier in Distributions Deduction under Income Tax Return of an Estate — Form 1041.

• .. under Income Tax Return of an Estate — Form 1041, earlier.

Example. Scott.
Character of Distributions

An amount distributed to a beneficiary for inclusion in gross income retains the same character for the beneficiary that it had for the estate.

No charitable contribution made. If no charitable contribution is made during the tax year, you must treat the distributions as consisting of the same proportion of each class of items entering into the computation of distributable net income as the total of each class bears to the total distributable net income. Distributable net income was defined earlier in Distributions Deduction under Income Tax Return of an Estate — Form 1041. However, if the will or local law specifically provides or requires a different allocation, you must use that allocation. Example 1.. Example 2. Assume in Example 1 that the will provides for the payment of the taxable interest to Jim and the rental income to Ted and that the personal representative distributed the income under those provisions. Jim is treated as having received $1,200 in taxable interest and Ted is treated as having received $1,800 of rental income.

Charitable contribution made.. Example. The will of Harry Thomas requires a current distribution out of income of $3,000 a year to his wife, Betty, during the administration of the estate. The will also provides that the personal representative, using discretion, may distribute the balance of the current earnings either to Harry’s son, Tim, or to one or more of certain..

How and When To Report

How you report your income from the estate depends on the character of the income in the hands of the estate. When you report the income depends on whether it represents amounts credited or required to be distributed to you or other amounts.

How to report estate income. Each item of income keeps the same character in your hands as it had in the hands of the estate. If the items of income distributed or considered to be distributed to you include dividends, tax-exempt interest, or capital gains, they will keep the same character in your hands for purposes of the tax treatment given those items. Generally, you report the dividends on line 9 of your Form 1040, and the capital gains on your Schedule D (Form 1040). The tax-exempt interest, while not included in taxable income, must be shown on line 8b of your Form 1040. Report business and other nonpassive income in Part III of your Schedule E (Form 1040). The estate’s personal representative should provide you with the classification of the various items that make up your share of the estate income and the credits you should take into consideration so that you can properly prepare your individual income tax return. See Schedule K – 1 (Form 1041), later.

When to report estate income. If income from the estate is credited or must be distributed to you for a tax year, report that income (even if not distributed) on your return for that year. The personal representative can elect to treat distributions paid or credited within 65 days after the close of the estate’s tax year as having been paid or credited on the last day of that tax year. If this election is made, you must report that distribution on your return for that year. Report other income from the estate on your return for the year in which you receive it. If your tax year is different from the estate’s tax year, see Different tax years, next.

Different tax years. You must include your share of the estate income in your return for your tax year in which the last day of the estate’s tax year falls. If the tax year of the estate is the calendar year and your tax year is a fiscal year ending on June 30, you will include in gross income for the tax year ended June 30 your share of the estate’s distributable net income distributed or considered distributed during the calendar year ending the previous December 31. Death of individual beneficiary.. Termination of nonindividual beneficiary..

Schedule K – 1 (Form 1041). The personal representative for the estate must provide you with a copy of Schedule K – 1 (Form 1041) or a substitute Schedule K – 1. You should not file the form with your Form 1040, but should keep it for your personal records. Each beneficiary (or nominee of a beneficiary) who receives a distribution from the estate for the tax year or to whom any item is allocated must receive a Schedule K – 1 or substitute. The personal representative handling the estate must furnish the form to each beneficiary or nominee by the date on which the Form 1041 is filed. There is a $50 penalty for each failure to furnish the form on time.

Consistent treatment of items. You must treat estate items the same way on your individual return as they are treated on the estate’s income tax return. If your treatment is different from the estate’s treatment, you must file Form 8082, Notice of Inconsistent Treatment or Administrative Adjustment Request (AAR), with your return to identify the difference. If you do not file Form 8082 and the estate has filed a return, the IRS can immediately assess and collect any tax and penalties that result from adjusting the item to make it consistent with the estate’s treatment.
Bequest

A distribution to a beneficiary is excluded from the beneficiary’s gross income, and is not deductible by the estate, if it meets all of the following requirements.

• It is required by the terms of the will.

• It is a gift or bequest of a specific sum of money or property.

• It is paid out in three or fewer installments under the terms of the will.

Specific sum of money or property.. Example 1.. The bequest of a specific sum of money to Marie is determinable on the same date. Example 2..

Distributions not treated as bequests. The following distributions are not bequests that meet all the requirements listed earlier (those that allow a distribution to be excluded from the beneficiary’s income and do not allow it as a deduction to the estate).

Paid only from income. An amount that can be paid only from current or prior income of the estate does not qualify even if it is specific in amount and there is no provision for installment payments.

Annuity. An annuity, or a payment of money or of specific property in lieu of, or having the effect of, an annuity, is not the payment of a specific property or sum of money.

Residuary estate. If the will provides for the payment of the balance or residue of the estate to a beneficiary of the estate after all expenses and other specific legacies or bequests, that residuary bequest is not a payment of a specific property or sum of money.

Gifts made in installments. Even if the gift or bequest is made in a lump sum or in three or fewer installments, it will not qualify as a specific property or sum of money if the will provides that the amount must be paid in more than three installments.

Conditional bequests. A bequest of a specific property or sum of money that may otherwise be excluded from the beneficiary’s gross income will not lose the exclusion solely because the payment is subject to a condition.

Installment payments. Certain rules apply in determining whether a bequest of specific property or a sum of money has to be paid or credited to a beneficiary in more than three installments.

Personal items. Do not take into account bequests of articles for personal use, such as personal and household effects and automobiles.

Real property. Do not take into account specifically designated real property, the title to which passes under local law directly to the beneficiary.

Other property.. Testamentary trust..

Period of Administration

The period of administration is the time actually required by the personal representative to assemble all of the decedent’s assets, pay all the expenses and obligations, and distribute the assets to the beneficiaries. This may be longer or shorter than the time provided by local law for the administration of estates. Ends if all assets distributed. If all assets are distributed except for..

Transfer of Unused Deductions to Beneficiaries

If the estate has unused loss carryovers or excess deductions for its last tax year, they are allowed to those beneficiaries who succeed to the estate’s property. See Successor beneficiary, later. Unused loss carryovers.. Excess deductions.. No double deductions. An item of deduction that is taken into account in figuring a net operating loss or a capital loss carryover of the estate for its last tax year cannot also be treated as an excess deduction on termination. Successor beneficiary.. For example, this would include the following beneficiaries.

• A beneficiary of a fraction of the decedent’s net estate after payment of debts, expenses, and specific bequests.

• A nonresiduary beneficiary, when the estate is unable to satisfy the bequest in full.

• ..

Allocation among beneficiaries. The total of the unused loss carryovers or the excess deductions on termination that may be deducted by the successor beneficiaries is to be divided according to the share of each in the burden of the loss or deduction..

Transfer of Credit for Estimated Tax Payments

When an estate terminates, the personal representative can choose to transfer to the beneficiaries the credit for all or part of the estate’s estimated tax payments for the last tax year. To make this choice,.. The amount of estimated tax allocated to each beneficiary is treated as paid or credited to the beneficiary on the last day of the estate’s final tax year and must be reported on line 14a, Schedule K – 1 (Form 1041). If the estate terminated in 2002, this amount is treated as a payment of 2002 estimated tax made by the beneficiary on January 15, 2003.

Form 706

Generally, for estate tax purposes, you must file Form 706, United States Estate (and Generation-Skipping Transfer) Tax Return. If death occurred in 2002, Form 706 must be filed if the gross estate is more than $1,000,000. If you must file Form 706, it has to be filed within 9 months after the date of the decedent’s death, unless you receive an extension of time to file.

Comprehensive Example

The following is an example of a typical situation. All figures on the filled-in forms have been rounded to the nearest whole dollar. On April 9, 2002,.. Assets of the estate. Your father had the following assets when he died.

• His checking account balance was $2,550 and his savings account balance was $53,650.

• Your father inherited your parents’ home from his parents on March 5, 1979. At that time it was worth $42,000, but was appraised at the time of your father’s death at $150,000. The home was free of existing debts (or mortgages) at the time of his death.
• Your father owned 500 shares of ABC Company stock that had cost him $10.20 a share in 1983. The stock had a mean selling price (midpoint between highest and lowest selling price) of $25 a share on the day he died. He also owned 500 shares of XYZ Company stock that had cost him $20 a share in 1988. The stock had a mean selling price on the date of death of $62.
• The appraiser valued your father’s automobile at $6,300 and the household effects at $18,500.
• Your father owned a coin collection and a stamp collection. The face value of the coins in the collection was only $600, but the appraiser valued it at $2,800. The stamp collection was valued at $3,500.
• The Easy Life Insurance Company gave your mother a check for $275,000 because she was the beneficiary of his life insurance policy.

• ..
• On July 1, 1992,..

• Your father earned $11,000 in salary between January 1, 2002, and April 9, 2002 (the date he died). The Form W – 2 showed $11,000 in box 1 and $23,000 ($11,000 + $12,000) in boxes 3 and 5. The Form W – 2 indicated the income tax withheld in 2002, before he died.
Final Return for Decedent

From the papers in your father’s files, you determine that the $11,000 paid to him by his employer (as shown on the Form W – 2), rental income, and interest are the only items of income he received between January 1 and the date of his death. You will have to file an income tax return for him for the period during which he lived. (You determine that he timely filed his 2001 income tax return before he died.) The final return is not due until April 15, 2003, the same date it would have been due had your father lived during all of 2002. The check representing unpaid salary and earned but unused vacation time was not paid to your father before he died, so the $12,000 is not reported as income on his final return. It is reported on the income tax return for the estate (Form 1041) for 2002. The only taxable income to be reported for your father will be the $11,000 salary (as shown on the Form W – 2), the $1,900 interest, and his portion of the rental income that he received in 2002.

Rental expenses included real estate taxes of $700 and mortgage interest of $410. In addition, insurance premiums of $260 and painting and repair expenses for $350 were paid. These rental expenses totaled $1,720. Your mother and father owned the property as joint tenants with right of survivorship and they were the only joint tenants, so each had a half interest in the property. Through 2001, $17,735 had been allowed as depreciation. (For information on ADS, see Publication 946.) You must split the 2002 depreciation between the periods before and after your father’s death, as explained next. For 2002, you must make the following computations to figure the depreciation deduction.

1) For the period before your father’s death, depreciate the property using the same method, basis, and life used by your parents in previous years. They used the mid-month convention, so the amount deductible for three and a half months is $547. (This brings the total depreciation to $18,282 ($17,735 + $547) at the time of your father’s death.)

2) For the period after your father’s death, you must make two computations. a) For your mother’s half interest, the amount deductible for eight and a half months is $664. b) The other half of the property must be depreciated using a depreciation method that is acceptable for property placed in service in 2002. (See the depreciation tables in Publication 946.)

Deductions. During the year, you received a bill from the hospital for $615 and bills from your father’s doctors totaling $475. You paid these bills as they were presented. In addition, you find other bills from his doctors totaling $185 that your father paid in 2002 and receipts for prescribed drugs he purchased totaling $36. The funeral home presented you a bill for $6,890 for the expenses of your father’s funeral, which you paid. The medical expenses you paid from the estate’s funds ($615 and $475) were for your father’s care and were paid within 1 year after his death, and they will not be used to figure the taxable estate, so you can treat them as having been paid by your father when he received the medical services. See Medical Expenses under Final Return for Decedent, earlier. However, you cannot deduct the funeral expenses on your father’s final 2002 income tax return; they are deductible only on the federal estate tax return (Form 706). In addition, the following items were paid during 2002.

Health insurance . . . . . . . . . . $3,250
State income tax paid . . . . . . . . . 891
Real estate tax on home . . . . . . . 1,100
Contributions to church . . . . . . . 3,800

When your mother determines the amount of her income, you and your mother must decide whether you will file a joint return or separate returns for your parents for 2002. Your mother has rental income and $400 of interest income from her savings account at the Mayflower Bank of Juneville, so it appears to be to her advantage to file a joint return.

Tax computation. The illustrations of Form 1040 and related schedules appear near the end of this publication. These illustrations are based on information in this example. The tax refund is $542. The computation is as follows:
Income:
  Salary (per Form W – 2) . . . . . . $11,000
  Interest income . . . . . . . . . . . 3,140
  Net rental income . . . . . . . . . . 8,183
Adjusted gross income . . . . . . . . $22,323
Minus: Itemized deductions  . . . . . . 8,678
Balance . . . . . . . . . . . . . . . $13,645
Minus: Exemptions (2) . . . . . . . . . 6,000
Taxable income  . . . . . . . . . . . .$7,645
Income tax from tax table . . . . . . . .$763
Minus: Tax withheld . . . . . . . . . . 1,305
Refund of taxes . . . . . . . . . . . . .$542

Income Tax Return of an Estate—Form 1041

The illustrations of Form 1041 and the related schedules for 2002 appear near the end of this publication. These illustrations are based on the information that follows. 2002, 2003.. In addition to the amount you received from your father’s employer for unpaid salary and for vacation pay ($12,000) entered on line 8 (Form 1041), you received a dividend check from the XYZ Company on June 17, 2002. The check was for $750 and you enter it on line 2 (Form 1041). The estate received a Form 1099 – INT showing $2,250 interest paid by the bank on the savings account in 2002 after your father died. Show this amount on line 1 (Form 1041). In September, a local coin collector offered you $3,000 for your father’s coin collection. Your mother was not interested in keeping the collection, so you accepted the offer and sold him the collection on September 23, 2002. The estate’s basis in the collection was its $2,800 date-of-death value, so the sale produced the $200 capital gain shown below.

The distribution of $2,000 must be allocated and reported on Schedule K – 1 (Form 1041) as follows:

Step 1 Allocation of Income & Deductions

Type of Income       Distributable Amount   Deductions   Net Income
Interest (15%)              $ 2,250            (386)       $ 1,864
Dividends (5%)                  750            (129)           621
Other Income (80%)           12,000          (2,060)         9,940
Total                       $15,000          (2,575)       $12,425

Step 2 Allocation of Distribution (Report on the Schedule K – 1 for James)

Line 1 – Interest ($2,000 × 1,864/12,425) . . . . . .  $300
Line 2 – Dividends ($2,000 × 621/12,425)  . . . . . .   100
Line 5a – Other Income ($2,000 × 9,940/12,425)  . . . 1,600
Total Distribution  . . . . . . . . . . . . . . . . .$2,000

The estate took an income distribution deduction, so you must prepare Schedule I (Form 1041), Alternative Minimum Tax, regardless of whether the estate is liable for the alternative minimum tax. The other distribution you made out of the assets of the estate in 2002 was the transfer of the automobile to your mother on July 1. This is included in the bequest of property, so it is not treated as a distribution of income. The estate’s taxable income for 2002 is $10,025, figured as follows:

Gross income:
  Income in respect of a decedent . . $12,000
  Dividends . . . . . . . . . . . . . . . 750
  Interest  . . . . . . . . . . . . . . 2,250
  Capital gain  . . . . . . . . . . . . . 200   $15,200

Minus: Deductions and income distribution:
  Real estate taxes . . . . . . . . .  $2,250
  Attorney’s fee  . . . . . . . . . . . . 325
  Exemption . . . . . . . . . . . . . . . 600
  Distribution  . . . . . . . . . . . . 2,000     5,175
Taxable income  . . . . . . . . . . . . . . .   $10,025

The estate had a net capital gain and taxable income, so you use Part V of Schedule D (Form 1041) and the Schedule D Tax Worksheet to figure the tax, $2,826, for 2002. Note. For purposes of this example, we have illustrated the filled-in worksheet. You would not file the worksheet with the return. You would keep the worksheet for your records.

2003 income tax return for estate. On January 7, 2003, you receive a dividend check from the XYZ Company for $500. You also have interest posted to the savings account in January totaling $350. On January 28, 2003,.. The estate’s gross income for 2003 is more than $600, so you must file an income tax return, Form 1041, for 2003 (not shown). The estate’s gross income for 2003 is $850 (dividends $500 and interest $350). Deductions. After making the following computations, you determine that none of the distributions made to your mother must be included in her taxable income for 2003.

Gross income for 2003:
  Dividends . . . . . . . . . . . . . .  $500
  Interest  . . . . . . . . . . . . . . . 350   $850
Less deductions:
  Administration expense  . . . . . . . . . . . 1,650
Loss  . . . . . . . . . . . . . . . . . . . .  ($800)

This $800 loss is an excess deduction on termination that your mother can claim on her 2003 return if she itemizes deductions. Be sure to report the termination to the IRS office where you filed Form 56 and to include the employer identification number on this notification.
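The two-step Schedule K – 1 allocation worked out above is plain proportional arithmetic. The sketch below (illustrative variable names, whole-dollar rounding as in the example) reproduces the figures:

```python
# Sketch of the Step 1 / Step 2 allocation described above: each class of
# income in the $2,000 distribution keeps the same proportion it has in
# distributable net income. Variable names are illustrative, not from the
# publication.

distribution = 2_000

# Step 1 results: net income by class after prorating the $2,575 of
# deductions (interest $1,864, dividends $621, other income $9,940).
net_income = {"interest": 1_864, "dividends": 621, "other income": 9_940}
total_dni = sum(net_income.values())  # distributable net income

# Step 2: allocate the distribution in proportion to each class's share of
# distributable net income, rounded to whole dollars.
allocated = {kind: round(distribution * amount / total_dni)
             for kind, amount in net_income.items()}

print(total_dni)   # 12425
print(allocated)   # {'interest': 300, 'dividends': 100, 'other income': 1600}
```

The rounded pieces ($300 + $100 + $1,600) sum back to the $2,000 distribution, matching the Schedule K – 1 totals in the example.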
DECEASED
John R. Smith -- April 9, 2002
1040
L A B E L H E R E
Form
Department of the Treasury—Internal Revenue Service
U.S. Individual Income Tax Return
For the year Jan. 1–Dec. 31, 2002, or other tax year beginning Your first name and initial
2002
Smith Smith
(99)
IRS Use Only—Do not write or staple in this space.
, 2002, ending
, 20
Label
(See instructions on page 21.) Use the IRS label. Otherwise, please print or type.
OMB No. 1545-0074 Your social security number
John R.
If a joint return, spouse’s first name and initial Last name
234 567
Apt. no.
00 00
7890 0123
Spouse’s social security number
Mary L. 6406 Mayflower St.
Home address (number and street). If you have a P.O. box, see page 21.
Important!
You must enter your SSN(s) above. You Yes No Spouse Yes No
City, town or post office, state, and ZIP code. If you have a foreign address, see page 21.
Presidential Election Campaign
(See page 21.)
Juneville, ME 00000
Note. Checking “Yes” will not change your tax or reduce your refund. Do you, or your spouse if filing a joint return, want $3 to go to this fund? 1 2 3 Single Married filing jointly (even if only one had income) Married filing separately. Enter spouse’s SSN above and full name here. 5 4
Filing Status
Check only one box.
Head of household (with qualifying person). (See page 21.) If the qualifying person is a child but not your dependent, enter this child’s name here. Qualifying widow(er) with dependent child (year ). (See page 21.) spouse died
No. of boxes checked on 6a and 6b No. of children on 6c who: ● lived with you ● did not live with you due to divorce or separation (see page 22) Dependents on 6c not entered above Add numbers on lines above
6a
Exemptions
Yourself. If your parent (or someone else) can claim you as a dependent on his or her tax return, do not check box 6a
(2) Dependent’s social security number (3) Dependent’s relationship to you (4) if qualifying child for child tax credit (see page 22)
2
b Spouse c Dependents:
(1) First name Last name
If more than five dependents, see page 22.
2
d Total number of exemptions claimed 25) b Taxable amount (see page 25) 15b 16b 17 18 19 20b 21 22
8a Taxable interest. Attach Schedule B if required 9 10 11 12 13 14 15a 16a 17 18 19 20a 21 22 23 24 25 26 27 28 29 30 31 32 33a 34 35 Ordinary dividends. Attach Schedule B if required Taxable refunds, credits, or offsets of state and local income taxes (see page 24) Alimony received Business income or (loss). Attach Schedule C or C-EZ Capital gain or (loss). Attach Schedule D if required. If not required, check here Other gains or (losses). Attach Form 4797 15a IRA distributions Pensions and annuities 16a
11,000 3,140
If you did not get a W-2, see page 23. Enclose, but do not attach, any payment. Also, please use Form 1040-V.
Rental real estate, royalties, partnerships, S corporations, trusts, etc. Attach Schedule E Farm income or (loss). Attach Schedule F Unemployment compensation 20a Social security benefits b Taxable amount (see page 27) Other income. List type and amount (see page 29) Add the amounts in the far right column for lines 7 through 21. This is your total income Educator expenses (see page 29) IRA deduction (see page 29) Student loan interest deduction (see page 31) Tuition and fees deduction (see page 32) Archer MSA deduction. Attach Form 8853 Moving expenses. Attach Form 3903 One-half of self-employment tax. Attach Schedule SE Self-employed health insurance deduction (see page 33) Self-employed SEP, SIMPLE, and qualified plans Penalty on early withdrawal of savings 23 24 25 26 27 28 29 30 31 32
8,183
22,323
Adjusted Gross Income
33a Alimony paid b Recipient’s SSN Add lines 23 through 33a Subtract line 34 from line 22. This is your adjusted gross income
Cat. No. 11320B
34 35
Form
For Disclosure, Privacy Act, and Paperwork Reduction Act Notice, see page 76.
22,323 1040
(2002)
Page 26
Form 1040 (2002)
Page
2
Tax and Credits
Standard Deduction for— ● People who checked any box on line 37a or 37b or who can be claimed as a dependent, see page 34. ● All others: Single, $4,700 Head of household, $6,900 Married filing jointly or Qualifying widow(er), $7,850 Married filing separately, $3,925
36 37a
Amount from line 35 (adjusted gross income) Check if: You were 65 or older, Blind; Spouse was 65 or older, Add the number of boxes checked above and enter the total here Blind. 37a 37b
36
22,323
38   Itemized deductions (from Schedule A) or your standard deduction (see left margin)      8,678
     If you are married filing separately and your spouse itemizes deductions, or you
     were a dual-status alien, see page 34 and check here
39   Subtract line 38 from line 36                                                          13,645
40   If line 36 is $103,000 or less, multiply $3,000 by the total number of exemptions
     claimed on line 6d. If line 36 is over $103,000, see the worksheet on page 35           6,000
41   Taxable income. Subtract line 40 from line 39. If line 40 is more than line 39,
     enter -0-                                                                               7,645
42   Tax (see page 36). Check if any tax is from: a Form(s) 8814  b Form 4972                  763
43   Alternative minimum tax (see page 37). Attach Form 6251
44   Add lines 42 and 43                                                                       763
45   Foreign tax credit. Attach Form 1116 if required
46   Credit for child and dependent care expenses. Attach Form 2441
47   Credit for the elderly or the disabled. Attach Schedule R
48   Education credits. Attach Form 8863
49   Retirement savings contributions credit. Attach Form 8880
50   Child tax credit (see page 39)
51   Adoption credit. Attach Form 8839
52   Credits from: a Form 8396  b Form 8859
53   Other credits. Check applicable box(es): a Form 3800  b Form 8801  c Specify
54   Add lines 45 through 53. These are your total credits
55   Subtract line 54 from line 44. If line 54 is more than line 44, enter -0-                 763

Other Taxes
56   Self-employment tax. Attach Schedule SE
57   Social security and Medicare tax on tip income not reported to employer. Attach
     Form 4137
58   Tax on qualified plans, including IRAs, and other tax-favored accounts. Attach
     Form 5329 if required
59   Advance earned income credit payments from Form(s) W-2
60   Household employment taxes. Attach Schedule H
61   Add lines 55 through 60. This is your total tax                                           763

Payments
62   Federal income tax withheld from Forms W-2 and 1099                                     1,305
63   2002 estimated tax payments and amount applied from 2001 return
64   Earned income credit (EIC). If you have a qualifying child, attach Schedule EIC.
65   Excess social security and tier 1 RRTA tax withheld (see page 56)
66   Additional child tax credit. Attach Form 8812
67   Amount paid with request for extension to file (see page 56)
68   Other payments from: a Form 2439  b Form 4136  c Form 8885
69   Add lines 62 through 68. These are your total payments                                  1,305

Refund
70   If line 69 is more than line 61, subtract line 61 from line 69. This is the amount
     you overpaid                                                                              542
71a  Amount of line 70 you want refunded to you                                                542
   b Routing number    c Type: Checking / Savings    d Account number
     (Direct deposit? See page 56 and fill in 71b, 71c, and 71d.)
72   Amount of line 70 you want applied to your 2003 estimated tax

Amount You Owe
73   Amount you owe. Subtract line 69 from line 61. For details on how to pay, see page 57
74   Estimated tax penalty (see page 57)
Third Party Designee
     Do you want to allow another person to discuss this return with the IRS (see page 58)?   No
     Designee's name                              Phone no. ( )

Sign Here
     Joint return? See page 21. Keep a copy for your records.
     Your signature: Charles R. Smith, Executor                         Date: 3-25-03
     Spouse's signature (if a joint return, both must sign): Mary L. Smith    Date: 3-25-03
     Daytime phone number ( )                     Spouse's occupation

Paid Preparer's Use Only
     Preparer's signature        Date        Check if self-employed        Preparer's SSN or PTIN
     Firm's name (or yours if self-employed), address, and ZIP code        EIN        Phone no. ( )

Form 1040 (2002)
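The line math on the filled-in Form 1040 above reduces to a few subtractions; a minimal sketch, using Python only as a calculator (figures taken from the example return; the $763 tax comes from the 2002 Tax Table, not computed here):

```python
# Arithmetic check of the example Form 1040 (2002) above.
agi = 22_323                 # line 36, adjusted gross income (see Schedule A, line 2)
itemized_deductions = 8_678  # line 38, from Schedule A, line 28
exemptions = 2 * 3_000       # line 40, two exemptions at $3,000 each

line_39 = agi - itemized_deductions    # 13,645
taxable_income = line_39 - exemptions  # line 41: 7,645

tax = 763                    # line 42, read from the 2002 Tax Table
total_tax = tax              # carried to lines 44, 55, and 61 (no credits or other taxes)

withholding = 1_305          # line 62, Forms W-2 and 1099
refund = withholding - total_tax       # lines 70/71a: 542

print(line_39, taxable_income, refund)
```

The refund on line 70 is simply total payments less total tax, which is why the same 1,305 appears on lines 62 and 69.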
Page 27
SCHEDULES A&B (Form 1040)
Department of the Treasury (99) Internal Revenue Service
Schedule A—Itemized Deductions (Schedule B is on back)
Attach to Form 1040. See Instructions for Schedules A and B (Form 1040).
OMB No. 1545-0074    2002    Attachment Sequence No. 07
Name(s) shown on Form 1040: John R. (Deceased) & Mary L. Smith
Your social security number: 234-00-7890

Medical and Dental Expenses (See page A-2.)
Caution. Do not include expenses reimbursed or paid by others.
1    Medical and dental expenses (see page A-2)                               4,561
2    Enter amount from Form 1040, line 36                     22,323
3    Multiply line 2 by 7.5% (.075)                            1,674
4    Subtract line 3 from line 1. If line 3 is more than line 1, enter -0-    2,887

Taxes You Paid (See page A-2.)
5    State and local income taxes                                891
6    Real estate taxes (see page A-2)                          1,100
9    Add lines 5 through 8                                                    1,991
Interest You Paid
(See page A-3.)
10 11
Gifts to Charity (If you made a gift and got a benefit for it, see page A-4.)
15-18  Gifts to charity (total)                                               3,800

Job Expenses and Most Other Miscellaneous Deductions
21   Tax preparation fees
22   Other expenses—investment, safe deposit box, etc. List type and amount
23   Add lines 20 through 22
24   Enter amount from Form 1040, line 36
25   Multiply line 24 by 2% (.02)
26   Subtract line 25 from line 23. If line 25 is more than line 23, enter -0-

Other Miscellaneous Deductions
27   Other—from list on page A-6. List type and amount

Total Itemized Deductions
28   Is Form 1040, line 36, over $137,300 (over $68,650 if married filing separately)?
     No. Your deduction is not limited. Add the amounts in the far right column for
     lines 4 through 27. Also, enter this amount on Form 1040, line 38.
     Yes. Your deduction may be limited. See page A-6 for the amount to enter.
28   Total itemized deductions                                                8,678
For Paperwork Reduction Act Notice, see Form 1040 instructions.    Cat. No. 11330X
Schedule A (Form 1040) 2002
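The Schedule A amounts above chain together arithmetically; a minimal check, with line assignments as reconstructed from the filled-in schedule (the 7.5% AGI floor applies to medical expenses in 2002):

```python
# Arithmetic check of the example Schedule A above.
agi = 22_323                          # line 2, from Form 1040, line 36
medical = 4_561                       # line 1, medical and dental expenses
floor = round(agi * 0.075)            # line 3: 1,674
medical_deduction = medical - floor   # line 4: 2,887

taxes_paid = 891 + 1_100              # lines 5-6, totaled on line 9: 1,991
gifts = 3_800                         # gifts to charity, lines 15-18

total_itemized = medical_deduction + taxes_paid + gifts  # line 28: 8,678
print(floor, medical_deduction, total_itemized)
```

The 8,678 total is what flows to Form 1040, line 38 on the return shown earlier.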
Page 28
Schedules A&B (Form 1040) 2002 Name(s) shown on Form 1040. Do not enter name and social security number if shown on other side.
OMB No. 1545-0074
Page
2
Your social security number
Schedule B—Interest and Ordinary Dividends
Part I Interest
(See page B-1 and the instructions for Form 1040, line 8a.)
Attachment Sequence No.
08
1
List name of payer. If any interest is from a seller-financed mortgage and the buyer used the property as a personal residence, see page B-1 and list this interest first. Also, show that buyer’s social security number and address
Amount
1    First S&L of Juneville                                                   1,900
     Mayflower Bank of Juneville                                                400
     Series EE U.S. Savings Bonds — Interest Includible Before Decedent's Death 840
Note. If you received a Form 1099-INT, Form 1099-OID, or substitute statement from a brokerage firm, list the firm’s name as the payer and enter the total interest shown on that form.
2    Add the amounts on line 1                                                3,140
3    Excludable interest on series EE and I U.S. savings bonds issued after 1989
     from Form 8815, line 14. You must attach Form 8815                         -0-
4    Subtract line 3 from line 2. Enter the result here and on Form 1040,
     line 8a                                                                  3,140
     Note. If line 4 is over $1,500, you must complete Part III.
5    List name of payer. Include only ordinary dividends. If you received any
     capital gain distributions, see the instructions for Form 1040, line 13
Amount
Part II Ordinary Dividends
(See page B-1 and the instructions for Form 1040, line 9.)
Note. If you received a Form 1099-DIV or substitute statement from a brokerage firm, list the firm’s name as the payer and enter the ordinary dividends shown on that form.
5
6 Add the amounts on line 5. Enter the total here and on Form 1040, line 9 Note. If line 6 is over $1,500, you must complete Part III.
6 Yes No
Part III Foreign Accounts and Trusts
(See page B-2.)
You must complete this part if you (a) had over $1,500 of taxable interest or ordinary dividends; (b) had a foreign account; or (c) received a distribution from, or were a grantor of, or a transferor to, a foreign trust.
7a   At any time during 2002, did you have an interest in or a signature or other authority over a financial account in a foreign country, such as a bank account, securities account, or other financial account? See page B-2 for exceptions and filing requirements for Form TD F 90-22.1
  b  If "Yes," enter the name of the foreign country
8    During 2002, did you receive a distribution from, or were you the grantor of, or transferor to, a foreign trust? If "Yes," you may have to file Form 3520. See page B-2
For Paperwork Reduction Act Notice, see Form 1040 instructions.
Schedule B (Form 1040) 2002
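The Schedule B, Part I entries above are a straight sum of the three payers; a one-line check:

```python
# Arithmetic check of the example Schedule B, Part I above.
interest = {
    "First S&L of Juneville": 1_900,
    "Mayflower Bank of Juneville": 400,
    "Series EE U.S. Savings Bonds (includible before decedent's death)": 840,
}
line_2 = sum(interest.values())  # 3,140
line_4 = line_2 - 0              # nothing excludable on line 3, so 3,140 flows to Form 1040, line 8a
print(line_2, line_4)
```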
Page 29
SCHEDULE E (Form 1040)    2002
Attachment Sequence No. 13
Name(s) shown on return: John R. (Deceased) & Mary L. Smith
Your social security number: 234-00-7890

Part I    Income or Loss From Rental Real Estate and Royalties
Note. If you are in the business of renting personal property, use Schedule C or C-EZ (see page E-3). Report farm rental income or loss from Form 4835 on page 2, line 39.
1    Show the kind and location of each rental real estate property:
     A  House, 137 Main Street, Juneville, ME 00000
3    Rents received                                                          12,000
5-18 Expenses (lines 5 through 18, including "Other (list)"): 260, 410, 350,
     and 700
19   Add lines 5 through 18                                                   1,720
20   Depreciation expense or depletion (see page E-4)                         2,097
21   Total expenses. Add lines 19 and 20                                      3,817
22   Income or (loss) from rental real estate or royalty properties. Subtract
     line 21 from line 3 (rents) or line 4 (royalties). If the result is a
     (loss), see page E-5 to find out if you must file Form 6198              8,183
23   Deductible rental real estate loss. Caution. Your rental real estate loss
     on line 22 may be limited. See page E-5 to find out if you must file
     Form 8582. Real estate professionals must complete line 42 on page 2
24   Income. Add positive amounts shown on line 22. Do not include any losses 8,183
25   Losses. Add royalty losses from line 22 and rental real estate losses
     from line 23. Enter total losses here
26   Total rental real estate and royalty income or (loss)                    8,183

For Paperwork Reduction Act Notice, see Form 1040 instructions.    Cat. No. 11344L
Schedule E (Form 1040) 2002
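The Schedule E, Part I figures above for the rental house also check out as simple arithmetic (the individual expense lines are as reconstructed; their sum is what the form shows on line 19):

```python
# Arithmetic check of the example Schedule E above.
rents = 12_000                      # line 3, rents received
expenses = 260 + 410 + 350 + 700    # lines 5-18, totaled on line 19: 1,720
depreciation = 2_097                # line 20, from Form 4562

total_expenses = expenses + depreciation    # line 21: 3,817
net_rental_income = rents - total_expenses  # line 22: 8,183
print(total_expenses, net_rental_income)
```

The 8,183 on lines 22, 24, and 26 is the same number; it flows to Form 1040, line 17.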
Page 30
Form
4562
Depreciation and Amortization
(Including Information on Listed Property)
See separate instructions. Attach to your tax return.
Business or activity to which this form relates
OMB No. 1545-0172
2002
Attachment Sequence No.
Department of the Treasury Internal Revenue Service
67
Name(s) shown on return: John R. (Deceased) & Mary L. Smith
Identifying number: 234-00-7890

Part I    Election To Expense Certain Tangible Property Under Section 179
Note: If you have any listed property, complete Part V before you complete Part I.
1    Maximum amount. See page 2 of the instructions for a higher limit for
     certain businesses                                                     $24,000
2    Total cost of section 179 property placed in service (see page 2 of the
     instructions)
3    Threshold cost of section 179 property before reduction in limitation $200,000
4    Reduction in limitation. Subtract line 3 from line 2. If zero or less,
     enter -0-
5    Dollar limitation for tax year. Subtract line 4 from line 1. If zero or
     less, enter -0-. If married filing separately, see page 2 of the
     instructions
6    (a) Description of property   (b) Cost (business use only)   (c) Elected cost
7-10 Listed property; total elected cost; tentative deduction; carryover of
     disallowed deduction from your 2001 Form 4562
11   Business income limitation. Enter the smaller of business income (not
     less than zero) or line 5 (see instructions)
12   Section 179 expense deduction. Add lines 9 and 10, but do not enter more
     than line 11
13   Carryover of disallowed deduction to 2003. Add lines 9 and 10, less line 12
Note: Do not use Part II or Part III below for listed property. Instead, use Part V.
Part II   Special Depreciation Allowance and Other Depreciation (Do not include listed property.)
14   Special depreciation allowance for qualified property (other than listed
     property) placed in service during the tax year (see page 3 of the
     instructions)
15   Property subject to section 168(f)(1) election (see page 4 of the
     instructions)
16   Other depreciation (including ACRS) (see page 4 of the instructions)

Part III  MACRS Depreciation (Do not include listed property.) (See page 4 of the instructions.)
Section A
17   MACRS deductions for assets placed in service in tax years beginning
     before 2002                                                              1,211
18   If you are electing under section 168(i)(4) to group any assets placed in
     service during the tax year into one or more general asset accounts,
     check here
Section B—Assets Placed in Service During 2002 Tax Year Using the General Depreciation System
(b) Month and year placed in service (c) Basis for depreciation (business/investment use only—see instructions) (d) Recovery period (e) Convention (f) Method
(a) Classification of property
(g) Depreciation deduction
19a  3-year property
  b  5-year property
  c  7-year property
  d  10-year property
  e  15-year property
  f  20-year property
  g  25-year property                        25 yrs.             S/L
  h  Residential rental property             27.5 yrs.    MM     S/L
  i  Nonresidential real property            39 yrs.      MM     S/L

Section C—Assets Placed in Service During 2002 Tax Year Using the Alternative Depreciation System
20a  Class life                                                  S/L
  b  12-year                                 12 yrs.             S/L
  c  40-year      4-02      50,000           40 yrs.      MM     S/L          886

Part IV
Summary (see page 6 of the instructions)
21   Listed property. Enter amount from line 28
22   Total. Add amounts from line 12, lines 14 through 17, lines 19 and 20 in
     column (g), and line 21. Enter here and on the appropriate lines of your
     return. Partnerships and S corporations—see instructions                 2,097
23   For assets shown above and placed in service during the current year,
     enter the portion of the basis attributable to section 263A costs

For Paperwork Reduction Act Notice, see separate instructions.    Cat. No. 12906N
Form 4562 (2002)
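The Form 4562 entries above are consistent with straight-line depreciation of the 50,000 basis over 40 years under the alternative depreciation system, mid-month convention, for property placed in service in April 2002 (so 8.5 of 12 months count in year one). A sketch, noting an assumption: the form's 886 matches a rounded first-year table percentage of 1.771% rather than the raw fraction, which gives 885.42:

```python
# Arithmetic check of the example Form 4562 above (assumed 1.771% table rate).
from decimal import Decimal, ROUND_HALF_UP

basis = 50_000
exact_first_year = basis * Decimal("8.5") / 12 / 40   # 885.41..., raw mid-month fraction
table_first_year = (basis * Decimal("0.01771")).quantize(
    Decimal("1"), rounding=ROUND_HALF_UP)             # 886, as entered on line 20c

prior_macrs = 1_211                                   # line 17, pre-2002 assets
total = prior_macrs + int(table_first_year)           # line 22: 2,097
print(table_first_year, total)
```

The 2,097 total is the depreciation carried to Schedule E, line 20.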
Page 31
Form 1041 — Department of the Treasury—Internal Revenue Service
U.S. Income Tax Return for Estates and Trusts                  OMB No. 1545-0092    2002
For calendar year 2002 or fiscal year beginning          , 2002, and ending          , 20

A    Type of entity (see instr.): Decedent's estate / Simple trust / Complex trust /
     Qualified disability trust / ESBT (S portion only) / Grantor type trust /
     Bankruptcy estate–Ch. 7 / Bankruptcy estate–Ch. 11 / Pooled income fund
B    Number of Schedules K-1 attached (see instructions): 1
C    Name of estate or trust (if a grantor type trust, see page 10 of the instructions):
     Estate of John R. Smith
     Employer identification number: 10-0123456
D    Date entity created: 4-9-02
E    Nonexempt charitable and split-interest trusts, check applicable boxes (see page 11
     of the instructions): Described in section 4947(a)(1) / Not a private foundation /
     Described in section 4947(a)(2)
     Name and title of fiduciary: Charles R. Smith, Executor
     Number, street, and room or suite no. (if a P.O. box, see page 10 of the
     instructions): 6406 Mayflower St.
     City or town, state, and ZIP code: Juneville, ME 00000
F    Check applicable boxes: Initial return / Final return / Amended return /
     Change in fiduciary's name / Change in fiduciary's address
G    Pooled mortgage account (see page 12 of the instructions): Bought / Sold / Date:
Income
1    Interest income                                                          2,250
2    Ordinary dividends                                                         750
5    Rents, royalties, partnerships, other estates and trusts, etc.          12,000
8    Other income (IRD: salary and vacation pay). List type and amount          200
9    Total income. Combine lines 1 through 8                                 15,200

Deductions
10-15  Deductions, including allowable miscellaneous itemized deductions
       subject to the 2% floor (2,250) and other deductions (325)
16   Add lines 10 through 15b                                                 2,575
17   Adjusted total income or (loss). Subtract line 16 from line 9           12,625
18   Income distribution deduction (attach Schedules K-1 (Form 1041))         2,000
19   Estate tax deduction (including certain generation-skipping taxes)
     (attach computation)
20   Exemption                                                                  600
21   Total deductions. Add lines 18 through 20                                2,600
22   Taxable income. Subtract line 21 from line 17. If a loss, see page 17
     of the instructions                                                     10,025

Tax and Payments
23   Total tax (from Schedule G, line 7)                                      2,826
24a  2002 estimated tax payments and amount applied from 2001 return
  b  Estimated tax payments allocated to beneficiaries (from Form 1041-T)
  c  Subtract line 24b from line 24a
  d  Tax paid with extension of time to file: Form 2758 / Form 8736 / Form 8800
  e  Federal income tax withheld. If any is from Form(s) 1099, check here
     Other payments: f Form 2439; g Form 4136; h Total
25   Total payments. Add lines 24c through 24e, and 24h
26   Estimated tax penalty (see page 17 of the instructions)
27   Tax due. If line 25 is smaller than the total of lines 23 and 26, enter
     amount owed                                                              2,826
28   Overpayment. If line 25 is larger than the total of lines 23 and 26,
     enter amount overpaid
29   Amount of line 28 to be: a Credited to 2003 estimated tax; b Refunded
Sign Here
     Signature of fiduciary or officer representing fiduciary: Charles R. Smith, Executor
     Date: 3-24-03        EIN of fiduciary if a financial institution:
     May the IRS discuss this return with the preparer shown below (see instr.)?  Yes / No

Paid Preparer's Use Only
     Preparer's signature        Date        Check if self-employed        Preparer's SSN or PTIN
     Firm's name (or yours if self-employed), address, and ZIP code        EIN        Phone no. ( )

For Paperwork Reduction Act Notice, see the separate instructions.    Cat. No. 11370H
Form 1041 (2002)
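The estate's Form 1041 above also reduces to a short chain of sums and subtractions; a minimal sketch (the split of the 2,575 of deductions across lines 10-15 is as reconstructed, but the totals are the form's own):

```python
# Arithmetic check of the example Form 1041 for the Estate of John R. Smith.
income = 2_250 + 750 + 12_000 + 200   # interest + dividends + rents + IRD, line 9: 15,200
deductions = 2_250 + 325              # lines 10-15, totaled on line 16: 2,575
adjusted_total_income = income - deductions            # line 17: 12,625

distribution_deduction = 2_000        # line 18, from Schedule B, line 15
exemption = 600                       # line 20
taxable_income = adjusted_total_income - (distribution_deduction + exemption)  # line 22: 10,025

total_tax = 2_826                     # line 23, from Schedule G, line 7
print(adjusted_total_income, taxable_income, total_tax)
```

With no payments entered on line 25, the same 2,826 appears as tax due on line 27.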
Page 32
Form 1041 (2002)
Page
2
Schedule A    Charitable Deduction. Do not complete for a simple trust or a pooled income fund.
1    Amounts paid or permanently set aside for charitable purposes from gross
     income (see page 18)
2    Tax-exempt income allocable to charitable contributions (see page 18 of
     the instructions)
3    Subtract line 2 from line 1
4    Capital gains for the tax year allocated to corpus and paid or permanently
     set aside for charitable purposes
5    Add lines 3 and 4
6    Section 1202 exclusion allocable to capital gains paid or permanently set
     aside for charitable purposes (see page 18 of the instructions)
7    Charitable deduction. Subtract line 6 from line 5. Enter here and on
     page 1, line 13
Schedule B    Income Distribution Deduction
1    Adjusted total income (see page 18 of the instructions)                 12,625
2    Adjusted tax-exempt interest
3    Total net gain from Schedule D (Form 1041), line 16, column (1) (see
     page 19 of the instructions)
4    Enter amount from Schedule A, line 4 (reduced by any allocable section
     1202 exclusion)
5    Capital gains for the tax year included on Schedule A, line 1 (see
     page 19 of the instructions)
6    Enter any gain from page 1, line 4, as a negative number. If page 1,
     line 4, is a loss, enter the loss as a positive number                    (200)
7    Distributable net income (DNI). Combine lines 1 through 6. If zero or
     less, enter -0-                                                         12,425
8    If a complex trust, enter accounting income for the tax year as
     determined under the governing instrument and applicable local law
9    Income required to be distributed currently                              2,000
10   Other amounts paid, credited, or otherwise required to be distributed
11   Total distributions. Add lines 9 and 10. If greater than line 8, see
     page 19 of the instructions                                              2,000
12   Enter the amount of tax-exempt income included on line 11
13   Tentative income distribution deduction. Subtract line 12 from line 11   2,000
14   Tentative income distribution deduction. Subtract line 2 from line 7.
     If zero or less, enter -0-                                              12,425
15   Income distribution deduction. Enter the smaller of line 13 or line 14
     here and on page 1, line 18                                              2,000
Schedule G    Tax Computation (see page 20 of the instructions)
1    Tax: a Tax rate schedule or Schedule D (Form 1041)                       2,826
        b Tax on lump-sum distributions (attach Form 4972)
        c Alternative minimum tax (from Schedule I, line 56)
        d Total. Add lines 1a through 1c                                      2,826
2a   Foreign tax credit (attach Form 1116)
  b  Other nonbusiness credits (attach schedule)
  c  General business credit. Enter here and check which forms are attached:
     Form 3800 / Forms (specify)
  d  Credit for prior year minimum tax (attach Form 8801)
3    Total credits. Add lines 2a through 2d                                     -0-
4    Subtract line 3 from line 1d. If zero or less, enter -0-                 2,826
5    Recapture taxes. Check if from: Form 4255 / Form 8611
6    Household employment taxes. Attach Schedule H (Form 1040)
7    Total tax. Add lines 4 through 6. Enter here and on page 1, line 23      2,826
Other Information                                                          Yes  No
1    Did the estate or trust receive tax-exempt income? If "Yes," attach a
     computation of the allocation of expenses. Enter the amount of tax-exempt
     interest income and exempt-interest dividends: $
2    Did the estate or trust receive all or any part of the earnings (salary,
     wages, and other compensation) of any individual by reason of a contract
     assignment or similar arrangement?
3    At any time during calendar year 2002, did the estate or trust have an
     interest in or a signature or other authority over a bank, securities, or
     other financial account in a foreign country? See page 21 of the
     instructions for exceptions and filing requirements for Form TD F 90-22.1.
     If "Yes," enter the name of the foreign country
4    During the tax year, did the estate or trust receive a distribution from,
     or was it the grantor of, or transferor to, a foreign trust? If "Yes," the
     estate or trust may have to file Form 3520. See page 21 of the instructions
5    Did the estate or trust receive, or pay, any qualified residence interest
     on seller-provided financing? If "Yes," see page 21 for required attachment
6    If this is an estate or a complex trust making the section 663(b) election,
     check here (see page 21)
7    To make a section 643(e)(3) election, attach Schedule D (Form 1041), and
     check here (see page 21)
8    If the decedent's estate has been open for more than 2 years, attach an
     explanation for the delay in closing the estate, and check here
9    Are any present or future trust beneficiaries skip persons? See page 21 of
     the instructions
Form
1041
(2002)
Page 33
Form 1041 (2002)
Page
3
Schedule I Alternative Minimum Tax (see pages 21 through 27 of the instructions) Part I—Estate’s or Trust’s Share of Alternative Minimum Taxable Income
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 Adjusted total income or (loss) (from page 1, line 17) Interest Taxes Miscellaneous itemized deductions (from page 1, line 15b) Refund of taxes Depletion (difference between regular tax and AMT) Net operating loss deduction. Enter as a positive amount Interest from specified private activity bonds exempt from the regular tax Qualified small business stock (42% of gain excluded under section 1202) Exercise of incentive stock options (excess of AMT income over regular tax income) Other estates and trusts (amount from Schedule K-1 (Form 1041), line 9) Electing large partnerships (amount from Schedule K-1 (Form 1065-B), box 6) Disposition of property (difference between AMT and regular tax gain or loss) Depreciation) Research and experimental costs (difference between regular tax and AMT) Income from certain installment sales before January 1, 1987 Intangible drilling costs preference Other adjustments, including income-based related adjustments Alternative tax net operating loss deduction Adjusted alternative minimum taxable income. Combine lines 1 through 24 Note: Complete Part II below before going to line 26. 2,000 26 Income distribution deduction from line 44 below 27 Estate tax deduction (from page 1, line 19) Add lines 26 and 27 Estate’s or trust’s share of alternative minimum taxable income. Subtract line 28 from line 25. If line 29 is: ● $22,500 or less, stop here and enter -0- on Schedule G, line 1c. The estate or trust is not liable for the alternative minimum tax. ● Over $22,500, but less than $165,000, go to line 45. ● $165,000 or more, enter the amount from line 29 on line 51 and go to line 52. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
1    Adjusted total income or (loss) (from page 1, line 17)                  12,625
4    Miscellaneous itemized deductions (from page 1, line 15b)                2,250
25   Adjusted alternative minimum taxable income. Combine lines 1 through 24 14,875
28   Add lines 26 and 27                                                      2,000
29   Estate's or trust's share of alternative minimum taxable income.
     Subtract line 28 from line 25                                           12,875
Part II—Income Distribution Deduction on a Minimum Tax Basis
30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 Adjusted alternative minimum taxable income (see page 25 of the instructions) Adjusted tax-exempt interest (other than amounts included on line 8) Total net gain from Schedule D (Form 1041), line 16, column (1). If a loss, enter -0Capital gains for the tax year allocated to corpus and paid or permanently set aside for charitable purposes (from Schedule A, line 4) Capital gains paid or permanently set aside for charitable purposes from gross income (see page 26 of the instructions) Capital gains computed on a minimum tax basis included on line 25 Capital losses computed on a minimum tax basis included on line 25. Enter as a positive amount Distributable net alternative minimum taxable income (DNAMTI). Combine lines 30 through 36. If zero or less, enter -0Income required to be distributed currently (from Schedule B, line 9) Other amounts paid, credited, or otherwise required to be distributed (from Schedule B, line 10) Total distributions. Add lines 38 and 39 Tax-exempt income included on line 40 (other than amounts included on line 8) Tentative income distribution deduction on a minimum tax basis. Subtract line 41 from line 40 Tentative income distribution deduction on a minimum tax basis. Subtract line 31 from line 37. If zero or less, enter -0Income distribution deduction on a minimum tax basis. Enter the smaller of line 42 or line 43. Enter here and on line 26
30   Adjusted alternative minimum taxable income                             14,875
35   Capital gains computed on a minimum tax basis included on line 25         (200)
37   Distributable net alternative minimum taxable income (DNAMTI)           14,675
38   Income required to be distributed currently (from Schedule B, line 9)    2,000
40   Total distributions. Add lines 38 and 39                                 2,000
42   Tentative income distribution deduction on a minimum tax basis           2,000
43   Tentative income distribution deduction on a minimum tax basis.
     Subtract line 31 from line 37. If zero or less, enter -0-               14,675
44   Income distribution deduction on a minimum tax basis. Enter the smaller
     of line 42 or line 43. Enter here and on line 26                         2,000

Form 1041 (2002)
Page 34
SCHEDULE D (Form 1041)
Department of the Treasury Internal Revenue Service
OMB No. 1545-0092
Capital Gains and Losses
Attach to Form 1041 (or Form 5227). See the separate instructions for Form 1041 (or Form 5227).
2002
Name of estate or trust: Estate of John R. Smith
Employer identification number: 10-0123456

Part I
Note: Form 5227 filers need to complete only Parts I and II.
Short-Term Capital Gains and Losses—Assets Held One Year or Less
(b) Date acquired (mo., day, yr.) (c) Date sold (mo., day, yr.) (d) Sales price (e) Cost or other basis (see page 29)
(a) Description of property (Example, 100 shares 7% preferred of “Z” Co.)
1
2 3 4 5
Short-term capital gain or (loss) from Forms 4684, 6252, 6781, and 8824 Net short-term gain or (loss) from partnerships, S corporations, and other estates or trusts Short-term capital loss carryover. Enter the amount, if any, from line 9 of the 2001 Capital Loss Carryover Worksheet Net short-term gain or (loss). Combine lines 1 through 4 in column (f). Enter here and on line 14 below
(f) Gain or (Loss) (col. (d) less col. (e))
Part II
Long-Term Capital Gains and Losses—Assets Held More Than One Year
(b) Date acquired (mo., day, yr.) (c) Date sold (mo., day, yr.) (d) Sales price (e) Cost or other basis (see page 29) (f) Gain or (Loss) (col. (d) less col. (e)) (g) 28% Rate Gain or (Loss) *(see instr. below)
(a) Description of property (Example, 100 shares 7% preferred of “Z” Co.)
6    Coin Collection    acquired 4-9-02    sold 9-22-02    sales price 3,000
     cost or other basis 2,800    gain 200    28% rate gain 200
7    Long-term capital gain or (loss) from Forms 2439, 4684, 6252, 6781, and 8824
8    Net long-term gain or (loss) from partnerships, S corporations, and other
     estates or trusts
9    Capital gain distributions
10   Gain from Form 4797, Part I
11   Long-term capital loss carryover. Enter in both columns (f) and (g) the
     amount, if any, from line 14, of the 2001 Capital Loss Carryover Worksheet
12   Combine lines 6 through 11 in column (g)                                   200
13   Net long-term gain or (loss). Combine lines 6 through 11 in column (f).
     Enter here and on line 15 below                                            200
*28% rate gain or loss includes all “collectibles gains and losses” (as defined on page 30 of the instructions) and up to 50% of the eligible gain on qualified small business stock (see page 28 of the instructions).
Part III
Summary of Parts I and II
(amounts shown in columns (2) Estate's or trust's and (3) Total; column (1)
Beneficiaries' (see page 30) is blank)
14   Net short-term gain or (loss) (from line 5 above)
15a  Net long-term gain or (loss): total for year (from line 13 above)   200    200
  b  28% rate gain or (loss) (from line 12 above)                        200    200
  c  Qualified 5-year gain
  d  Unrecaptured section 1250 gain (see line 17 of the worksheet on page 31)
16   Total net gain or (loss). Combine lines 14 and 15a                  200    200
Note: If line 16, column (3), is a net gain, enter the gain on Form 1041, line 4.
If lines 15a and 16, column (2), are net gains, go to Part V, and do not complete
Part IV. If line 16, column (3), is a net loss, complete Part IV and the Capital
Loss Carryover Worksheet, as necessary.
For Paperwork Reduction Act Notice, see the Instructions for Form 1041.
Cat. No. 11376V
Schedule D (Form 1041) 2002
Page 35
Schedule D (Form 1041) 2002
Page
2
Part IV
17   Enter here and enter as a (loss) on Form 1041, line 4, the smaller of:
     a  The loss on line 16, column (3), or
     b  $3,000
     If the loss on line 16, column (3), is more than $3,000, or if Form 1041,
     page 1, line 22, is a loss, complete the Capital Loss Carryover Worksheet
     on page 32 of the instructions to determine your capital loss carryover.
Part V
Tax Computation Using Maximum Capital Gains Rates (Complete this part only if both lines 15a and 16 in column (2) are gains, and Form 1041, line 22 is more than zero.)
Note: If line 15b, column (2), or line 15d, column (2), is more than zero,
complete the worksheet on page 34 of the instructions to figure the amount to
enter on lines 20 and 38 below and skip all other lines below. Otherwise, go to
line 18.
18   Enter taxable income from Form 1041, line 22
19   Enter the smaller of line 15a or 16 in column (2)
20   If the estate or trust is filing Form 4952, enter the amount from line 4e;
     otherwise, enter -0-
21   Subtract line 20 from line 19. If zero or less, enter -0-
22   Subtract line 21 from line 18. If zero or less, enter -0-
23   Figure the tax on the amount on line 22. Use the 2002 Tax Rate Schedule on
     page 20 of the instructions
24   Enter the smaller of the amount on line 18 or $1,850
     If line 24 is greater than line 22, go to line 25. Otherwise, skip lines
     25 through 31 and go to line 32.
25   Enter the amount from line 22
26   Subtract line 25 from line 24. If zero or less, enter -0- and go to line 32
27   Enter the estate's or trust's allocable portion of qualified 5-year gain,
     if any, from line 15c, column (2)
28   Enter the smaller of line 26 or line 27
29   Multiply line 28 by 8% (.08)
30   Subtract line 28 from line 26
31   Multiply line 30 by 10% (.10)
     If the amounts on lines 21 and 26 are the same, skip lines 32 through 35
     and go to line 36.
32   Enter the smaller of line 18 or line 21
33   Enter the amount, if any, from line 26
34   Subtract line 33 from line 32
35   Multiply line 34 by 20% (.20)
36   Add lines 23, 29, 31, and 35
37   Figure the tax on the amount on line 18. Use the 2002 Tax Rate Schedule on
     page 20 of the instructions
38   Tax on all taxable income (including capital gains). Enter the smaller of
     line 36 or line 37 here and on line 1a of Schedule G, Form 1041          2,826
Schedule D (Form 1041) 2002
Page 36
SCHEDULE K-1 (Form 1041)
Department of the Treasury Internal Revenue Service
Beneficiary’s Share of Income, Deductions, Credits, etc.
for the calendar year 2002, or fiscal year beginning , 2002, ending , 20 Complete a separate Schedule K-1 for each beneficiary.
OMB No. 1545-0092
2002
Amended K-1 Final K-1
Name of trust or decedent's estate: Estate of John R. Smith
Estate's or trust's EIN: 10-0123456
Beneficiary's identifying number: 123-00-6789
Beneficiary's name, address, and ZIP code:
     James Smith, 6407 Mayflower Street, Juneville, ME 00000
Fiduciary's name, address, and ZIP code:
     Charles R. Smith, Executor, 6406 Mayflower Street, Juneville, ME 00000

(a) Allocable share item    (b) Amount    (c) Calendar year 2002 Form 1040 filers enter the amounts in column (b) on:
1    Interest                                        300    Schedule B, Part I, line 1
2    Ordinary dividends                              100    Schedule B, Part II, line 5
3    Net short-term capital gain                            Schedule D, line 5
4    Net long-term capital gain:
   a Total for year                                         Schedule D, line 12, column (f)
   b 28% rate gain                                          Schedule D, line 12, column (g)
   c Qualified 5-year gain                                  Line 5 of the worksheet for Schedule D, line 29
   d Unrecaptured section 1250 gain                         Line 11 of the worksheet for Schedule D, line 19
5a   Annuities, royalties, and other nonpassive
     income before directly apportioned deductions          Schedule E, Part III, column (f)
  b-d Depreciation; depletion; amortization                 Include on the applicable line of the appropriate tax form
6a   Trade or business, rental real estate, and
     other rental income before directly
     apportioned deductions (see instructions)     1,600    Schedule E, Part III
  b-d Depreciation; depletion; amortization                 Include on the applicable line of the appropriate tax form
7    Income for minimum tax purposes               2,000
8    Income for regular tax purposes (add lines
     1, 2, 3, 4a, 5a, and 6a)                      2,000
9    Adjustment for minimum tax purposes
     (subtract line 8 from line 7)                          Form 6251, line 14
10   Estate tax deduction (including certain
     generation-skipping transfer taxes)                    Schedule A, line 27
11   Foreign taxes                                          Form 1040, line 45 or Schedule A, line 8
12   Adjustments and tax preference items (itemize):
   a Accelerated depreciation                               Include on the applicable line of Form 6251
   b Depletion                                              Include on the applicable line of Form 6251
   c Amortization                                           Include on the applicable line of Form 6251
   d Exclusion items                                        2003 Form 8801
13   Deductions in the final year of trust or decedent's estate:
   a Excess deductions on termination (see instructions)    Schedule A, line 22
   b Short-term capital loss carryover                      Schedule D, line 5
   c Long-term capital loss carryover                       Schedule D, line 12, columns (f) and (g)
   d Net operating loss (NOL) carryover for regular
     tax purposes                                           Form 1040, line 21
   e NOL carryover for minimum tax purposes                 See the instructions for Form 6251, line 27
   f, g                                                     Include on the applicable line of the appropriate tax form
14   Other (itemize):
   a Payments of estimated taxes credited to you            Form 1040, line 63
   b Tax-exempt interest                                    Form 1040, line 8b
   c-h                                                      Include on the applicable line of the appropriate tax form
For Paperwork Reduction Act Notice, see the Instructions for Form 1041.
Cat. No. 11380D
Schedule K-1 (Form 1041) 2002
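On the Schedule K-1 above, the beneficiary's line 8 (income for regular tax purposes) is simply the sum of the distributed items; a quick check:

```python
# Arithmetic check of the example Schedule K-1 for beneficiary James Smith.
interest = 300        # line 1, reported on Schedule B, Part I, line 1
dividends = 100       # line 2, reported on Schedule B, Part II, line 5
rental = 1_600        # line 6a, reported on Schedule E, Part III

line_8 = interest + dividends + rental   # income for regular tax purposes: 2,000
line_7 = 2_000                           # income for minimum tax purposes
adjustment = line_7 - line_8             # line 9: 0, no AMT adjustment
print(line_8, adjustment)
```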
Page 37
Schedule D Tax Worksheet — Keep for Your Records

Complete this worksheet only if line 15b, column (2), or line 15d, column (2), of
Schedule D is more than zero. Otherwise, complete Part V of Schedule D to figure
the estate's or trust's tax. Exception: Do not use Schedule D, Part V, or this
worksheet to figure the estate's or trust's tax if line 15a, column (2), or line
16, column (2), of Schedule D or Form 1041, line 22, is zero or less; instead,
see the instructions for Schedule G, line 1a of Form 1041.

1.   Enter your taxable income from Form 1041, line 22                       10,025
2.   Enter the smaller of line 15a or line 16 in column (2) of
     Schedule D                                                    200
3.   If you are filing Form 4952, enter the amount from line 4e of
     that form. Otherwise, enter -0-. Also enter this amount on
     Schedule D, line 20                                           -0-
4.   Subtract line 3 from line 2. If zero or less, enter -0-       200
5.   Combine lines 14 and 15b, column (2), of Schedule D. If zero
     or less, enter -0-                                            200
6.   Enter the smaller of line 5 above or Schedule D, line 15b,
     column (2), but not less than zero                            200
7.   Enter the amount from Schedule D, line 15d, column (2)        -0-
8.   Add lines 6 and 7                                             200
9.   Subtract line 8 from line 4. If zero or less, enter -0-       -0-
10.  Subtract line 9 from line 1. If zero or less, enter -0-    10,025
11.  Enter the smaller of the amount on line 1 or $1,850         1,850
12.  Enter the smaller of line 10 or line 11                     1,850
13.  Subtract line 4 from line 1. If zero or less, enter -0-     9,825
14.  Enter the larger of line 12 or line 13                      9,825
15.  Figure the tax on the amount on line 14. Use the 2002 Tax Rate
     Schedule on page 20                                                      2,770
     If lines 11 and 12 are the same, skip lines 16 through 21 and go to
     line 22. Otherwise, go to line 16.
16.  Subtract line 12 from line 11
17.  Enter the estate's or trust's allocable portion of qualified 5-year
     gain, if any, from Schedule D, line 15c, column (2)
18.  Enter the smaller of line 16 above or line 17 above
19.  Multiply line 18 by 8% (.08)
20.  Subtract line 18 from line 16
21.  Multiply line 20 by 10% (.10)
     If lines 1 and 11 are the same, skip lines 22 through 34 and go to
     line 35. Otherwise, go to line 22.
22.  Enter the smaller of line 1 or line 9                         -0-
23.  Enter the amount from line 16. If blank, enter -0-            -0-
24.  Subtract line 23 from line 22                                 -0-
25.  Multiply line 24 by 20% (.20)
     If line 7 is zero or blank, skip lines 26 through 31 and go to
     line 32. Otherwise, go to line 26.
26.  Enter the smaller of line 4 or line 7
27.  Add lines 4 and 14
28.  Enter the amount from line 1
29.  Subtract line 28 from line 27. If zero or less, enter -0-
30.  Subtract line 29 from line 26. If zero or less, enter -0-
31.  Multiply line 30 by 25% (.25)
     If line 6 is zero, skip lines 32 through 34 and go to line 35.
     Otherwise, go to line 32.
32.  Add lines 14, 16, 24, and 30                                9,825
33.  Subtract line 32 from line 1                                  200
34.  Multiply line 33 by 28% (.28)                                  56
35.  Add lines 15, 19, 21, 25, 31, and 34                                     2,826
36.  Figure the tax on the amount on line 1. Use the 2002 Tax Rate
     Schedule on page 20                                                      2,847
37.  Tax on all taxable income (including capital gains). Enter the smaller
     of line 35 or line 36. Also, enter this amount on Schedule D, line 38,
     and line 1a of Schedule G, Form 1041                                     2,826
Page 38
Table A. Checklist of Forms and Due Dates —For Executor, Administrator, or Personal Representative
Form SS-4 (Application for Employer Identification Number): As soon as possible. The identification number must be included in returns, statements, and other documents.
Form 56 (Notice Concerning Fiduciary Relationship): As soon as all necessary information is available.*
Form 706 (United States Estate (and Generation-Skipping Transfer) Tax Return): 9 months after date of decedent's death.
Form 706-A (United States Additional Estate Tax Return): 6 months after cessation or disposition of special-use valuation property.
Form 706-CE (Certificate of Payment of Foreign Death Tax): 9 months after decedent's death. To be filed with Form 706.
Form 706-GS(D) (Generation-Skipping Transfer Tax Return for Distributions): See form instructions.
Form 706-GS(D-1) (Notification of Distribution From a Generation-Skipping Trust): See form instructions.
Form 706-GS(T) (Generation-Skipping Transfer Tax Return for Terminations): See form instructions.
Form 706-NA (United States Estate (and Generation-Skipping Transfer) Tax Return, Estate of nonresident not a citizen of the United States): 9 months after date of decedent's death.
Form 712 (Life Insurance Statement): Part I to be filed with estate tax return.
Form 1040 (U.S. Individual Income Tax Return): Generally, April 15th of the year after death.
Form 1040NR (U.S. Nonresident Alien Income Tax Return): See form instructions.
Form 1041 (U.S. Income Tax Return for Estates and Trusts): 15th day of 4th month after end of estate's tax year.
Form 1041-A (U.S. Information Return, Trust Accumulation of Charitable Amounts): April 15th.
Form 1041-T (Allocation of Estimated Tax Payments to Beneficiaries): 65th day after end of estate's tax year.
Form 1041-ES (Estimated Income Tax for Estates and Trusts): Generally, April 15, June 15, Sept. 15, and Jan. 15 for calendar-year filers.
Form 1042 (Annual Withholding Tax Return for U.S. Source Income of Foreign Persons): March 15th.
Form 1042-S (Foreign Person's U.S. Source Income Subject to Withholding): March 15th.
Form 1310 (Statement of Person Claiming Refund Due a Deceased Taxpayer): See form instructions.
Form 2758 (Application for Extension of Time To File Certain Excise, Income, Information, and Other Returns): Sufficiently early to permit IRS to consider the application and reply before the due date of Form 1041.
Form 4768 (Application for Extension of Time To File a Return and/or Pay U.S. Estate (and Generation-Skipping Transfer) Taxes): Sufficiently early to permit IRS to consider the application and reply before the estate tax form due date.
Form 4810 (Request for Prompt Assessment Under Internal Revenue Code Section 6501(d)): As soon as possible after filing Form 1040 or Form 1041.
Form 8300 (Report of Cash Payments Over $10,000 Received in a Trade or Business): 15th day after the date of the transaction.
Form 8822 (Change of Address): As soon as the address is changed.
* A personal representative must report the termination of the estate, in writing, to the Internal Revenue Service. Form 56 can be used for this purpose.
Page 39
Table B. Worksheet To Reconcile Amounts Reported in Name of Decedent on Information Returns (Forms W–2, 1099 –INT, 1099 –DIV, etc.) (Keep for your records)
Name of Decedent Date of Death Decedent’s Social Security Number Estate’s Employer Identification Number (If Any)
Name of Personal Representative, Executor, or Administrator
Source (list each payer) | A: Enter total amount shown on information return
1. Wages
2. Interest income
3. Dividends
4. State income tax refund 5. Capital gains 6. Pension income
7. Rents, royalties
8. Taxes withheld*
9. Other items, such as social security, business and farm income or loss, unemployment compensation, etc.
* List each withholding agent (employer, etc.)
Page 40.
Now you can set up an appointment by calling your local IRS office number and, at the prompt, leaving a message requesting Everyday Tax Solutions help.

Tax Help Line for Individuals. Call with your tax questions at 1-800-829-1040. Or, if your question pertains to an income tax return of an estate, call the Business and Specialty Tax Help Line at 1-800-829-4933.

Solving problems. Take advantage of Everyday Tax Solutions service by calling your local IRS office to set up an in-person appointment at your convenience. Check your local directory assistance.

• View Internal Revenue Bulletins published in the last few years.
• Search regulations and the Internal Revenue Code.
• Receive our electronic newsletters on hot tax issues and news.
• Get information on starting and operating a small business.

You can also reach us with your computer using File Transfer Protocol at.

• Walk-in services. You can walk in to your local IRS office to ask tax questions or get help with a tax problem.

Page 41
Index
To help us develop a more useful index, please let us know if you have ideas for index entries. See “Comments and Suggestions” in the “Introduction” for the ways you can reach us.
A
Accelerated death benefits, 5, 12
Archer MSA, 5, 11
Assistance (See Tax help)

B
Basis:
  Inherited property, 13
  Joint interest property, 13
  Qualified joint interest, 13
Beneficiary:
  Basis of property, 13
  Character of distributions, 21
  Excess deductions, 23
  Income received, 14
  Liability, estate's income tax, 15
  Nonresident alien, 15
  Reporting distributions, 21
  Successor, 23
  Treatment of distributions, 20
  Unused loss carryovers, 22
Bequest:
  Defined, 22
  Property received, 12

C
Claim, credit or refund, 7
Combat zone, 2
Comments, 2
Coverdell education savings account (ESA), 5, 11
Credit:
  Child tax, 6
  Earned income, 6
  Elderly or disabled, 6
  Final return for decedent, 6
  General business, 6

D
Death benefits:
  Accelerated, 5, 12
  Public safety officers, 14
Decedent:
  Final return, 4
  Income in respect of, 8
Deductions:
  Estate tax, 11
  In respect of decedent, 11
  Medical expenses, 5
  Standard, 5
Distributable net income, 17
Distributions:
  Character, 18
  Deduction, 17
  Limit on deduction, 19
  Not treated as bequests, 22
  Property, in kind, 18

E
Education savings account, Coverdell, 5, 11
Estate:
  Income tax return, 14
  Insolvent, 3
  Nonresident alien, 15
  Period of administration, 22
  Tax deduction, 11
  Termination, 22
  Transfer of unused deductions, 22
Estate tax deduction, 11
Estimated tax, 19, 23
Example:
  Comprehensive, 23
  Decedent's final return, 24
  Estate's tax return, 25
Exemption:
  Estate's tax return, 16
  Final return for decedent, 5
Expenses:
  Accrued, 17
  Administration, 17
  Deductions in respect of decedent, 11
  Funeral, 19
  Medical, 5, 19
Extension to file Form 1041, 20

F
Fiduciary relationship, 3
Filing requirements:
  Decedent's final return, 4
  Estate's tax return, 14
Final return for decedent:
  Credits, 6
  Exemption and deductions, 5
  Filing requirements, 4
  Income to include, 4
  Joint return, 4
  Name, address, and signature, 7
  Other taxes, 6
  Payments, 7
  When and where to file, 7
  Who must file, 4
Form:
  1040NR, 4, 14
  1041, 14
  1042, 15
  1310, 4
  4810, 3
  56, 3
  6251, 7
  706, 23
  SS-4, 3
Free tax services, 41
Funeral expenses, 19

G
Gift, property, 12

H
Help (See Tax help)

I
Identification number, application, 3
Income:
  Community, 5
  Distributable net income, 17
  Distributed currently, 20
  Interest and dividend, 5
  Partnership, final return, 4
  S corporation, 5
  Self-employment, 5
Income in respect of decedent, 8, 10
Income tax return of an estate:
  Credits, tax, and payments, 19
  Exemption and deductions, 16
  Filing requirements, 14
  Income to include, 15
  Name, address, and signature, 19
  When and where to file, 19
Inherited IRAs, 14
Inherited property, 12
Installment obligations, 9, 16
Insurance, 12

J
Joint return:
  Revoked by personal representative, 4
  Who can file, 4

L
Losses:
  Deduction on final return, 6
  Estate's tax return, 17

M
Military or terrorist actions:
  Claim for credit or refund, 7
  Defined, 7
  Tax forgiveness, deaths due to, 7
More information (See Tax help)

N
Notice of fiduciary relationship:
  Form 56, 3
  Termination, 3

P
Partnership income, 4, 9
Penalty:
  Information returns, 15
  Substantial valuation misstatement, 13
Personal representative:
  Defined, 2
  Duties, 3
  Fees received, 3
  Penalty, 3
  Two or more, 15
Prompt assessment, request, 3
Public safety officers, death benefits, 14
Publications (See Tax help)

R
Refund:
  File for decedent, 4
  Military or terrorist action deaths, 7
Release from liability, 3
Return:
  Decedent's final, 4
  Estate's income tax, 14
  Information, 15
Roth IRA, 11

S
Separate shares rule, 18
Suggestions, 2
Survivors:
  Income, 14
  Tax benefits, 8

T
Tax:
  Alternative minimum, estate, 19
  Alternative minimum, individuals, 7
  Benefits, survivors, 8
  Estimated, estate, 19, 23
  Payments, final return, 7
  Refund of income (claim), 4
  Self-employment, 6
  Transfer of credit, 23
Tax help, 41
Taxpayer Advocate, 41
Terrorist action, tax relief, 7
Terrorist victim, 2
TTY/TDD information, 41

V
Valuation method:
  Inherited property, 13
  Special-use, 13
Victims of terrorist attacks, 2

W
Widows and widowers, tax benefits, 8
Page 42
SCHEDULE D (Form 1065)
Department of the Treasury Internal Revenue Service
Capital Gains and Losses
Attach to Form 1065.
OMB No. 1545-0099
Name of partnership
Employer identification number
Part I
Short-Term Capital Gains and Losses—Assets Held 1 Year or Less

Columns: (a) Description of property (Example, 100 shares 7% preferred of “Z” Co.); (b) Date acquired (month, day, year); (c) Date sold (month, day, year); (d) Sales price (see instructions); (e) Cost or other basis (see instructions); (f) Gain (loss) ((d) minus (e))

1. (Entries for each sale or exchange)
2. Short-term capital gain from installment sales from Form 6252, line 26 or 37
3. Short-term capital gain (loss) from like-kind exchanges from Form 8824
4. Partnership’s share of net short-term capital gain (loss), including specially allocated short-term capital gains (losses), from other partnerships and from fiduciaries
5. Net short-term capital gain (loss). Combine lines 1 through 4. Enter here and on Form 1065, Schedule K, line 4d or 7
Part II
Long-Term Capital Gains and Losses—Assets Held More Than 1 Year

6. (Entries for each sale or exchange, using the same columns (a) through (f) as Part I)
7. Long-term capital gain from installment sales from Form 6252, line 26 or 37
8. Long-term capital gain (loss) from like-kind exchanges from Form 8824
9. Partnership’s share of net long-term capital gain (loss), including specially allocated long-term capital gains (losses), from other partnerships and from fiduciaries
10. Capital gain distributions
11. Net long-term capital gain (loss). Combine lines 6 through 10. Enter here and on Form 1065, Schedule K, line 4e or 7

General Instructions

(Section references are to the Internal Revenue Code.)

Purpose of Schedule

Use Schedule D (Form 1065) to report sales or exchanges of capital assets, except capital gains (losses) that are specially allocated to any partners.

Capital gains (losses) specially allocated to the partnership as a partner in other partnerships and from fiduciaries are to be entered on Schedule D, line 4 or 9, whichever applies. Capital gains (losses) of the partnership that are specially allocated to partners should be entered directly on line 4d, 4e, or 7 of Schedules K and K-1, whichever applies. Do not include these amounts on Schedule D. See How Income Is Shared Among Partners in the Instructions for Form 1065 for more information.

To report sales or exchanges of property other than capital assets, including the sale or exchange of property used in a trade or business and involuntary conversions (other than casualties and thefts), see Form 4797, Sales of Business Property, and related instructions. If property is involuntarily converted because of a casualty or theft, use Form 4684, Casualties and Thefts.

For amounts received from an installment sale, the holding period rule in effect in the year of sale will determine the treatment of the amounts received as long-term or short-term capital gain.

Report every sale or exchange of property in detail, even if there is no gain or loss. For more information, get Pub. 544, Sales and Other Dispositions of Assets.
Exchange of “Like-Kind” Property
Complete and attach Form 8824, Like-Kind Exchanges, to the partnership’s return to report an exchange of like-kind property. The partnership must report an exchange of business or investment property for “like-kind” property even if no gain or loss on the property is recognized. For exchanges of capital assets, enter the gain or loss from Form 8824, if any, on line 3 or 8. If an exchange was made with a related party, write “Related Party Like-Kind Exchange” in the top margin of Schedule D. See Form 8824 and its instructions for details.
What Are Capital Assets?
Each item of property the partnership held (whether or not connected with its trade or business) is a capital asset except:
1. Stock in trade or other property included in inventory or held mainly for sale to customers.
2. Depreciable or real property used in the trade or business.
3. Copyrights; literary, musical, or artistic compositions; letters or memorandums; or similar property.
4. Accounts or notes receivable acquired in the ordinary course of trade or business for services rendered or from the sale of property described in 1 above.
For Paperwork Reduction Act Notice, see page 1 of the Instructions for Form 1065.
Cat. No. 11393G
Schedule D (Form 1065) 1992
Page 2
Items for Special Treatment and Special Cases
The following items may require special treatment:
● Transactions by a securities dealer.
● Bonds and other debt instruments.
● Certain real estate subdivided for sale that may be considered a capital asset.
● Gain on the sale of depreciable property to a more than 50%-owned entity, or to a trust in which the partnership is a beneficiary, is treated as ordinary gain.
● Liquidating distributions from a corporation. Get Pub. 550, Investment Income and Expenses.
● Gain on disposition of stock in an Interest-Charge Domestic International Sales Corporation or a Foreign Sales Corporation.
● Gain or loss on options to buy or sell, including closing transactions.
● Transfer of property to a foreign corporation as paid-in surplus or as a contribution to capital, or to a foreign trust or partnership.
● Transfer of property to a partnership that would be treated as an investment company if the partnership were incorporated.
● Transfer of property to a political organization if the fair market value of the property exceeds the partnership’s adjusted basis in such property.
● Conversion of a general partnership interest into a limited partnership interest in the same partnership. See Rev. Rul. 84-52, 1984-1 C.B. 157.
● Transfer of partnership assets and liabilities to a newly formed corporation in exchange for all of its stock. See Rev. Rul. 84-111, 1984-2 C.B. 88.
● Contribution of limited partnership interests in exchange for limited partnership interests in another partnership. See Rev. Rul. 84-115, 1984-2 C.B. 118.
● Disposition of foreign investment in a U.S. real property interest. See section 897.
● Any loss from a sale or exchange of property between the partnership and certain related persons is not allowed, except for distributions in complete liquidation of a corporation. See sections 267 and 707(b) for details.
● Any loss from securities that are capital assets that become worthless during the year is treated as a loss from the sale or exchange of a capital asset on the last day of the tax year.
● Gain from the sale or exchange of stock in a collapsible corporation is not a capital gain. See section 341.
● Gains and losses from section 1256 contracts and straddles are reported on Form 6781, Gains and Losses From Section 1256 Contracts and Straddles. If there are limited partners, see section 1256(e)(4) for the limitation on losses from hedging transactions.

If the partnership wants to elect out of the installment method for installment gain that is not specially allocated among the partners, it must do the following on a timely filed return (including extensions):
1. Report the full amount of the gain on Schedule D.
2. If the partnership received a note or other obligation and is reporting it at less than face value (including all contingent obligations), state that fact in the margin, enter the face amount of the note or other obligation, and give the percentage of valuation.

If the partnership wants to elect out of the installment method for installment gain that is specially allocated among the partners, it must do the following on a timely filed return (including extensions):
1. For a short-term capital gain, report the full amount of the gain on Schedule K, line 4d or 7. For a long-term capital gain, report the full amount of the gain on Schedule K, line 4e or 7.
2. Enter each partner’s share of the full amount of the gain on Schedule K-1, line 4d, 4e, or 7, whichever applies.
3. If the partnership received a note or other obligation and is reporting it at less than face value (including all contingent obligations), attach a statement to Form 1065 that states that fact. Also show on the statement the face amount of the note or other obligation and give the percentage of valuation. Label the statement “Specially Allocated Capital Gains from Electing Out of the Installment Method.” If the partnership received more than one note or obligation, list the amounts separately.

passed through to a partner because of a sale of property to a charitable organization, the adjusted basis for determining gain from the sale is an amount that

fee, commission, or option premium before making an entry in column (e).
For more information, get Pub. 551, Basis of Assets.
Lines 4 and 9—Capital Gains and Losses From Other Partnerships and Fiduciaries
See the Schedule K-1 or other information supplied to you by the other partnership or fiduciary.
Line 10—Capital Gain Distributions
On line 10, report as capital gain distributions (a) capital gain dividends; and (b) the partnership’s share of undistributed capital gains from a regulated investment company. Report the partnership’s share of taxes paid on undistributed capital gains by a regulated investment company on Schedule K, line 22, and Schedule K-1, line 23.
Nevermind everyone - I've fixed this one myself. You have to make sure that the Nextion is being powered by the Arduino (not an external source!) or your Serial will say recvRetString[0,]
for anyone who is interested this is my final code
#include <Nextion.h>
#include <Wire.h>
#include <LCD.h>
#include <LiquidCrystal_I2C.h>

LiquidCrystal_I2C lcd(0x27, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE); // 0x27 is the I2C bus address for an unmodified backpack

void b_crlfPopCallback(void *ptr);

// Nextion: assign buttons
NexButton b_crlf = NexButton(1, 23, "b_crlf"); // Enter button

// Nextion: assign text
NexText data1 = NexText(1, 2, "data"); // Data text field

char text_char[17];

NexTouch *nex_listen_list[] = {
  &b_crlf,
  NULL
};

void b_crlfPopCallback(void *ptr) {
  data1.getText(text_char, 17);
  Serial.print(text_char);
  lcd.home();  // set cursor to 0,0
  lcd.clear();
  lcd.print(text_char);
}

void setup() {
  // Set up the touchscreen - MUST BE BEFORE SERIAL.BEGIN!
  nexInit();

  // Set up the serial connection
  Serial.begin(9600);

  // Attach button actions
  b_crlf.attachPop(b_crlfPopCallback, &b_crlf);

  // Set up the LCD screen
  lcd.begin(16, 2); // for 16 x 2 LCD module
  lcd.setBacklightPin(3, POSITIVE);
  lcd.setBacklight(HIGH);
}

void loop() {
  nexLoop(nex_listen_list);
}
PS: When you check the library code (and everyone should check the code)
NexHardware.cpp: Serial is already initialized with dbSerialBegin in nexInit()
Thanks, that's good to know. I'm slowly working my way through - although I tend to be a bit more 'dump a lot of code in and make it work' when I get an idea in my head. Your articles have been a definite help thus far.
Cheers.
It's not forcibly needed to power the Arduino and the Nextion from the same supply. But in case of separate supplies, you'll need to connect the GND of both together, so that they have the same reference potential. Thus, 3 wires are needed: TX -> RX, RX -> TX, GND -> GND
I'm not working with Arduino but with processors powered @3.3V while I power the Nextion @5V and the 3 wire connection as above works without problems.
My Nextion is powered by the Arduino and I still get recvRetString[0,]
Good. You have told one detail, left many out.
- does setup() send ok or err to Serial Monitor?
If ok, timing issue.
it is working after adding a small delay
bushra - stop double posting everywhere
I am not going to type the same answer in every place.
- timing issue in code.
Shane Cullen
Hi,
Apologies if this has already been covered, but a few hours of searching and testing and I'm still not getting the desired effect.
The Setup -
I have keyboard in a hmi file, that populates a text box called data
The Enter button on that keyboard (called b_crlf)
I want to take that data text box (as a string of text) and populate a variable inside my Arduino Mega, so that I can pass that variable into an LCD display when Enter button is pushed.
The actual event of sending text to the LCD on a button push is easy - however I am having no luck getting the text from the DATA box into an Arduino variable.
Any Ideas?
My current Arduino code is below
FileInfo.Delete Method
.NET Framework 4.5
Permanently deletes a file.
Namespace: System.IO
Assembly: mscorlib (in mscorlib.dll)
The following example demonstrates the Delete method.
using System;
using System.IO;

class Test
{
    public static void Main()
    {
        string path = @"c:\MyTest.txt";
        string path2 = path + "temp";
        FileInfo fi1 = new FileInfo(path);
        FileInfo fi2 = new FileInfo(path2);

        try
        {
            // Create the file if it does not already exist.
            using (StreamWriter sw = fi1.CreateText()) {}

            // Copy the file, then delete the copy.
            fi1.CopyTo(path2);
            Console.WriteLine("{0} was copied to {1}.", path, path2);

            fi2.Delete();
            Console.WriteLine("{0} was successfully deleted.", path2);
        }
        catch (Exception e)
        {
            Console.WriteLine("The process failed: {0}", e.ToString());
        }
    }
}
// This code produces output similar to the following:
//
// c:\MyTest.txt was copied to c:\MyTest.txttemp.
// c:\MyTest.txttemp was successfully deleted.
The following example creates, closes, and deletes a file.
using System;
using System.IO;

public class DeleteTest
{
    public static void Main()
    {
        // Create a reference to a file.
        FileInfo fi = new FileInfo("temp.txt");

        // Actually create the file.
        FileStream fs = fi.Create();

        // Modify the file as required, and then close the file.
        fs.Close();

        // Delete the file.
        fi.Delete();
    }
}
- FileIOPermission: for reading and writing files.
Given N lines and one starting point and destination point in 2-dimensional space. These N lines divide the space into some blocks. We need to print the minimum number of jumps to reach destination point from starting point. We can jump from one block to other block only if they share a side.
Examples:
Input : Lines = [x = 0, y = 0, x + y - 2 = 0]
        Start point = [1, 1], Dest point = [-2, -1]
Output : 2
We need to jump 2 times (B4 -> B3 then B3 -> B5, or B4 -> B6 then B6 -> B5) to reach the destination point from the starting point, as shown in the below diagram. Each block i is given an id Bi in the diagram.
We can solve this problem using a property of lines and points: if we substitute two points into a line's equation and the evaluated values have the same sign (both positive or both negative), the points lie on the same side of the line; if the values have opposite signs (one positive, one negative), the points lie on different sides of the line.
Now we can use above property to solve this problem, For each line, we will check if start and destination point lie on the same side or not. If they lie to the different side of a line then that line must be jumped to come closer. As in above diagram start point and the destination point are on the same side of x + y – 2 = 0 line, so that line need not to be jumped, rest two lines need to be jumped because both points lies on opposite side.
Finally, we will check the sign of evaluation of points with respect to each line and we will increase our jump count whenever we found opposite signs. Total time complexity of this problem will be linear.
// C++ program to find minimum jumps to reach
// a given destination from a given source
#include <bits/stdc++.h>
using namespace std;

// To represent a point in 2D space
struct point {
    int x, y;
    point(int x, int y) : x(x), y(y) {}
};

// To represent a line of (ax + by + c = 0) format
struct line {
    int a, b, c;
    line(int a, int b, int c) : a(a), b(b), c(c) {}
    line() {}
};

// Returns 1 if evaluation is greater than 0,
// else returns -1
int evalPointOnLine(point p, line curLine)
{
    int eval = curLine.a * p.x + curLine.b * p.y + curLine.c;
    if (eval > 0)
        return 1;
    return -1;
}

// Returns minimum jumps to reach
// dest point from start point
int minJumpToReachDestination(point start, point dest, line lines[], int N)
{
    int jumps = 0;
    for (int i = 0; i < N; i++) {
        // get sign of evaluation from point
        // co-ordinates and line equation
        int signStart = evalPointOnLine(start, lines[i]);
        int signDest = evalPointOnLine(dest, lines[i]);

        // if both evaluations are of opposite sign,
        // increase jumps by 1
        if (signStart * signDest < 0)
            jumps++;
    }
    return jumps;
}

// Driver code to test above methods
int main()
{
    point start(1, 1);
    point dest(-2, -1);

    line lines[3];
    lines[0] = line(1, 0, 0);
    lines[1] = line(0, 1, 0);
    lines[2] = line(1, 1, -2);

    cout << minJumpToReachDestination(start, dest, lines, 3);
    return 0;
}
Moose::Cookbook::Basics::HTTP_SubtypesAndCoercion - Demonstrates subtypes and coercion use HTTP-related classes (Request, Protocol, etc.)
version 2.0802
package Request;
use Moose;
use Moose::Util::TypeConstraints;

use HTTP::Headers  ();
use Params::Coerce ();
use URI            ();

subtype 'My::Types::HTTP::Headers' => as class_type('HTTP::Headers');

coerce 'My::Types::HTTP::Headers'
    => from 'ArrayRef'
        => via { HTTP::Headers->new( @{$_} ) }
    => from 'HashRef'
        => via { HTTP::Headers->new( %{$_} ) };

subtype 'My::Types::URI' => as class_type('URI');

coerce 'My::Types::URI'
    => from 'Object'
        => via { $_->isa('URI') ? $_ : Params::Coerce::coerce( 'URI', $_ ); }
    => from 'Str'
        => via { URI->new( $_, 'http' ) };

subtype 'Protocol' => as 'Str' => where { /^HTTP\/[0-9]\.[0-9]$/ };

has 'base'     => ( is => 'rw', isa => 'My::Types::URI', coerce => 1 );
has 'uri'      => ( is => 'rw', isa => 'My::Types::URI', coerce => 1 );
has 'method'   => ( is => 'rw', isa => 'Str' );
has 'protocol' => ( is => 'rw', isa => 'Protocol' );
has 'headers'  => (
    is      => 'rw',
    isa     => 'My::Types::HTTP::Headers',
    coerce  => 1,
    default => sub { HTTP::Headers->new }
);
This recipe introduces type coercions, which are defined with the
coerce sugar function. Coercions are attached to existing type constraints, and define a (one-way) transformation from one type to another.
This is very powerful, but it can also have unexpected consequences, so you have to explicitly ask for an attribute to be coerced. To do this, you must set the
coerce attribute option to a true value.
First, we create the subtype to which we will coerce the other types:
subtype 'My::Types::HTTP::Headers' => as class_type('HTTP::Headers');
We are creating a subtype rather than using
HTTP::Headers as a type directly. The reason we do this is that coercions are global, and a coercion defined for
HTTP::Headers in our
Request class would then be defined for all Moose-using classes in the current Perl interpreter. It's a best practice to avoid this sort of namespace pollution.
The
class_type sugar function is simply a shortcut for this:
subtype 'HTTP::Headers' => as 'Object' => where { $_->isa('HTTP::Headers') };
Internally, Moose creates a type constraint for each Moose-using class, but for non-Moose classes, the type must be declared explicitly.
We could go ahead and use this new type directly:
has 'headers' => ( is => 'rw', isa => 'My::Types::HTTP::Headers', default => sub { HTTP::Headers->new } );
This creates a simple attribute which defaults to an empty instance of HTTP::Headers.
The constructor for HTTP::Headers accepts a list of key-value pairs representing the HTTP header fields. In Perl, such a list could be stored in an ARRAY or HASH reference. We want our headers attribute to accept those data structures instead of an HTTP::Headers instance, and just do the right thing. This is exactly what coercion is for:
coerce 'My::Types::HTTP::Headers'
    => from 'ArrayRef'
    => via { HTTP::Headers->new( @{$_} ) }
    => from 'HashRef'
    => via { HTTP::Headers->new( %{$_} ) };
The first argument to coerce is the type to which we are coercing. Then we give it a set of from/via clauses. The from function takes some other type name, and via takes a subroutine reference which actually does the coercion.
However, defining the coercion doesn't do anything until we tell Moose we want a particular attribute to be coerced:
has 'headers' => (
    is      => 'rw',
    isa     => 'My::Types::HTTP::Headers',
    coerce  => 1,
    default => sub { HTTP::Headers->new }
);
Now, if we use an ArrayRef or HashRef to populate headers, it will be coerced into a new HTTP::Headers instance. With the coercion in place, the following lines of code are all equivalent:
$foo->headers( HTTP::Headers->new( bar => 1, baz => 2 ) );
$foo->headers( [ 'bar', 1, 'baz', 2 ] );
$foo->headers( { bar => 1, baz => 2 } );
As you can see, careful use of coercions can produce a very open interface for your class, while still retaining the "safety" of your type constraint checks. (1)
Our next coercion shows how we can leverage existing CPAN modules to help implement coercions. In this case we use Params::Coerce.
Once again, we need to declare a class type for our non-Moose URI class:
subtype 'My::Types::URI' => as class_type('URI');
Then we define the coercion:
coerce 'My::Types::URI'
    => from 'Object'
    => via { $_->isa('URI') ? $_ : Params::Coerce::coerce( 'URI', $_ ); }
    => from 'Str'
    => via { URI->new( $_, 'http' ) };
The first coercion takes any object and makes it a URI object. The coercion system isn't that smart, and does not check if the object is already a URI, so we check for that ourselves. If it's not a URI already, we let Params::Coerce do its magic, and we just use its return value.
If Params::Coerce didn't return a URI object (for whatever reason), Moose would throw a type constraint error.
The other coercion takes a string and converts it to a URI. In this case, we are using the coercion to apply a default behavior, where a string is assumed to be an http URI.
Finally, we need to make sure our attributes enable coercion.
has 'base' => ( is => 'rw', isa => 'My::Types::URI', coerce => 1 );
has 'uri'  => ( is => 'rw', isa => 'My::Types::URI', coerce => 1 );
Re-using the coercion lets us enforce a consistent API across multiple attributes.
This recipe showed the use of coercions to create a more flexible and DWIM-y API. Like any powerful feature, we recommend some caution. Sometimes it's better to reject a value than just guess at how to DWIM.
We also showed the use of the class_type sugar function as a shortcut for defining a new subtype of Object.
(1) This particular example could be safer. Really, we only want to coerce an array with an even number of elements. We could create a new EvenElementArrayRef type and then coerce from that type, as opposed to a plain ArrayRef.
These applications run only on Windows 8, and they have a brand new look and feel compared to older Windows applications. In addition, Windows Store apps follow a different pattern as far as publishing and sales are concerned: they can be distributed, for free or for sale, through a central marketplace known as the Windows Store. For widespread distribution, this channel is a valid alternative to the classic user-driven installation of applications.
Windows 8 specific applications are not limited to personal computers but can also run on a variety of compliant devices, most notably Microsoft Surface devices.
In summary, developers can now create two types of Windows applications: classic Windows apps that also run on older Windows platforms (e.g., Windows 7, Windows Vista) and Windows 8-specific apps that run only on machines and devices equipped with Windows 8. Classic Windows applications can only be distributed manually or via custom setup programs; Windows 8 apps can be distributed via the public Windows Store or under the control of the system administrator.
Windows 8 apps are based on a different API and a different runtime system: the WinRT framework. The new API is offered to developers in a variety of languages. In this article, you'll see how to build Windows 8 applications using JavaScript for the logic, and a combination of HTML and CSS for the user interface. Formerly known as Metro style applications, Windows 8 applications are now officially referred to as Windows Store apps.
Under the Hood of Windows Store Apps
As mentioned, you can write Windows Store apps in different ways. You can use HTML and JavaScript, as discussed in this article. You can also write applications using C#, Visual Basic or C++ as the programming language, and XAML to express the user interface.
All approaches deliver nearly the same programming power. This means that you can build the same application behavior regardless of the language and markup that you choose.
It goes without saying that JavaScript is fairly different from C++ and a good deal of difference also exists between C# and C++. While the programming power is nearly the same in all cases, the actual programming experience may be different as features and capabilities of the languages are not the same. I’ll return to this point later in the article, but I’d like to mention it here: asynchronous programming is much easier to code in C# than in JavaScript, but it can be done equally in both languages.
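While WinJS ships its own promise implementation, the chained .then pattern it relies on can be illustrated with standard JavaScript promises. This is a minimal sketch, and the loadNumber function is invented for the example:

```javascript
// Plain-JavaScript illustration of promise-style chaining, the
// asynchronous pattern used throughout the WinJS API.
function loadNumber() {
    // Stands in for an asynchronous operation such as a network call
    return Promise.resolve(21);
}

loadNumber()
    .then(function (n) { return n * 2; })
    .then(function (result) {
        console.log(result); // 42
    });
```

Each .then callback receives the value produced by the previous step, which is how both WinJS promises and standard promises compose asynchronous work without nested callbacks.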
Windows Store applications rely on a new infrastructure; this is what makes them incompatible with operating systems older than Windows 8. In Figure 1 you can see the Windows 8 runtime stack at a glance. Two parallel stacks live side by side and support two different application development models: one centered on JavaScript and HTML, and one on XAML and C# (and other .NET languages).
The WinRT API supplies several classes of functionality. In particular, you’ll find media, networking, storage and presentation functions.
Any application interaction with the operating system is actually mediated by the WinJS library. Consequently, any Windows Store application written with JavaScript must include the WinJS library and use its API to access available functions.
Your Windows Store application results from the combination of JavaScript files and HTML pages. Before running, the JavaScript code is compiled by the Windows 8 JavaScript engine and access to the WinRT subsystem occurs dynamically as the user interacts with the application.
Your First Windows 8 JavaScript App
To build applications for Windows 8 you need any version of Visual Studio 2012. To start with, the Visual Studio Express 2012 edition for Windows 8 is more than fine. You can get it from:. Once you install it, in an attempt to create your first Windows 8 application using JavaScript and HTML you face the dialog box of Figure 2.
Personally, I have never used (and have no plans to use) the Grid, Split and Fixed Layout templates. I found them too specific and quite difficult to adapt to my own needs, which are often significantly different from those represented in the template. Most of the time I start from the Blank App template; I start from the Navigation App template if I need navigation capabilities between pages. To understand the internal mechanics of WinJS-based apps, the best you can do is just start from the Blank App template.
The app you get from the Blank App template consists of a single and nearly empty page with no visual controls, no widgets and no layout defined. Figure 3 shows the newly created project as it appears in Visual Studio.
It turns out that a Windows 8 app written using JavaScript looks like a self-contained web application made of HTML pages properly styled using CSS and powered by some JavaScript logic. If you are familiar with the web paradigm and client-side web development, then you only need to make sense of the Windows 8-specific API exposed to you via a few JavaScript files to link. Let’s have a look at the HEAD section of the default.html page:
<head>
    <meta charset="utf-8" />
    <title>HelloWin8 Step2</title>

    <!-- WinJS references -->
    <link href="//Microsoft.WinJS.1.0/css/ui-dark.css" rel="stylesheet" />
    <script src="//Microsoft.WinJS.1.0/js/base.js"></script>
    <script src="//Microsoft.WinJS.1.0/js/ui.js"></script>

    <!-- Your app references -->
    <link href="/css/default.css" rel="stylesheet" />
    <script src="/js/default.js"></script>
</head>
The section with WinJS references should be common to all pages and enable the code behind the page to access WinRT functionalities. You should plan to have a different page for each screen of the application. The second section you see in the listing above refers to page-specific links for the page stylesheet and code behind. You stuff in default.css all styles you plan to use on HTML elements. The role of the default.js file is instead subtler.
When you open it up, you find the code in Listing 1 (cleaned up a bit for clarity):
Compared to the wizard-generated code, I removed all comments and empty branches. I also added the onready handler which plugs in the code that bootstraps your application.
Although the file default.js is named after an application’s page, the code it contains has little to do with an individual page-it is the application’s startup code. For this reason, I recommend that you create a separate set of files to contain the application’s logic and keep default.js as the application’s startup code.
If your application needs to do some work upon startup, you should add a handler for the onready event and call a specific function from there.
app.onready = function (args) { YourApp.init(); };
So you may add a new JavaScript file and begin it like below:
var YourApp = YourApp || {};

YourApp.init = function () {
    // Start-up code
};
The YourApp object is intended to be the main repository of the application’s logic. Needless to say, the new JavaScript file must be referenced from the default.html page and likely from any other page you may have in your application.
Building an HTML-based Rich Presentation Layer
The user interface of any page is based on HTML. Therefore you use HTML elements and CSS classes to lay out the graphics as you like it. With the sole help of HTML, however, you can hardly provide a Windows 8 typical experience to the user. Horizontal scrolling, square blocks, fading and hovering effects, rich tooltips are not certainly impossible to reproduce with HTML and JavaScript, but the code required is not trivial to write.
In plain web applications, most developers resort to ad hoc libraries such as jQuery UI to set up nice visual components. In Windows 8, you can rely on a set of native components that, once applied to a basic HTML skeleton, produce a result very close to what you can get with XAML and C#.
To arrange an effective Windows 8-style presentation, you attach ad hoc widgets to basic HTML elements in much the same way it works with jQuery UI and jQuery Mobile. The markup of a typical page is often a plain collection of DIV elements; ad hoc data-* attributes, however, transform that scanty DIV element into a colorful and animated component. As a result, it won’t make any difference for the end user whether the application was built with JavaScript or C#. Let’s examine the following snippet:
<div class="block-element">
    <h3>DESCRIPTION</h3>
    <textarea id="taskSubject" required></textarea>
</div>
<div class="block-element">
    <h3>DUE DATE</h3>
    <div class="block-element" id="taskDueDate"
         data-win-control="WinJS.UI.DatePicker">
    </div>
</div>
<div class="block-element">
    <h3>PRIORITY (1=VERY LOW - 5=VERY HIGH)</h3>
    <input id="taskPriority" type="range" />
</div>
Functionally speaking, the HTML contains two input fields: an input range area and a date picker. In effect, you can see an INPUT element of type range, but there’s no clue of another INPUT element of type date or text. Instead, you see a DIV element decorated with a weird attribute: data-win-control.
The markup above is quite representative of the way in which you author a user interface in a Windows Store JavaScript application. You mix classic HTML elements with DIV elements with some Windows 8-specific behavior attached. It is essential to note that while you use HTML markup and CSS classes to author the user interface, the resulting page is not hosted within a classic web browser. This means, for example, that you may not feel the need to use jQuery to arrange Ajax calls and more importantly, you don’t need the FORM element to post data to some backend layer.
The role of HTML is simply that of a markup language to author the user interface of each application screen. You’ll plan to have an HTML page for each screen of the application; in the HTML page you can use basic HTML elements as well as generic DIV blocks enriched with some WinJS native functionalities. Because the underlying engine that processes the markup is based on Internet Explorer 10, you can make full use of HTML5 attributes in the markup such as the placeholder, autofocus and required attributes, and can use HTML5-specific values with the type attribute of INPUT fields.
To style markup elements, you should use CSS classes knowing that the underlying engine supports the CSS3 syntax to comfortably identify groups of page elements.
In the listing above, you may have noticed the data-win-control attribute. An acceptable value for the attribute is the name of a WinJS control class that internally contains the logic to append to the basic DIV element a richer HTML sub-tree that represents a particular piece of the user interface. For example, the following fragment renders a date picker in native Windows 8 style on top of the specified DIV element.
<div class="block-element" id="taskDueDate"
     data-win-control="WinJS.UI.DatePicker">
</div>
Figure 4 shows the sample input form resulting from the above markup.
The TEXTAREA element is just rendered in the usual way only with a squared skin. The range element is automatically rendered as a slider, in much the same way it happens with any HTML5 page in Internet Explorer 10. The data-win-control attribute set to WinJS.UI.DatePicker produces an ad hoc interface with drop-down lists for month, day and year.
Table 1 summarizes the widgets available to Windows Store applications. All of them are activated on specific markup sub-tree (mostly plain DIV elements) through the data-win-control attribute.
Using HTML to render the user interface doesn’t mean that interactions and page switches are working over HTTP. Therefore, you don’t need to use the FORM element to save or process the content of an input form: all you need is a plain button with its own click handler.
<button id="buttonAddTask" />
You can add inline event handlers to HTML elements through attributes such as onclick and onchange; it is preferable, however, that you opt for a less obtrusive solution and add handlers via script code. This is where the aforementioned bootstrap function comes into play.
YourApp.init = function () {
    // Add event handlers as appropriate
    document.getElementById("buttonAddTask")
            .addEventListener("click", YourApp.addTaskClick);

    // Find a way to initialize the page
};

YourApp.addTaskClick = function () {
    // Do what's required when the user
    // pushes the button
};
In the init function you typically add any initialization code for the application and/or for a given page of the application. Keeping helper JavaScript functions defined as members of a global object (in this case, the YourApp object) helps keep the code clean and avoids polluting the JavaScript global namespace.
WinJS Data Binding
In the bootstrap code of the application (or an individual page of the application) you might want to bind data to user interface elements. In XAML, you use the integrated data binding engine, preferably within a Model-View-ViewModel (MVVM) schema. In web applications, more and more developers are enjoying the KnockoutJS library that replicates the same pattern within HTML pages.
In JavaScript applications written for Windows 8, you can leverage a native engine that mimics the XAML data binding syntax. When it comes to data binding, it is all about having a way to set the data source for a section of the user interface and having a syntax to bind properties of the data source object to user interface elements.
A recommended practice to implement for each screen of the application is the following:
- Define a view model object that defines any data coming in and out of the screen
- Set the view model object as the data source of the form
- Use the WinJS data binding syntax to link user interface elements to data source properties.
A view model object is nearly the equivalent of a class in C# or Visual Basic. How would you get such a class in JavaScript? Here's an example:
var Task = WinJS.Class.define(function () {
    var that = {};
    that.description = "This is my new task";
    that.dueDate = new Date();
    that.priority = 1;
    that.status = "Not Started";
    that.percCompleted = 0;
    that.minPriority = 1;
    that.maxPriority = 5;
    return that;
});
You can use the Task function like a C# class. Here’s how you can set a new instance of Task as the data source of an HTML page:
YourApp.init = function () {
    document.getElementById("buttonAddTask")
            .addEventListener("click", YourApp.addTaskClick);

    // Create the view model object
    var model = new Task();

    // Enable binding on the HTML element(s) of choice
    var root = document.getElementById("main");
    WinJS.Binding.processAll(root, model);
};
The processAll function takes an HTML element and a view model object and visits the entire sub-tree rooted in the element trying to resolve any data binding expressions found along the way. The net effect of the code above is that any data binding expressions underneath the element named main is resolved. Here’s a sample of the WinJS data binding syntax:
<textarea id="taskSubject" required
          data-win-bind="innerText: description"></textarea>

<input id="taskPercCompleted" type="number" min="0" max="100"
       data-win-bind="value: percCompleted" />
The data-win-bind attribute defines a data binding expression. An expression takes the form of:
targetProperty: expression
The target property refers to the HTML property that is going to receive the value calculated out of the expression. In its simplest form, the expression is just a path rooted in the view model object. The preceding example binds the value of the description property from the provided Task object to the innerText property of the TEXTAREA element.
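To make the expression format concrete, here is a conceptual sketch (not the real WinJS implementation) of how such an expression could be resolved against a view model in plain JavaScript; all names are illustrative:

```javascript
// Resolve a "targetProperty: sourcePath" binding expression by hand.
function applyBinding(expression, source, target) {
    var parts = expression.split(":");
    var targetProperty = parts[0].trim(); // e.g. "innerText"
    var sourcePath = parts[1].trim();     // e.g. "description"

    // Walk a (possibly dotted) property path on the source object
    var value = sourcePath.split(".").reduce(function (obj, key) {
        return obj[key];
    }, source);

    target[targetProperty] = value;
    return target;
}

var model = { description: "This is my new task" };
var element = {};
applyBinding("innerText: description", model, element);
console.log(element.innerText); // "This is my new task"
```

WinJS.Binding.processAll does essentially this for every data-win-bind attribute it finds in the sub-tree, with the extra machinery needed to track changes.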
The WinJS data binding also supports a formatting layer. It is a concept similar to converters you find in XAML. Here’s how to pre-process the value of the percCompleted property on the data source to format it to a percentage string:
<span data-win-bind="innerText: percCompleted YourApp.percForDisplay"></span>
A WinJS converter looks like below:
YourApp.percForDisplay = WinJS.Binding.converter( function (value) { return value + "%"; })
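Stripped of the WinJS.Binding.converter wrapper, the converter body is an ordinary function, so its formatting logic can be checked in isolation:

```javascript
// The converter's core logic as a plain function; WinJS merely wraps
// such a function so the binding engine can invoke it before assignment.
function percForDisplay(value) {
    return value + "%";
}

console.log(percForDisplay(0));  // "0%"
console.log(percForDisplay(75)); // "75%"
```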
The WinJS data binding is not bidirectional. This means that in order to process data the user entered in a form you need to read it explicitly from the DOM, as shown in Listing 2.
If input elements are plain HTML elements you just use the DOM interface to read values. If input elements are WinJS widgets (i.e., a date picker was used) then you need to pass through the winControl extra property and then use the documented interface of the widget.
Dealing with List Views
In modern user interfaces, lists are a very common visual element. In Windows 8, in fact, you find strong support for list-based views, as the ListView widget seen in Table 1 demonstrates. A list view is a repeater component that takes a data-bindable template and repeats it for each item found in a collection. The ListView widget in Windows 8 supports two view modes: as a vertical list and as a grid. Once data binding is completed, developers can easily switch from a list-based view to a grid-based view and vice versa. Listing 3 shows how to set up a list view in the HTML page of a Windows Store application.
You create a DIV element and bind it to the ListView widget through the data-win-control attribute. This step is necessary but not sufficient to populate the list with graphical items. First, you indicate a bindable item template; next, you bind the list to a collection of data objects.
You can configure data binding both programmatically in JavaScript code and declaratively via ad hoc markup. In the preceding example, things are done declaratively using the data-win-options attribute. You’ll use this attribute to pass initial arguments to be used in the construction of the widget.
To initialize a ListView widget you must especially indicate the itemDataSource parameter. The parameter refers to the data source that the widget gets data from. In the example above, I’m using a global object that exposes a collection of data. I’ll return to data source objects in a moment.
The itemTemplate parameter refers to the HTML template to be repeated within the list, once for each data item in the data source. The layout parameter refers to the object that describes the layout of the view. It can be a list or grid layout. Finally, selectionMode and tapBehavior refer to the selection mode (single item or multiple items) and the action that should follow the tap or click on an item. The value directSelect indicates that a tap or click produces a double effect: the item is selected and invoked. Alternate values for tapBehavior are toggleSelect (toggles selection of the list item), invokeOnly (item is invoked but not selected) and none.
Data-bound components such as the ListView can't be bound to just a basic JavaScript array; they require the Windows 8-specific binding list object. Let's have a look at the code below:
WinJS.Namespace.define("RssReader", { Items: new WinJS.Binding.List() });
In light of this code, the RssReader.Items refers to a newly created (and thus empty) binding list object. The list object can be populated programmatically using an array-like programming interface; the actual binding to a Windows 8 data-bound component occurs through the dataSource property exposed by binding list objects.
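The value a binding list adds over a plain array is change notification: data-bound widgets subscribe to the list and react as items are inserted or removed. The sketch below is not the real WinJS.Binding.List, just an illustration of that idea in plain JavaScript:

```javascript
// Minimal array-like list that notifies listeners on insertion,
// mimicking what a binding list provides to data-bound widgets.
function SimpleBindingList() {
    var items = [];
    var listeners = [];
    return {
        push: function (item) {
            items.push(item);
            listeners.forEach(function (fn) {
                fn({ type: "iteminserted", index: items.length - 1 });
            });
        },
        getAt: function (i) { return items[i]; },
        get length() { return items.length; },
        addEventListener: function (fn) { listeners.push(fn); }
    };
}

var list = new SimpleBindingList();
var events = [];
list.addEventListener(function (e) { events.push(e.type); });
list.push({ title: "First news item" });
console.log(list.length); // 1
console.log(events[0]);   // "iteminserted"
```

A ListView bound to such a source can refresh itself incrementally instead of re-rendering everything, which is why a plain array is not enough.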
To complete the example, here's some code that connects to an external URL, downloads an RSS feed, and fills up the list view with news.
WinJS.xhr({ url: "" })
    .then(
        function (response) {
            RssReaderApp.parseFeed(response);
        },
        function (error) {
            // Recovery
        });
The WinJS.xhr object is a Windows 8 wrapper for the XmlHttpRequest object and is used to make out-of-band calls to remote URLs. A Windows 8 application is considered a desktop application from a security point of view; therefore there’s no need to be concerned about cross-domain issues.
The response received from the remote endpoint takes the same form you should be acquainted with from using XmlHttpRequest. If you're expecting an XML response, as is the case when you request RSS data, then you get it through the responseXML property of the received response object. If instead you're expecting, say, a JSON response, then you get the text response via the responseText property. Listing 4 shows the code that parses the RSS feed into the RssReader.Items object.
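For the JSON case, the parsing step looks like the sketch below; the response object and its data are faked here so the snippet is self-contained (in the app, the response would come from WinJS.xhr):

```javascript
// Parsing a JSON payload out of a response object's responseText.
var response = {
    status: 200,
    responseText: '{"items":[{"title":"First"},{"title":"Second"}]}'
};

var feed = JSON.parse(response.responseText);
console.log(feed.items.length);   // 2
console.log(feed.items[0].title); // "First"
```

From here, each parsed item would typically be pushed into the binding list so the ListView picks it up.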
Figure 5 gives an idea of a ListView in a Windows Store application.
Application View States
By default, Windows 8 runs native (i.e., Windows Store) applications in full screen. However, applications should be ready to handle four different view states: full screen landscape, full screen portrait, filled and snapped. Any Windows Store application receives proper notification when the user changes the view state. This typically happens when the user rotates a Windows 8 device, thus changing its orientation, or when the user resizes the application's screen to allow two applications to be active and in the foreground at the same time. There's probably not much to say about portrait and landscape full screen modes. Figure 6, instead, illustrates snapped and filled views.
A full screen application can be resized (shrunk, actually) to make room for a second application. The added application is snapped to one edge of the screen, and the application that was originally in full screen mode takes up approximately two-thirds of the screen. This application is now said to be filled. The two applications are separated by a split bar. By acting on the split bar, the user can swap the states of the two applications so that the snapped application becomes filled and vice versa. When this happens, the originally filled application is snapped to the opposite edge of the screen. According to Figure 6, by enlarging the split bar, the filled application is snapped to the right edge.
When snapped, an application is resized to a segment of the screen that is only 320 pixels wide and takes the entire height of the screen. Any application can experience both view states during its lifecycle. It’s the user, not the application, that decides about the display mode. From the application’s perspective, support for the various view states (including snapped and filled view states) is all about being ready to render any relevant content in a screen of different sizes.
The Windows 8 operating system sends ad hoc notifications to running applications to inform them about view state changes ordered by the user. For a developer, there are two main (and non-exclusive) ways to react to these view state changes. First, you can define different CSS styles for each view state you intend to target. Second, you can handle screen resize events and adjust the layout as appropriate. Let's start with the second approach.
Listing 5 shows how to register a handler for the resize event of the host window.
The Windows.UI.ViewManagement.ApplicationViewState object defines the four possible states of a Windows 8 application. The Windows.UI.ViewManagement.ApplicationView.value property returns the current view state at a given time. The resize handler tells you when the view state changes. A change in the view means a change in the size of the screen, and it may mean that you need to rearrange your layout to fit controls and visuals into the new screen size. When an application gets snapped, its area is only 320 pixels wide: it may make sense to hide some widgets or move some controls to the bottom to fit in the reserved screen. Doing so ensures the best possible experience for the user.
You handle the resize event when you need code to adjust the layout. For example, this may mean switching a list view from the grid layout to the list layout or replacing a large item template with a smaller one.
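The decision itself ("which layout for which view state") is plain logic that can be sketched in isolation. The enum values below are invented for illustration; real code would read the state from Windows.UI.ViewManagement rather than define its own:

```javascript
// Self-contained sketch of view-state-driven layout selection.
var ViewState = {
    fullScreenLandscape: 0,
    filled: 1,
    snapped: 2,
    fullScreenPortrait: 3
};

function layoutFor(viewState) {
    if (viewState === ViewState.snapped) {
        return "list"; // only 320 pixels wide: compact vertical list
    }
    return "grid";     // enough room for the grid layout
}

console.log(layoutFor(ViewState.snapped)); // "list"
console.log(layoutFor(ViewState.filled));  // "grid"
```

In a real resize handler you would call such a function and then swap the ListView's layout object (or item template) accordingly.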
If your application screen layout is entirely based on relative sizes and automatically scales down on a smaller screen, then it may be that all you need is to switch to a different CSS stylesheet. If the layout can be adapted to the new size via CSS, then you don’t need a resize handler but can leverage the built-in Windows 8 support for CSS media queries.
Any Windows 8 project has a default.css file that contains the following:
/* Media Queries */
@media screen and (-ms-view-state: fullscreen-landscape) {
    /* styles here */
}
@media screen and (-ms-view-state: filled) {
}
@media screen and (-ms-view-state: snapped) {
}
@media screen and (-ms-view-state: fullscreen-portrait) {
}
The @media keyword indicates a region of the stylesheet that should be applied only when the subsequent query is matched. The media query is a Boolean expression that results from the combination of a few environment variables exposed by the host window.
CSS media queries are a feature supported by most recent browsers, and they're quite useful for building responsive (mobile) websites. The variables exposed by browsers are dictated by the W3C standard that rules media queries. (See http://.) In Windows 8, the host environment of WinJS applications also supports the -ms-view-state variable. As the name suggests, the variable is automatically set to the current view state. Any CSS styles defined in each @media { … } block are applied automatically when the corresponding view state is set.
Next Steps
Windows 8 marks the debut of a significantly updated runtime platform: the WinRT platform. Like the .NET platform, WinRT supports several programming languages. Side by side with popular .NET languages such as C# and Visual Basic, you also find the JavaScript language.
If you're already familiar with C# and XAML, then you probably want to stick with them and just focus on the WinRT API and its differences from the .NET Framework. If you lack a significant background in XAML, though, you might find the WinJS framework (the WinRT framework for JavaScript developers) a very pleasant surprise. This article just scratched the surface of WinJS programming, but hopefully it covered a wide spectrum of the programming features available in Windows 8.
2 Get Started
Where you start in Oracle Visual Builder Studio depends on the type of IDCS role you've been assigned, as well as your membership status within the project.
Access VB Studio
You can access VB Studio using the latest version of Google Chrome, Firefox, and Safari. Google Chrome is the only browser currently certified to work with the VB Studio Designer. Other browsers can be used with the Designer, but some features may not work correctly.
To access VB Studio, you need the service URL, plus your identity domain name, username, and password. If you’re a new user, you can sign in from the Oracle Cloud home page. If you’re a returning user, you can find the service URL from several of the emails you received, the ones with the subject Welcome to Oracle Visual Builder Studio or Verify your Oracle Visual Builder Studio.
During the onboarding process, you'll receive a series of emails, including some optional ones:
After adding a user, the OCI administrator can choose to send an email to the new user:
Description of the illustration oci-reset-password-signin-email.png
This email serves two purposes: to send the password reset URL and to provide the OCI sign-in URL. The Oracle Cloud account name in the email appears in double quotes ("myaccount"). If you bookmark the second URL in this email, the sign-in URL, you won't need the account name for signing in.
After the OCI administrator assigns the VB Studio IDCS role to the new user, the user receives this email:
Description of the illustration oci-access-granted-email.png
This email shows the Oracle Cloud account name (myaccount), the sign-in URL for Oracle Cloud, and the username (don.developer).
After the VB Studio organization administrator adds a new user to a VB Studio project, the new user receives a verification email:
Description of the illustration vbstudio-verification-email.png
The user needs to click the verification link in this email to verify their email address with the service.
After signing in to VB Studio for the first time, a new user will receive this Welcome email, with the VB Studio sign-in URL:
Description of the illustration vbstudio-welcome-new-user-email.png
After an existing user is added to a project, the user receives this email:
Description of the illustration vbstudio-welcome-existing-user-email.png
The email contains project information such as the organization name (My Org), the project name (VisualApp), the project's privacy setting (private), the name of the project owner (Alex Admin), a list of the project's members (Don Developer), and the date and time that the member was added to the project.
To show the news banner on the Organization and Project Home pages:
- On the User Preferences page, click the General tab.
- Select the Show News Banner on Organization and Project Home check box.
Set Up a Git Client
You can use any Git client, such as the Git command-line interface (CLI), to access Git repositories from your computer. However, you cannot access projects, issues, and builds from a Git client.
Git Command-Line Interface
Before you can use a Git client to access your project's Git repository, you must first install and configure it on your computer. The Git command-line interface (CLI) is the most popular Git client.
Here's how to download, install, and configure the Git CLI:
Download and install the Git CLI.
On Windows, use the Git Bash CLI to access project Git repositories. You can download Git Bash (version 1.8.x or later) from.
On Linux and Unix, install Git using the preferred package manager. You can download Git for Linux and Unix from.
The VB Studio pages display your username and email address as the committer's name and email ID. Configure variables to set up your name and email address:
To configure your user name, set the user.name variable:
git config --global user.name "John Doe"
To configure your email address, set the user.email variable:
git config --global user.email "johndoe@example.com"
To disable SSL or configure the proxy server, set the http.sslVerify or http.proxy variables:
git config --global http.sslVerify false
git config --global http.proxy
Tip:
To find out the value of a variable, use the git config <variable> command:
git config user.name
Use Projects
After signing in to VB Studio, you can create a project, open a shared project, or open a project you're a member of.
Create a Project
From the Organization page, you can create different types of projects:
Empty Project
If you haven’t decided which applications you want to upload, or want to start from scratch, create an empty project that has no pre-configured Git repository or any other artifact:
- On the Organization page, click + Create.
- On the Project Details page of the New Project wizard, in Name and Description, enter a unique project name and a project description.
- In Security, select the project's privacy.
- Click Next.
- On the Template page, select Empty Project, and click Next.
- On the Project Properties page, from Wiki Markup, select the project’s wiki markup language. Project team members use the markup language to format wiki pages and comments.
- Click Finish.
With an Initial Git Repository
If you plan to upload application files soon after you create a project, you should create a project with an initial Git repository. You can choose the Git repository to be empty, populated with a readme file, or populated with data imported from another Git repository:
- On the Organization page, click + Create.
- On the Project Details page of the New Project wizard, in Name and Description, enter a unique project name and a project description.
- In Security, select the project's privacy.
- Click Next.
- On the Template page, select Initial Repository, and click Next.
- On the Project Properties page, from Wiki Markup, select the project’s wiki markup language. Project team members use the markup language to format wiki pages and comments.
- In Initial Repository, specify how to initialize the Git repository.
If you prefer a blank repository or want to push a local Git repository to the project, select Empty Repository.
Some Git clients can’t clone an empty Git repository. Select Initialize repository with README file if you’re using such a client. VB Studio creates a readme.md file in the Git repository.
You can edit the contents of the readme.md file after creating the project, or delete the file if you don’t want to use it.
To import a Git repository from another platform such as GitHub or Bitbucket, or from another project, select Import existing repository.
In the text box, enter the external Git repository's URL. If the repository is password protected, enter the credentials in Username and Password. Note that VB Studio doesn’t store your credentials.
- Click Finish.
From an Exported Project
If you’ve created a project before and backed up its data to an OCI Object Storage bucket or an OCI Object Storage Classic container, you can create a project and import the data from the backed up project.
To import project data from an OCI Object Storage bucket or OCI Object Storage Classic container, you need this information:
After you have all the required input values, import the project:
- On the Organization page, click + Create.
- On the New Project wizard's Project Details page, in Name and Description, enter a unique project name and a project description.
- In Security, select the project's privacy setting.
- Click Next.
- On the Template page, select Import Project, and click Next.
- To import the project from an OCI Object Storage bucket, in the Project Properties page's Storage Connection section, in Account Type, select OCI and enter the required details:
- In Tenancy OCID, enter the tenancy's OCID copied from the Tenancy Details page.
- In User OCID, enter the user's OCID value (for a user that can access the bucket).
- In Home Region, select the OCI account's home region.
- In Private Key, enter the user's private key (for a user who can access the bucket).
- In Passphrase, enter the passphrase used to encrypt the private key. If a passphrase wasn't used, leave the field empty.
- In Fingerprint, enter the private-public key pair's fingerprint value.
- In Compartment OCID, enter the compartment's OCID copied from the Compartments page.
- In Storage Namespace, enter the storage namespace copied from the Tenancy Details page.
- To import the project from an OCI Object Storage Classic container, in Account Type, select OCI Classic. Then, enter the required details:
- In Service ID, enter the value copied from the last part of the REST Endpoint URL field on the Service Details page. For example, enter Storage-demo12345678.
- In Username and Password, enter the user credentials for a user who can access the archive file.
- In Authorization URL, enter the URL copied from the Service Details page's Auth V1 Endpoint field:.
- Click Next.
- On the Project Properties page, from Wiki Markup, select the project’s wiki markup language. Project team members use the markup language to format wiki pages and comments.
- In Container, select the storage bucket or the container where the data was exported.
- In File, select the exported file.
- Click Finish.
If the import fails, an empty project is still created. You can try to import the data again without creating another project. To check the import log, go to Project Settings and open the Data Export/Import page's History tab.
From a Project Template
Using a project template, you can quickly create a project with predefined and populated artifacts, such as Git repositories and build jobs. When you create a project from a project template, the defined artifacts of the project template are copied to the new project. If you don’t want to use a copied artifact, you can delete it. Note that after you create a project from a template, updates made to the project template won’t be reflected in the project you created.
These types of project templates are available:
- On the Organization page, click + Create.
- On the Project Details page of the New Project wizard, in Name and Description, enter a unique project name and a project description.
- In Security, select the project's privacy.
- Click Next.
- On the Template page, select the project template, and click Next:
- To create a project for a visual application, select the Visual Application template, follow the instructions in Create a Project for Visual Applications, and see what was created for you in the project.
- To create a project for an application extension, select the Application Extension template, follow the instructions in Create a Project for Fusion Application Configuration, and see what was created for you in the project.
- To create a project from a private template, select Private Template, and click Next. On the Private Template Selection page, enter the private key in Private Key, and click Next.
- On the Project Properties page, from Wiki Markup, select the project’s wiki markup language. The markup language is used to format wiki pages, and comments on Issues and Merge Request pages.
- Click Finish.
In the new project, these artifacts are copied from the project template:
Open a Project
You can open a project only if you're a member or an owner, or if the project is shared. To open a project, click its name as it appears on the Organization page. To search for a project, use the filter toggle buttons or the search box:
To quickly access a project, click Favorite and add it to your favorites list. To see your favorite projects, click the Favorites toggle button.
If you’re invited to join a project, you’ll find the project link in the email you received when you were added to the project.
To switch to another project from an open project, click next to the project name. From the menu, click the project name to open it.
After opening the project, you land on the Project Home page.
Review a Project’s Summary
From the Project Home page, you can see a summary of the project's actions, repositories, team members, and statistics:
The Project Home page remembers the last opened tab (Repository, Graphs, or Team) in the current browser session and opens it automatically the next time you open the Project Home page. If you sign out or close the browser and then sign in and open the Project Home page, the Repositories tab opens by default.
Add and Manage Project Users
After creating a project, you'll probably want to add team members to collaborate with. You may also want to allow or limit their access to project data or actions they can perform on the project.
You must have the Project Owner project membership to add and manage project users (team members), which you do from the Project Home page's Team tab. Before adding a user, make sure that the user is a member of the identity domain, is assigned the DEVELOPER_ADMINISTRATOR (Developer Service Administrator) or the DEVELOPER_USER (Developer Service User) identity domain role, and is assigned one of these project memberships:
- Project Owner
- Developer
- Limited Developer
- Contributor
If the user you want to add doesn't have the required identity domain role or project membership, contact your organization administrator.
Note:To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.
From the Team tab, you can manage project users, as shown in this table:
Add Users from Another Project
If the users that you want to add to your project are members of another project that you can access, you can copy that project’s user list and add the users to your project:
- Open the project that has users already added.
- In the navigation menu, click Project Home.
- Click the Team tab.
- Click Export.
- In the Members List Export dialog box, copy the names of project members.
- Click OK or Close to close the dialog box.
- Open the project where you want to add the copied users.
- In the navigation menu, click Project Home.
- Click the Team tab.
- Click + Create Member.
- In the New Member dialog box, select the Multiple Users check box.
- In the Username List text box, paste the copied names of project members.
- Click Add.
Enlighten Progress Bar
Project description
Overview
Enlighten Progress Bar is a console progress bar module for Python. (Yes, another one.)
The main advantage of Enlighten is that it allows writing to stdout and stderr without any redirection.
Documentation
Installation
PIP
$ pip install enlighten
EL6 and EL7 (RHEL/CentOS/Scientific)
(EPEL repositories must be configured)
$ yum install python-enlighten
Fedora
$ dnf install python2-enlighten
$ dnf install python3-enlighten
Examples
Basic
For a basic status bar, invoke the Counter class directly.
import time
import enlighten

pbar = enlighten.Counter(total=100, desc='Basic', unit='ticks')
for num in range(100):
    time.sleep(0.1)  # Simulate work
    pbar.update()
Advanced
To maintain multiple progress bars simultaneously or write to the console, a manager is required.
Advanced output will only work when the output stream, sys.stdout by default, is attached to a TTY. get_manager can be used to get a manager instance. It will return a disabled Manager instance if the stream is not attached to a TTY and an enabled instance if it is.
import time
import enlighten

manager = enlighten.get_manager()
ticks = manager.counter(total=100, desc='Ticks', unit='ticks')
tocks = manager.counter(total=20, desc='Tocks', unit='tocks')

for num in range(100):
    time.sleep(0.1)  # Simulate work
    print(num)
    ticks.update()
    if not num % 5:
        tocks.update()

manager.stop()
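The TTY check described above can be sketched in plain Python. This is a simplified stand-in for the behavior get_manager is documented to have, not Enlighten's actual implementation; the function name pick_manager is ours:

```python
import io
import sys

def pick_manager(stream=None):
    # Simplified sketch: the manager is enabled only when the
    # output stream (sys.stdout by default) is attached to a TTY.
    stream = stream if stream is not None else sys.stdout
    enabled = hasattr(stream, "isatty") and stream.isatty()
    return {"stream": stream, "enabled": enabled}
```

Because a pipe or a StringIO buffer is not a TTY, a manager created against one would come back disabled, which is why redirected output stays clean.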
Counters
The Counter class has two output formats, progress bar and counter.
The progress bar format is used when a total is not None and the count is less than the total. If these conditions are not met, the counter format is used:
import time
import enlighten

counter = enlighten.Counter(desc='Basic', unit='ticks')
for num in range(100):
    time.sleep(0.1)  # Simulate work
    counter.update()
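The format-selection rule above can be expressed as a small helper (a sketch of the described behavior, not Enlighten's internal code; the function name is ours):

```python
def output_format(total, count):
    # Progress-bar format only when a total is known and the
    # count is still below it; otherwise the counter format.
    if total is not None and count < total:
        return "bar"
    return "counter"
```

So a Counter created with no total, or one whose count has reached its total, falls back to the plain counter display.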
Additional Examples
- basic - Basic progress bar
- context manager - Managers and counters as context managers
- floats - Support totals and counts that are floats
- multiple with logging - Nested progress bars and logging
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
- Ryan O'Connor , Tom Kopchak
- Jul 11, 2017
- Tested on Splunk Version: N/A
While Splunk is well equipped for ingesting large quantities of data without issue, it can be significantly more difficult to extract the original raw data from Splunk (if you ever need to). This blog post is based on a true “worst case scenario” story when an excessive amount of bad data was accidentally ingested into Splunk and then how it was eventually handled.
In the Splunk world, it’s normal to find yourself dealing with massive amounts of data - that’s what Splunk was designed for after all. While Splunk is well equipped for ingesting large quantities of data without issue, it can be significantly more difficult to extract the original raw data from Splunk (if you ever need to).
In many respects, this makes sense. Splunk is primarily designed to be a log archive and analysis platform. The true power of Splunk comes from being able to return the needle in the haystack with some cool visualizations along the way. But, what if you find yourself needing an inordinate amount of hay? Can Splunk be coerced to export a massive amount of data?
You might be asking yourself, “Hey Tom and Ryan, why would you ever want to do such a thing?” Well, we’re glad you asked. This blog is actually based on a true story where a very important Technology Add-on (TA) went missing from an Index Cluster. That’s not great in general, but especially when you think about the Splunk data pipeline. In our specific case, this missing TA equated to 4 days worth of data not being extracted properly. That resulted in 4 days worth of data that wasn’t CIM compliant, and subsequently 4 days of data that wasn’t populating a customer’s Splunk App for Enterprise Security.
The Palo Alto Networks Add-on for Splunk is the add-on that went missing in our case. This add-on requires that data is ingested via a very specific sourcetype. When it passes through the Indexing tier, it is broken out into different sourcetypes to be analyzed in Splunk. For version 5.x or later of this add-on, the incoming syslog must be configured to use the sourcetype pan:log. The add-on will automatically break this up into different sourcetypes, such as pan:config, pan:traffic, and pan:threat.
Unfortunately, if you don’t follow the add-on installation instructions and pick a different sourcetype, none of this magic will happen. Additionally, if you do pick the right sourcetype, but don’t have the add-on present (which is what happened in this case), you will wind up with a bunch of data that you can’t effectively use in Splunk. Garbage in, garbage out.
There is another issue that customers typically have with their data that made this problem truly difficult to tackle: how to store backups of syslog data. Unfortunately for the customer we were working with, the Palo Alto logs were approximately 80-90% of their daily license of 300GB per day. It's fairly unreasonable to expect a customer to pay for storage not only to index all of that data to meet their retention policies, but also to store a raw copy of the data separately. One of the reasons that is so ludicrous is that Splunk technically stores your data in its raw format in a default field called _raw - so there really is no need.
So, what do you do if you wind up in a similarly all-around bad situation? You find a way to export that data, despite the fact that you may have been told “it’s not possible” by some fairly reputable sources.
Fortunately, Splunk has several mechanisms available to return the raw events from a search. For a small dataset, this can be done through SplunkWeb when viewing the search results. For a larger dataset, this often will require the search to be run a second time (even if it was already completed), in order to ensure all the events are returned properly. When dealing with exports containing millions of events or hundreds of gigabytes of data (where a search to export this data could conceivably take days to run), this approach isn’t all that practical.
To address this issue, there are other methods available to export data from Splunk, the full list of which can be referenced here. According to Splunk, “for large exports, the most stable method of search data retrieval is the Command Line Interface (CLI)”. Since we were facing what appeared to be over a terabyte of exported events, this seemed to fit into the “large exports” category. When working with a Splunk Cloud deployment, you don’t technically have CLI access. But this is Splunk - there has to be a way to get this to work.
One of the most powerful Splunk features is the Splunk CLI. While this is not available locally for Splunk Cloud, you can request access to the Splunk Cloud management port, which we do for all of our customers. With that level of access, we have the ability to run search commands on a remote Splunk instance - which means that the CLI export method is available with Splunk Cloud after all.
We started by spinning up a VM in the lab with a lot of disk space. For this example, we calculated approximately 300 GB of raw syslog per day over the course of a 4-day window, which comes to roughly 1.2 TB of space. If you need to do a similar calculation, you can use the following search:
index=_internal component=metrics series=<your_sourcetype_here> | stats sum(kb) as total_kb | eval total_gb=total_kb/1024/1024
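The eval in that search is just a kilobytes-to-gigabytes conversion; the same arithmetic in Python (helper name is ours):

```python
def kb_to_gb(total_kb):
    # Same conversion as the SPL eval: total_gb = total_kb/1024/1024
    return total_kb / 1024.0 / 1024.0
```

Multiplying the per-day figure by the number of days in the bad window gives the disk you need to provision before starting the export.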
In this lab instance, we installed a copy of Splunk to match the customer’s Splunk Cloud instance, which, at the time of this writing, was 6.4.x. (Note: we initially tried this with a 6.5 instance, but due to some SSL changes in that version, it was unsuccessful. Rather than troubleshooting that issue, matching the version was the simplest solution). From this system, we were able to test running searches against the remote Splunk instance using a sample command such as the following:
$SPLUNK_HOME/bin/splunk search "index::pan_logs sourcetype::pan:log" -output rawdata -maxout 0 -max_time 0 -uri > output.csv
You can run this search to verify your basic connectivity and confirm you are getting data returned. However, you probably don’t want to run this as-is because by default it will return everything in that sourcetype/index combination. In the case of a massive amount of data, timeouts and bandwidth will be your enemy.
Once we had our CLI Access and a healthy amount of storage, that was just about all we needed to get off the ground and start exporting our data. All we needed was a little bit of python, which can be found below.
import datetime
import os
import subprocess
import re
import getpass

# STUFF THAT NEEDS TO BE DONE BEFORE RUNNING THIS SCRIPT #

# Specify the location of your Splunk Home Directory
splunk_home = '/opt/splunk/bin/splunk'
search_string = 'index::pan_logs sourcetype::pan:log'

# Specify a local username and password so we don't need to do Two Factor.
# Replace this with a username for your environment. Python will prompt for a password.
user = "admin"
p = getpass.getpass()
password = p

### Define the Stopping time
t_stop = datetime.time(14, 30, 00)
d_stop = datetime.date(2017, 4, 4)

### Define the Starting time
t_start = datetime.time(17, 30, 00)
d_start = datetime.date(2017, 4, 4)

# --------------------------------------------------------------------#

dt_stop = datetime.datetime.combine(d_stop, t_stop)
dt_start = datetime.datetime.combine(d_start, t_start)

format = "%m/%d/%Y:%T"
i = 0

while True:
    # If the Start Time and Stop time are the same, end the loop
    if dt_start == dt_stop:
        print "This is the end"
        print dt_start.strftime(format) + " is the same as " + dt_stop.strftime(format)
        break
    else:
        # Find the "earliest" time for the search (15 minutes before the current start time)
        dt_start_early = dt_start - datetime.timedelta(minutes=15)

        # Convert times to a readable format
        dt_start_early_string = dt_start_early.strftime(format)
        dt_start_string = dt_start.strftime(format)

        # Ensure the file exists that we'll export to
        file = open(str(i) + "_" + str(i + 1) + "export.csv", "w+")
        file.close()

        # Spawn a process to run a search
        p = subprocess.Popen([splunk_home + " search \"" + search_string +
                              " earliest=\"" + str(dt_start_early_string) +
                              "\" latest=\"" + str(dt_start_string) + "\"\"" +
                              " -output rawdata -maxout 0 -max_time 0 -auth " +
                              user + ":" + password + " -uri > " +
                              str(i) + "_" + str(i + 1) + "export.csv"],
                             stdout=subprocess.PIPE, shell=True)
        output, err = p.communicate()

        # Set the next start time to the earliest time of our last search
        dt_start = dt_start_early

        # Increment our counter for the next file name
        i = i + 1
This situation was made extra exciting because we ended up running two Splunk instances on the data exfiltration node at the same time, one for outputting data to files and one to re-index data as those files were written. We don’t recommend this scenario long term, but it helped us set this up over a weekend and when we came back in on Monday, everything was solved. Theoretically, if you had everything in place right away, you could do this with one UF, or one HF. We were figuring this out in stages however, so we setup a UF to start downloading the data as we knew it would take a while to export the quantity of data we needed. We utilized a second Splunk instance to start testing ingesting the data to a test index to make sure it was working as expected and to fine-tune some hostname props/transforms settings. This was probably overkill in hindsight, but if you go to implement this, keep in mind you may be able to streamline things even more.
1.) This code runs a search for the data that was indexed incorrectly. In our case, anything that was indexed as “pan:log” was actually unusable. Data should have ended up in the indexes as pan:threat, pan:traffic, etc. So to identify the “bad” data that we needed to re-index, we set our search_string to “index::pan_logs sourcetype::pan:log”. The code also has two variables for “Start” and “End” times, so you can say “I want to look for this data over this span of time”. Both times should be in increments of 15 minutes. So if your most recent “bad” event is at June 4, 2017 at 10:01AM, you’ll set your start time to June 4, 2017 at 10:15AM. If the oldest bad event is June 1, 2017 at 9:37AM, you’ll set your end time to June 1, 2017 at 9:30AM. This way you make sure to capture all bad events.
2.) Next when you run the script, it will export data in 15 minute chunks and leave them in the same folder that you ran the script in. This helps alleviate any issues with timeouts that can result in trying to download 1.2 TB of data all at once. (Especially if you work from home and are plagued with a terrible internet connection.)
3.) You’ll need to decide how to re-index the data. For this script, you’ll setup a File Input to read in the files from whatever directory you exported them to. Ours looked like the following:
[monitor:///opt/splunk/export/*.csv]
disabled = false
followTail = 0
sourcetype = pan:log
index = pan_logs
blacklist = \.gz$
4.) Lastly, you’ll need to decide whether a Heavy Forwarder or a Universal Forwarder will do the work. We used a Heavy Forwarder for some very specific reasons that I won’t get into here, especially because Heavy Forwarders are, in general, not recommended. The point being, we displayed a Heavy Forwarder in this diagram, but you could also do this with a single Universal Forwarder if you didn’t need any custom props/transforms (we did in this case).
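The 15-minute edge rounding described in step 1 (round the newest bad event up, the oldest down) can be sketched as a small helper; the function name and direction flag are ours, not part of the export script:

```python
import datetime

def round_to_15min(dt, direction):
    # Round a timestamp to a 15-minute boundary: "up" for the
    # newest bad event (start time), "down" for the oldest (end
    # time), so no events at the window edges are missed.
    floored = dt.replace(minute=(dt.minute // 15) * 15,
                         second=0, microsecond=0)
    if direction == "down" or floored == dt:
        return floored
    return floored + datetime.timedelta(minutes=15)
```

With the blog's example, 10:01AM rounds up to 10:15AM and 9:37AM rounds down to 9:30AM, matching the start and end times you'd feed the script.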
Last but not least, we should at least show the results from our efforts. Below is a look at the Palo Alto logs from the 4 days in question:
Since the logs that we exported and re-indexed had a different source than the logs previously indexed, we could look across all Palo Alto logs and use a timechart command with a count to create a nice visualization. The search we used and graph are included here. The only sourcetype we filtered out was “pan:log” since we didn’t care about those events. You’ll notice that the blue line shows a count of events that came from this script. The yellow line is the count of events that came from the standard process. So what we can see is that we successfully backfilled this timeframe with all 1.2TB of data.
index::pan_logs sourcetype!=pan:log | fields source | eval source_location=case(LIKE(source,"/opt/splunk/%"),"script", 1=1, "standard") | timechart count by source_location
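The case(LIKE(...)) classification in that search maps directly to a one-line Python equivalent (helper name is ours):

```python
def source_location(source):
    # Mirrors case(LIKE(source, "/opt/splunk/%"), "script", 1=1, "standard"):
    # events whose source path starts with /opt/splunk/ came from the
    # export script; everything else arrived via the standard process.
    return "script" if source.startswith("/opt/splunk/") else "standard"
```

This is what lets the timechart split the backfilled events from the ones that were indexed normally.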
Hopefully this helps someone else in the future who ends up in a “worst case scenario” after accidentally ingesting an excessive amount of bad data without a backup. This is the first time we had to tackle this problem, and though it seemed possible in theory, it’s always nice to see theory meet practice. If anyone has tackled this problem before, finds this useful, or has any comments or questions, feel free to comment below.
|
Project Server Events
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
The Microsoft Office Project Server 2007 Eventing service provides a mechanism to extend Project Server by adding new business logic. Project Server raises pre-events and post-events when there are changes in business object data. When you associate an event handler with an event, Project Server executes the event handler when that particular event occurs. Windows SharePoint Services 3.0 stores the list of registered Project Server event handlers.
In earlier versions of Project Server, the only way to invoke custom code based on events was to use SQL Server triggers. To write triggers, you need an intimate knowledge of Project Server internals, and triggers can adversely affect system performance. As a result, partners and ISVs often have to write code for the earlier Project Server versions that replaces or circumvents Project Server functionality.
Server-side events provide the "hooks" that allow third-party developers to extend the Project Server platform with new functionality. Custom processes can include additional validation routines, data processing, notification services, and initiating and sustaining workflows. Server-side events make Project Server a viable platform for integration with line-of-business (LOB) applications and for custom solutions in enterprise project management.
The Project Server Architecture and Programmability topic describes the overall architecture of the Project platform and introduces Project Server events. The Project Server eventing architecture includes the following features:
The Eventing service is part of the core Shared Services Provider (SSP) application.
Business objects in the Project Server middle tier register events with the Event Manager in the Eventing service.
The Event Manager handles loading and execution of event handlers.
Project Web Access includes an event catalog that shows the Project Server events that are available and provides the user interface to register event handlers.
Project Web Access maintains event handlers in the associated Windows SharePoint Services 3.0 configuration database at the Project Web Access site level. The event catalog stores associations of Project Server events and event handlers.
You can have independent custom event handlers for each Project Web Access site that is hosted on the same SSP.
Figure 1 shows the Project Server Eventing and Queuing services, which by default are started automatically.
Event Sources
An event source is typically a business object in the Project Server middle tier. Business objects (Project Server entities) are represented by Microsoft .NET Framework 2.0 classes. An event definition in a business object uses a delegate class to maintain a collection of methods that are called when the event occurs. The events are defined in terms of the related delegate.
Figure 2 shows the sequence of processes involved in managing and raising events.
Project Server creates an instance of a business object. The business object (event source) registers its events with the Event Manager.
The Event Manager queries the event catalog for associated events and event handlers.
If event handlers are available, the Event Manager creates an instance of the event receiver for the business object.
The event receiver is bound to the event source. Whenever the business object raises an event, it triggers the event receiver, which then runs the event handler.
Microsoft Office Project Professional 2007, Project Web Access, and third-party applications can raise server-side events when they interact with Project Server. Applications interact with Project Server by using the Project Server Interface (PSI). PSI calls invoke business objects, which are the event sources.
Event Catalog
The event catalog contains the registered events and event handler associations. If an event has an associated event handler, then the event catalog contains the following information:
Event source type: case-sensitive, fully qualified class name of the event type.
Event handler assembly: the strong name of the event handler assembly file that exists in the global assembly cache or bin directory.
Event handler type: case-sensitive, fully qualified name of the event handler class in the event handler assembly.
Event handler method: case-sensitive name of the event handler method.
The event catalog is part of the Windows SharePoint Services configuration database. When a change occurs in the event catalog, Windows SharePoint Services refreshes the in-memory catalog on the application server. This means that you can add new event handlers and remove or deactivate existing event handlers without restarting the Project Server computer.
Pre-Events and Post-Events
There are two types of events: pre-events and post-events. Business objects raise pre-events before saving data to the database. Event handlers can cancel an operation that raises a pre-event. Post-events cannot be canceled; business objects raise post-events after saving data changes in the database.
Project Web Access shows a list of events in the event catalog for each business entity. Figure 3 shows that you can expand the business entities such as LookupTable, Project, and Reporting to see their events.
For example, Project Publishing is a pre-event. You can stop a project from being published if an event handler determines that certain requirements are not met in your custom business rules for publishing projects. Project Published is a post-event and cannot be canceled; therefore, you cannot use Project Published to stop a project from being published. However, your Published event handler can start a workflow after a project is published.
For a list of all Project Server events, see PSEventID.
Event handlers for Project Server are class library assemblies built on the .NET Framework 2.0. When you develop an event handler, you create a class that inherits from an event receiver class such as CubeAdminEventReceiver or ProjectEventReceiver. The base event receiver classes are all in the Microsoft.Office.Project.Server.Events namespace, which is in the Microsoft.Office.Project.Server.Events.Receivers.dll assembly installed on the Project Server computer.
If a Shared Service Provider hosts two or more Project Web Access sites, then you can have different custom event handlers for each site. Event handlers can run either synchronously or asynchronously using the .NET Framework threading services. Synchronous handlers execute on the same thread as the calling code. The calling method waits until the event handler returns control. Asynchronous event handlers are queued on a separate thread pool.
For more information about asynchronous programming, see Asynchronous Programming Overview in the MSDN Library.
For information about developing, registering, and debugging a simple event handler, see How to: Write and Debug a Project Server Event Handler. For an example that changes e-mail using the NotificationsSending event, see How to: Customize E-Mail for Project Server Notifications.
Multiple Event Handlers
You can add more than one event handler to a Project Server event. When you register an event handler assembly and class using Project Web Access, you set the order of execution for that event handler. The order can be from 1 to 999.
Event Handler Failure
If an event handler fails when it is one of a series registered for a single event, the result depends on whether the event is a pre-event or post-event.
Pre-event: If an event handler fails, the rest of the event handlers registered for the same event are not executed. The pre-event is effectively canceled.
Post-event: If an event handler fails, the remaining event handlers registered for the same event are executed.
Event Data
Event handlers run in a separate application domain. An event source passes data when it calls the event handler. For example, an event handler for the OnPublishing pre-event overrides the Project object's base method for the event as follows.
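A minimal C# sketch of such an override is shown below. The OnPublishing signature follows the ProjectEventReceiver base class and the ProjectPrePublishEventArgs properties (ProjectName, Cancel, CancelReason) described in this article; the assumption that PSContextInfo resolves from the Microsoft.Office.Project.Server.Library namespace, and the "DRAFT" naming rule itself, are illustrative only, not part of the original article:

```csharp
using Microsoft.Office.Project.Server.Events;
using Microsoft.Office.Project.Server.Library;

// Sketch of a ProjectPublishing pre-event handler.
// The "DRAFT" check is a hypothetical business rule for illustration.
public class BlockDraftPublishEventReceiver : ProjectEventReceiver
{
    public override void OnPublishing(PSContextInfo contextInfo,
                                      ProjectPrePublishEventArgs e)
    {
        // contextInfo carries the caller's user ID and name for
        // authorization checks; e carries the project-specific data.
        if (e.ProjectName != null && e.ProjectName.StartsWith("DRAFT"))
        {
            // Setting Cancel stops the publish operation;
            // CancelReason is reported back to the event source.
            e.Cancel = true;
            e.CancelReason = "Projects named DRAFT* may not be published.";
        }
    }
}
```

Because the event arguments object is passed by reference, the event source sees the Cancel and CancelReason values the handler sets even though the method returns void.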
All events pass contextInfo, which includes the user's ID, name, and other data for authorization checks. Some events include additional data in the e parameter, such as a DataSet, a project ID, or other arguments in the PSI call that triggered the event. Some events consume data modified by an event handler; other events disregard modified data.
The Microsoft.Office.Project.Server.Events namespace includes event receiver classes along with the pre-event and post-event argument classes and event handler delegates. The event class descriptions show which events read changes in the event arguments and the properties that are available. The event arguments parameter is an object that is passed by reference. Therefore, even though the event handler returns void, the event source gets any changes the handler makes to the event arguments object. To find out what data are in the pre- and post-event arguments, check the properties in the event arguments classes. For example, the ProjectPublishing pre-event arguments include all of the properties in the PreEventArgs, ProjectPreEventArgs, and ProjectPrePublishEventArgs classes.
Following is the general sequence of processes for an event handler:
Call a PSI method. The PSI method is an API for a Project Server business object, such as Project. For example, the Project.QueuePublish method includes projectUid as one of the arguments.
The business object (the event source) raises an event. For example, the QueuePublish method raises the ProjectPublishing event and determines if there are registered event handlers for that event.
The event source passes the PSContextInfo and event argument objects by reference to the registered event handler. For example, the Microsoft.Office.Project.Server.Events.ProjectPrePublishEventArgs object includes the ProjectGuid, ProjectName, and WssUrl properties. The ProjectPreEventArgs object adds the IsWorkingStore and ProjectDataSet properties, and all pre-events add the Cancel and CancelReason properties in the PreEventArgs object.
The event handler processes the custom business rules, and then it can call other processes and modify the event arguments.
If the event is a pre-event, the event handler determines whether the process should be canceled. If so, the event handler sets the Cancel property to true.
The event source checks the value of the Cancel property (if it is from a pre-event handler). If the pre-event is not canceled, the event source continues with the modified event arguments.
https://msdn.microsoft.com/en-us/library/ms481079(v=office.12).aspx
Animal Rights vs. Animal Welfare: Making the Case
May 18, 2008. Brighton Town Lodge, Rochester, NY. Kindly transcribed by David Stasiak. Some additional editing by Alex Chernavsky.
Original audio recording is here (2 hrs 18 mins; 16MB). Microsoft Word version is here.
Gary Francione: Thank you very much. I just wanted to correct one thing: the
first time animal rights was taught was when I was teaching at Penn. By the time
we got to Rutgers, it was becoming a more popular sort of thing. But I taught it
in 1984, animal rights theory, at the University of Pennsylvania – which I
believe was the first time it was taught in an American law school.
The title of the talk is, “Animal Rights vs. Animal Welfare: Making the Case”,
which is actually Ted’s title. He sent me an email which said, “I’m going to
give it this title unless you tell me not to”. And my alternative was –
Ted Barnett: Well, actually, it was [my wife] Carol’s idea [laughter]
Gary: [some inaudible] Ted purported it to be his idea. [inaudible] And my
alternative title was, “Animal Welfare: Scourge of the Earth”. [laughter] I
thought his was more neutral, and we’ll go with Ted’s. [inaudible]
Gary: What I want to talk about tonight is basically the position that we hear a
great deal: that there’s no difference between the animal welfare and the animal
rights position; that we need to pursue welfarist regulation, i.e. to make
animal exploitation more humane, in order to help animals now; that welfare
regulation will lead to abolition in the long run; that we need to pursue
regulation because that is going to lead to abolition in the long run. Part of
this position is that the animal rights or the abolitionist position is utopian
or ideal, and that it doesn’t really provide anything practical to do, and that
it’s just too idealistic and too non-practical. And that even if we are inclined
with the abolitionist or animal rights perspective, we ought to follow this
‘two-track’ approach: that we ought to pursue veganism, but we also ought to
pursue animal welfare regulation. That’s the position that I want to talk about.
And that position, I admit, has a lot of intuitive plausibility. You know, “Yeah
we got to do something to help now, we all feel frustrated, billions of animals
are suffering...”.
The answer? Well what I’m talking about tonight is why I think those positions,
the various parts of what I just described as the position I want to talk about,
are wrong. Let me start off by saying that there is a huge difference between
animal welfare and animal rights, a huge difference. Let’s think about two
concepts: the concept of use and the concept of treatment. These are different
concepts. Whether we use animals for a particular purpose is a different
question from how we treat them pursuant to that purpose. Whether, for example,
we use animals for food is a different question from whether we keep them in
intensive situations, or whether we keep them in free range situations, or
whatever. They’re different questions.
And by the way, I’m going to talk, and then we’ll have time for questions,
answers, criticisms, slurs, whatever [laughter]. Whatever you want to do, we’ll
do it. But what I do want is if there’s something that’s not clear, just pop
your hand up, because I don’t want anything to be unclear. As a matter of fact,
let me say this: I had a PowerPoint presentation done for tonight. And last
night, I looked at it, and I didn’t like it, precisely because I was afraid that
it wouldn’t really be everyone’s cup of tea. It was really heavy duty into a lot
of philosophy. And I thought, no, I didn’t want to do that, so I redesigned it.
Unfortunately, I didn’t do a PowerPoint presentation.
There is something that I’m going to show. There’s a video I’m going to show
you, which is why I had put them to the trouble of setting up the equipment
anyway. But I’m not going to be doing a PowerPoint presentation. I do want you
to understand the points that I make, so if something is not clear then just let
me know, and I’ll be more than happy to clarify it.
So the distinction between use and treatment. This is very, very important to
understand. Whether we use animals is a different question from how we use them.
We use them for purpose “X” is different from how we treat them pursuant to the
use for purpose “X”. The animal welfare position focuses on treatment,
basically. It focuses on treatment, and doesn’t really look at use. In other
words, the animal welfare position is that the use of animals per se doesn’t
raise the primary moral question – or for some welfarists, even a moral
question. The issue is how we treat animals. It is all right to use animals for
human purposes, as long as we treat them a particular way. As long as we accord
whatever level of protection the particular welfarists advocate, as long as we
give that level of protection, then our use of animals is morally acceptable. As
opposed to the animal rights position which is: it doesn’t really matter how
well we treat them. We have no moral justification for using them under any
circumstances.
Those are very, very different positions. Now I want to discuss a little bit
about the history of the animal welfare position, so you can see how it got to
where it is today. Before the 19th century, basically animals were regarded as
things. They were excluded completely from the moral and legal community. They
were regarded as not having any moral boundary whatsoever. We were thought not
to have any moral obligations that we could owe directly to animals. Now, that
doctrine took two forms – two or three depending on how you count, but let’s
just say two forms. I’ll describe them, you can figure out whether there are two
or three, but that’s really a sub-issue.
The first way of thinking about animals as things is exemplified by a guy like
René Descartes, who was a late 16th century early 17th century thinker, who
basically maintained that animals had no minds whatsoever. And he had all sorts
of reasons for this. But he basically did not think that animals had any
interests whatsoever – that is, there was nothing they preferred, desired, or
wanted. When I use the expression, “to have interests”, when I say, “An animal
has an interest”, what I mean is that the animal prefers or desires or wants
something. And I think we all understand what that means. We all prefer, desire
or want something – we want cheaper things, for example. This is what we call an
interest.
Descartes didn’t think animals had interests. He thought that they were
automatons, he thought that they were machines. He actually called them
automatons. Actually, he got the idea from walking around the royal gardens in
France at that time. He would see these hydraulic figures, and they were quite
elaborate. When you walked near these hydraulic figures, the pressure from your
feet would cause the water to go into these hydraulic devices, which were large
statues. And they would move, and they would seem to be alive. That’s what
Descartes thought. Descartes thought just as humans built these machines that
appear to be alive, God created animals. They appear to be animated, they appear
to be moving of their own volition – but really what they are is they’re
automatons. They don’t have volitions, they don’t have interests, they don’t
have ideas – they don’t have any thought. They have no minds. These are
creatures without minds. In other words, there is no difference between a clock
and a dog.
At the time, there was no anesthesia. Anesthesia had not been invented yet. So
Descartes would cut open animals that were nailed to boards, and when the
animals screamed and people said “Hey René, don’t you think that animal is
experiencing pain?” René would say, “no!”. He would say, “The noise that this
animal is making is really no different from the whining of a gear in a machine
that needs to be oiled”. Hey, look, you might think that this is crazy, but this
guy is considered to be one of the great minds... [inaudible]. You can draw your
own conclusions about that.
So if Descartes were right, if in fact animals are automatons, if they’re
machines, then we really couldn’t have moral obligations to them, any more than
we could have moral obligations to clocks. I could have a moral obligation that
concerns a clock, like I have an obligation perhaps not to smash a clock if the
clock is your clock. Or I have an obligation not to take that machine over there
and throw it at you, because – but that’s an obligation that I owe to you. It’s
an obligation that concerns machines, but it’s an obligation I owe to you.
So Descartes would accept that there may be obligations that I have that concern
animals. For example, I may have an obligation both legal and moral not to
injure your cow, because that cow is your property. But Descartes did not
believe that we could have obligations that we owe directly to animals, because
they weren’t the sorts of creatures to whom one could have moral obligations,
any more than that is the sort of device to which one can have a moral
obligation.
So that’s the first way of thinking about animals as things, as exemplified by
Descartes. Now Descartes was a pretty wild and crazy guy. He was unusual in
Western thinking, in that most people really didn’t think that animals were not
sentient – for example, they weren’t perceptually aware and able to feel pain.
Most people, like Aristotle, like Aquinas and Kant and Locke and basically most
of the other thinkers of Western civilization – again I want to limit... when
I’m talking about this idea, I understand there’s a huge difference. When you
start talking about Eastern civilization, you’re then getting into a very, very
different way of thinking about non-human animals, particularly because of the
way the concept of “ahimsa” or non-violence has played in various religions like
Hinduism and Jainism and Buddhism, and so it’s very, very different. I’m talking
about Western civilization, which is basically what influences us most and what
is really the background noise of our lives in terms of how we think about
non-humans, at least for most of us.
Descartes is sort of in a class by himself, then we’ve got everybody else. And
everybody else – Kant, Locke, everybody – they recognized that animals had
interests. They recognized that animals could feel pain. They realized that
animals were perceptually aware. They realized animals had interests – there
were things they preferred or desired or wanted. But that it was all right for
us to treat animals as if they were things, as if they were automatons, as if
they didn’t matter – because they were inferior to us. And they were inferior to
us in two ways. And these overlap. They were either inferior because they were
spiritually inferior, they were made, you know – we’re made in God’s image,
particularly those of us who are born male [laughter]. And animals are not
created in God’s image. They don’t have souls, they’re our spiritual inferiors.
That’s an idea that you see in a lot of thinkers.
The other sort of inferiority was cognitive or mental inferiority. Kant, a
German philosopher, recognized that animals were sentient, that they were
perceptually aware, that they felt pain, that they had interests – he understood
that. He understood that one could harm them. Whereas for Descartes, you can’t
really harm an animal any more than I could harm that machine. I could wreck the
machine, I can’t harm the machine. But Kant recognized that we could harm
animals. But he thought that it was all right for us to exclude them from the
moral community, because they were not rational, they were not self-aware, they
didn’t have minds like ours, they weren’t capable of engaging in moral reciprocity.
We have moral obligations to each other; we can reciprocate; we can have a
reciprocal moral relationship. Animals can’t do that, Kant thought. Animals
aren’t rational, Kant thought. This is what permeates most of Western
thinking about non-human animals. It also permeates Western thinking about
slaves. It permeates Western thinking about women – at various times. But its
most extreme form was in talking about non-humans.
And so, people like Kant, people like Locke... Locke for example, thought
animals were rational, but he didn’t think that they were capable of
understanding abstract concepts. They didn’t have concepts like a class; we know
there’s a bunch of things called chairs, they all look different, but we
understand, we have the concept of a chair. And we know that it can be a beanbag
thing, or it can be one of these sorts of things, or it can be a big plush
chair, it can be... there are all sorts of chairs. But we have the concept of a
chair. He did not believe that animals had abstract concepts – therefore, we
could treat them as if they were automatons or machines, or we could exclude
them completely from the moral community. This is also reflected in the law.
Basically, before the 19th century, you don’t have legislation... you have
legislation that protects people’s property. So if you had malice towards me and
you injured my cow, you might be prosecuted for malicious mischief. But that was
because you had malice towards me, and you wrecked some of my property. It
really didn’t matter whether you damaged my cow or my tractor. What mattered was
you had malice to me. Basically, there wasn’t legislation before the 19th
century that recognized that animals had some sort of legal personality, were at
least partial members of the moral and legal community, and that they were
beings to whom we had direct moral and legal obligations. This changed in the
19th century. First in England, largely as a result of progressive social
movements where people started to say “Gee, you know, slavery is not really a
good idea, and there’s a problem with women not being able to vote”. So
progressive social movements start rising at the end of the 18th and beginning
of the 19th century. You have people like Jeremy Bentham, who was a lawyer and a
philosopher, and he said, “You know, what difference does it make if they can
think rationally or they can use symbolic communication with language? What
difference does it make? What matters is they can suffer – and if they can
suffer, they matter morally”. That sentience is all that is necessary to be a
member of the moral community and for us to have direct obligations that we owe
to the other, basically.
The problem is – it sounds really revolutionary. And it was in certain ways, but
it wasn’t in other ways, in that Bentham thought that animals were sentient so
therefore they mattered morally, but he thought because they weren’t rational –
and particularly, he didn’t think they were self-aware. He didn’t think they had
an interest in their lives. He didn’t really think that they thought about
themselves as ‘selves’. So basically, what Bentham says – and I actually do
have, for those of you who want to be... I would say pedantic, or perhaps
obsessive about it, I do actually have quotes on cards that I was going to make
you all read. But I decided I didn’t want to put people to sleep – you know,
“Here’s a quote from Jeremy Bentham – let’s all read it” [laughter]. But I do
actually have the quotes, and if you’re that sort of person, and you want to
incur a reaction of others in the group who will scorn you and hate you
[laughter], but when we do the question and answers section I’ll have the quotes
on the thing and you can read it.
In any event, so what Bentham said was, “Hey look, they can suffer, and that’s
all that’s necessary. It doesn’t matter whether they can think rationally,
whether they can do mathematics, whether they can use language, whether they can
use symbolic... That don’t matter. What matters is that they can suffer.
However, because they don’t have minds like ours, they don’t really have an
interest in their lives. They don’t care that we use them, they only care how we
use them.” So he talks about how eating them, that’s OK, and if you ever saw
pictures of Jeremy Bentham, you know he didn’t get that stomach from eating
vegetables [laughter]. So Bentham said it’s all right for us to eat them, that’s
fine. Because they don’t care, they don’t care if we eat them. They just care
about how we treat them. We have an obligation to treat them well. And thus is
born the animal welfare movement with this fundamental foundational premise that
animals don’t have an interest in continued existence. They don’t have an interest in
their lives. And this is an idea that sort of permeates... basically, the folks
who are thought to be the founders of the animal welfare movement, John Stuart
Mill thought similarly, [unintelligible] to Bentham and as did most people then.
And what’s interesting now is that the person that I would identify as the
leading spokesperson for the animal welfare movement now is Peter Singer. And it
is a fundamental part of Peter Singer’s view – that animals don’t have any
interest in continued existence. He thinks that the non-human great apes do. He
thinks that some other animals might, like dolphins. And basically what Peter
said is that animals don’t have an interest in their lives. It’s very important
about how we treat them, but he draws a distinction between the killing issue
and the suffering issue. Which is why he talks about, he says things like
“Veganism is a good thing generally to reduce suffering, but the luxury... if
you want to eat meat now and then, that’s fine. And there’s nothing wrong with
that, and people like to be vegans but they like to go out to expensive
restaurants and have disgustingly horribly tortured corpses, that’s okay”, that
sort of thing. Because he draws this distinction between killing and suffering.
And he sees veganism only as a means to reduce suffering – which, again, is
really something that permeates a lot of quarters of the movement now. So I’m
going to get to that in a little while.
Now, the animal rights view rejects this position. The animal rights position
rejects the view that it’s all right to use animals, and it rejects basically
the foundational premise of the animal welfare movement – that animals don’t
have an interest in continued existence. At least as I have developed that view,
I mean Regan’s view and mine differ in certain ways, but basically the position
I take is that if you’re sentient, you’re self aware. The notion that Bentham
said that, “You can be sentient but not self-aware”, not have a sense of
yourself, that’s a very bizarre notion to me. And one of the things I find very
strange is that a biologist at Harvard, his name is
Donald R. Griffin, he died not too far in the past. And Don Griffin was not
an animal rights guy, he was a biologist. And he wrote a book called,
Animal Minds. And he’s a biologist who was interested in cognitive
development. And one of the things that Don said in his book was, “If an animal is
perceptually aware, and the animals are up there watching other animals running
up the tree – and the animal realizes, on some level, on some level, the animal
realizes that, ‘Hey, it ain’t me that’s running up the tree, somebody else is
running up the tree’ ”. So if I’m perceptually aware, Don argues, I have to be,
on some level, self aware. And I think really the problem with Bentham’s view
and Singer’s view that animals aren’t self aware is that it’s really tied to
this notion that in order to be self aware, you have to be somebody who looks in
the mirror and says “Hey that’s me”. That’s one way of recognizing yourself, but
it’s not the only way of recognizing yourself. Or Singer talks a lot about the
ability to think in the past and anticipate the future – that in order to have a
sense of yourself, you have to have a sense of the past and a sense of the
future. And the answer is yes, but that’s one way of having a sense of yourself,
it’s not the only way of having a sense of yourself. How many people in this
room saw the movie, Memento? Well, then you are aware of the phenomenon of
transient global amnesia. The guy in Memento was sort of ‘stuck’ in this
perpetual present, which is the way Bentham viewed and the way Singer views the
mind of most animals. Bentham viewed all animal minds that way. Singer views
animal minds that way that aren’t non-human great apes, dolphins and perhaps
some other species. But basically, the welfarists see the animal mind as rooted
in the continual present. For those of you who didn’t see Memento, it’s actually
an interesting movie, because it involves a guy who’s got transient global
amnesia which is a neurological phenomenon where you don’t have a sense of the
past, you don’t have a sense of the future, you have a sense of yourself right
now, right here. And it doesn’t go anywhere else. You have a sense of yourself,
it just doesn’t go into the future, and it doesn’t go into the past.
Would we say it’s all right to use such a person in biomedical experiments to
help people who didn’t have transient global amnesia? Is it all right to take
the organs out of somebody who has transient global amnesia in order to save the
lives of people who don’t have transient global amnesia? Most people I meet,
virtually all of them, would say, “No, that would really be horrible”. So a
person who’s got transient global amnesia may have a different sense of self
than the sense of self I’ve got, you’ve got, and most other people have, but
there’s still a sense of self there.
Furthermore, the notion that sentient beings, that something can be perceptually
aware, able to feel pain, and not have an interest in continued existence
strikes me as nonsense. I mean think about it for a second. What beings are
sentient? Well, we may not know the answer about insects and stuff like that,
that’s a question I get, “What about the insects?” [laughter], the “what about
insects” question. And I don’t know about insects. I don’t kill them. When
they’re in my house and they’re too large that I don’t feel co-existence is
plausible, [laughter] I will catch them and put them outside.
I will never forget, my first job at a law... Actually, I was in graduate school
and law school at the same time. My first job out, I was a clerk for a federal
judge in New Orleans, Louisiana. And I went to the University of Virginia, and I
knew bugs, but I didn’t know what a palmetto bug was, which is a nice term for a
big cockroach [laughter]. And I’d never saw one of these things until I was down
there. And there was a question in my mind as to whether or not it would be
inconsistent with my animal rights position to ride them to work [laughter].
They were big. And I’ll never forget, when we first moved down to New Orleans,
we were living in the French Quarter, and we went into our apartment, which we
had rented without seeing it. And we went in, and the landlord had set a roach
bomb, and there were these bugs. And I walked in and I saw these things, and I
said “What are these?”. And Anna Charlton, who is my colleague and my partner,
was from Britain. They don’t have things like that in Britain because, you know,
the rain kills everything [laughter]. And so we were both a little concerned
about it. So we called the Orkin guy. And the Orkin guy comes over, and he had a
briefcase. And he opens the briefcase, and he’s got Plexiglas blocks in it, and
they’ve got all the different sorts of cockroaches in them. And he showed us the
one that we had. We listened to him for a while, and I said, “My only question
is, how do we do this so we don’t kill them?” And he just looked at me and said,
“You don’t want to kill them?” And I said, “No, no, there must be a way to not
kill them”. And he just, very quietly and very gently, put the blocks back
[laughter]. He closed his briefcase, and he said, “There’s nothing I can do for
you” [laughter]. And he walked out of our house, and what we learned later on
was that they don’t like light. So we had all of our food, everything, including
dry breakfast cereals, in the refrigerator for an entire year. And we didn’t
have any food in the cupboards, and we kept the light on – environmentally it
was not cool, I agree. But we kept the lights on and we hoped they stayed... And
they fly. When we first saw one, we had very high ceilings in the living room,
and I thought it was a bird [laughter]. I don’t kill things that crawl, but the
one thing that we did know was that, with respect to all the animals that we
exploit, all the chickens, all the cows, all the pigs, all the fish, you know, I
mean mollusks, maybe an open... however, I don’t eat them either.
But you know, with respect to all the animals we routinely exploit, those
animals are sentient, they’re able to feel pain. Those animals have evolved
sentience in order to survive. Sentience is not something which develops for the
hell of it. It is a characteristic which develops in beings that move and can
use sensation to get away from things which are dangerous to their lives. So the
notion that for somebody to say, “X is sentient, but X does not have an interest
in continued existence” strikes me as nonsense.
I recently came back from a wonderful time at a major university in which I was
speaking just to faculty. It was a faculty retreat, and I was the guest person
who was the outside faculty, the person who was coming and talking about animal
issues. And we spent a lot of time – these are people who are not animal people.
They’re people who haven’t really thought about these issues. So we ended up
spending a lot of time talking about whether plants are sentient – which is
something we can talk about, if you want.
And I’ve always been a little puzzled by that: why do people think that plants
are sentient? I mean why would, just as a matter of common sense... you know,
we’re not in Kansas, so we all basically accept evolutionary theory. So why
would plants evolve the characteristic of being sentient, if they can’t do
anything about it except stand there. And if you take a cigarette lighter and
you put it to a dog, the dog will behave just like any of us would behave. You
take a cigarette lighter, and you put it to a plant, the plant will just
shrivel.
But the idea that animals are sentient but don’t have an interest in continued
existence strikes me as just being totally crazy. So the animal rights position,
first of all, rejects the notion that animals don’t have an interest in
continued existence because they’re not self aware. It takes the position that
they’re self aware... And I’m perfectly happy to acknowledge that because we use
language, because my concepts, your concepts, because our heads, everything that
goes on in our heads is very, very much tied to the fact that we use language,
symbolic communication. All our concepts are very, very intimately tied to our
language. I don’t know what it would be like to be conscious and have concepts
that aren’t linked to our linguistic characteristics that we have. But that
doesn’t mean that animals don’t think or have very, very complicated ways of
thinking.
As a matter of fact, if you’ve ever lived with a dog or a cat and you wondered
whether they can think, I find it peculiar. I never had a dog until I was an
adult. I grew up in a house where my brother had allergies, and I had allergies.
We had frogs and snakes and stuff when I was a child, but there’s a limit to how
you can... you can call a snake, the snake doesn’t come. [laughter,
unintelligible]. And most of those animals have very complicated cognitions,
snake cognitions and frog cognitions and whatever. Now that I have dogs, and I
have been living with dogs since I was 28, it’s really quite remarkable how
anybody could wonder about whether they think. Are their concepts different? I
have no doubt that their concepts are different from mine, but there’s also no
doubt in my mind that they have some sort of equivalent concepts of certainly
rationality, of abstract concepts.
I have a Border Collie. I have a rescued Border Collie – she’s one of the dogs
that we have. And she’s one of the most remarkable animals I’ve ever met. She
understands quite a bit. She’s smarter than most of the human animals
[laughter]. She’s very, very smart. She clearly doesn’t have concepts that are
the same as mine. I don’t know what her concepts are like, because her concepts
are not based on language, but there’s no doubt in my mind that she has
equivalents.
The animal rights position rests very heavily on the principle of equal
consideration. You have to treat similar cases similarly. Now we accord, or at
least we in theory accord every human being the right not to be property.
Slavery still does exist, but you know what: nobody but Bentham... You don’t
hear people say, “Well, we’ve discovered slavery here, and we think it’s a good
idea”. But who knows, there are people, Republicans [laughter], who may think
that way. But most people think that – [to audience member] Are you a
Republican? [laughter]...I’m kidding, I’m only kidding.
Audience member: I’m independent [laughter continues]
Gary Francione: So nobody thinks that slavery is a good thing. And we regard
every human, every sentient human... We might have a debate about what you do
with somebody who’s irreversibly brain dead. With something like that, I think
we could have an interesting philosophical question. We could also have an
interesting question about early-term abortion. I don’t think that there’s any
evidence whatsoever to suggest that first-trimester fetuses – which is when most
abortions occur – are sentient. So if
you’ve got beings which aren’t sentient, [unintelligible] souls, I don’t want to
offend anybody who believes they have souls, I want to make it clear. But we
regard every sentient human, irrespective of whether they’re intelligent,
whether they’re stupid, whether they’re geniuses or severely mentally
challenged, or whether they’re really beautiful or not, or whatever their
personal characteristics are, we regard every human as having a pre-legal – it’s
protected by the law, but it’s really sort of a pre-legal issue – it’s the right
you’ve got to have in order to have any legal rights: you can’t be somebody
else’s property.
That’s what’s particularly insidious about slavery. All forms of exploitation
are bad, but slavery is particularly bad. Because slavery treats someone as an
economic commodity, and it empowers somebody else to value all of that person’s
interests, including that person’s interest in continued existence and having
suffering inflicted on him or her... All of the fundamental interests are valued
by somebody else, i.e. the owner. And most of us think that’s not good, that’s
not a good situation. And we regard every human as having a right not to be the
property of somebody else.
So now the question becomes: Is there any reason, any logical, morally sound
reason, other than speciesism – which is not logical, nor morally sound and it’s
no different from racism, sexism, heterosexism, ageism, and any of the other
irrelevant criteria that we use and have historically used to exclude people
from the moral and legal community – is there any reason to say that animals
should have the right not to be treated as property? The answer is no, there is
not. And whatever defect it is that we believe they have that entitles us to
treat them as commodities is a defect that some of us have, and yet we would
never think that it’s appropriate to use those people who have that particular
quote defect end-quote as forced organ donors, in painful biomedical experiments,
or as slaves.
So if we accord this one right to animals, the right not to be property, what it
basically means is, we have to get rid of all institutionalized exploitation. It
means no more domestication – I mean we take care of the animals that we’ve got
now, but we don’t bring more domestic animals into existence. People always say
“Aha, but you have dogs”. Yes, I have dogs, they are rescued animals. They are
refugees, they live with me. They would all be dead if they didn’t live with us.
You will never find anybody on planet Earth who enjoys hanging out with dogs
more than I do. Yet if there were two left on the planet and it were up to me
whether they were going to breed so that we could continue to have pets, the
answer would be no way, absolutely not. We should not have domestic animals. I
mean when you think about it, animal ethics deals with how we deal with
conflicts between human animals and non-human animals. We manufacture the
conflicts – the conflicts are false conflicts. We bring these animals into
existence – we drag them.
One of the books I wrote is, Introduction to Animal Rights: Your Child or the
Dog. And on the front of the book there’s a burning house with a kid in one
window and a dog in the other. And what I tried to argue in the book – no,
actually, I do argue in the book – is that we drag these animals into the
burning house, and then we say “Oh God, what are our moral obligations?”. We’ve
created them, we’ve dragged them. We create domestic animals, we facilitate the
creation of domestic animals, we drag them into the burning house, and then we
say, “What are our moral obligations?”. The answer I suggest to you is
foreordained: they lose, we win. That’s the way it always is. That’s the way it
worked with slavery, by the way. We’re going to talk about that in a few
minutes.
Now, one thing I want to say before I go on to the next point is that welfarism
relies on the notion that less suffering is better than more suffering. Well,
you know what – duh, yeah, that’s right. I mean that’s sort of hard to argue
with. Obviously, it is better to inflict less suffering than it is to inflict
more suffering. However, that doesn’t mean that it is a good idea to maintain
that inflicting less suffering is a morally desirable thing that we ought to be
praising.
It really bothers me very, very much when I see things like the Certified Raised
and Humane label which is sponsored by... I forget the primary organization, but
it is supported by HSUS. Or the Animal Compassion Standard of Whole Foods, which
is supported by PETA, Animal Rights International, Farm Sanctuary, Vegan
Outreach, and virtually all the other large organizations. Or Freedom Foods,
which is supported by the RSPCA. It really bothers me that we’re telling people,
“Hey, buy these... these corpses and these animal products which have been
tortured. This is a morally desirable thing”. And again, I actually have
photographs of some of the websites on my PowerPoint presentation – which you’re
being deprived of at this very moment – in which people are encouraged to buy
this stuff. This is how you show you care: go to the store and buy Freedom
Foods. This is how you show you care: go and buy your corpses from Whole Foods.
This is how you show you care: get a HSUS certified ‘humane raised and
handled’-labeled corpse or animal product, and that’s a good thing to do.
Think about it for a second. If I murder you, is it worse if I torture you? Yes,
as a matter of fact, I teach criminal law in addition to animal rights (you
know, like Rutgers is not Cambridge, I don’t just teach animal rights). I teach
criminal law, criminal procedure, I teach evidence. And when I teach criminal
law, yes, if you kill somebody, in a lot of states, if you torture them in
addition to killing them, you can become eligible for the death penalty. I
always thought that was weird, “eligible for the death penalty”, you want to
say, “Hey, I’m eligible for the death penalty!” [laughter]. But you are eligible
for the death penalty. If you torture somebody in addition to murdering them –
but if you murder somebody without torturing them, we don’t give you an award.
We don’t do what PETA did with slaughterhouse designer Temple Grandin and give
her an award. We don’t do what PETA did and give Whole Foods an award for the
best animal-friendly retailer.
Assuming that these welfare regulations actually do something, and I’m not going
to admit for a second that they do – but assuming that that were the case, it
still is bizarre that we’re telling people, “Hey, you didn’t torture somebody,
you murdered somebody and you could have used that cigarette lighter for half an
hour, you only did it for 25 minutes, you get an award! We’re going to give you
an award”. And there’s something very bizarre about that.
So yeah, obviously it’s better to inflict less suffering rather than more, but
that doesn’t mean that inflicting less harm is a morally desirable thing to do
in the sense that we want to normatively praise it or encourage it or promote it
as the goal of a social movement. I think it’s bizarre.
All right [fakes panting, as if tired], now point two – that was just point one
[laughter]. Point two is that I have a very practical concern: animal welfare
regulation does not work. If animal welfare worked, it would really be
interesting to discuss whether or not it was morally the right thing to do. But
you know what? It doesn’t work. And it
doesn’t work for the following reason: animals are property. They are economic
commodities. They don’t have any intrinsic, inherent value. They only have
extrinsic or conditional value. They have economic value. They have only the
value that we accord them.
Now, if you look at the history of animal welfare, basically most animal welfare
measures – and I’m going to talk about a couple of them. And this analysis or
this framework that I’m going to give you is true of virtually all animal
welfare measures. What animal welfare does is it makes animal exploitation more
economically efficient for producers, and it makes meat cheaper for consumers,
or animal products cheaper for consumers. Let me give you an example of what I’m
talking about. You see the theory is, the theory of the animal welfarists is,
well, if we regulate it, it’s going to make it more expensive. And if we make it
more expensive, then the demand will drop because output will drop. And then
only people with money – not only is it wrong, but it’s elitist. It’s basically,
“Well, you know, let everybody else eat Styrofoam or whatever” [laughter]. But
the rich people, people with money can still afford to buy animal products. It’s
a very, very troubling theory in a number of different ways. But the basic
position of animal welfare is this: by regulating you increase production costs,
you decrease output, you decrease demand. So you’re shifting the demand curve
over. It doesn’t work that way.
Case in point: the Humane Slaughter Act of 1958. And this by the way is the area
in which I’m doing most of my research now, I’m working with an economist who’s
an econometrician, she’s a microeconomics person and an econometrician. And what
we’re doing is we’re examining various instances of animal welfare, both in the
United States and in Europe. And what we’re finding is that basically, the
animal welfare regulations fit this category pretty clearly: what animal
welfare regulation does is that it actually makes animal exploitation more
efficient. Animal industries are very inefficient, so they do not operate in
accordance... I mean, whether any industry operates in accordance with the
economic model of efficiency is a big question, but certainly animal
exploitation industries don’t. And there are all sorts of reasons for that
historically, as to why they don’t. We’ll get into that later on, if you’re
interested. But it’s not an efficient industry. And what animal welfare
regulation does is it makes it more efficient.
Case in point: the Humane Slaughter Act of 1958. Look at the legislative history
of the Humane Slaughter Act, because it’s really quite instructive. Basically,
it was something that when you slaughter an animal – a big animal – a cow, a
sheep, a pig – you’re basically putting the chain around the animal’s legs and
hoisting the animal up. Now, that animal is moving around a lot. A 2,000 pound
animal, a cow for example, moves around a lot, kicks people. Injured workers,
carcass damage occurs. If you look at the history of the Humane Slaughter Act –
which is very, very minimal – all it does is require that before the animal is
shackled or hoisted and you cut into the animal, the animal has to be stunned,
unless it’s kashrut or halal. But the animal has to be stunned. It really
provides very, very limited protection... There are all sorts of parts of the
animal’s life that are not touched by the Humane Slaughter Act, including a lot
that goes on at the slaughterhouse. It applies only at the very moment of death.
Why? Because we don’t want the animal moving around a lot, because there were
all sorts of worker injuries, and there was carcass damage. And so if you look
at the legislative history, you see
Congress was quite explicit in saying “We believe this is good legislation,
because it will cut down on worker injuries, it will produce higher quality
meat – it’s economically justified”. And if you look at the campaigns we’re
dealing with right now, whether it’s the gestation crate campaign which is going
on, or the controlled atmosphere killing campaign. And again, I have a blog
essay coming out on this probably when I get home tomorrow night, so it will
probably be Tuesday sometime in which I talk about this. But if you look at the
most recent campaigns, the gestation crate campaign, the controlled atmosphere
killing campaign, you see HSUS and PETA basically promoting these things on the
ground that they will cut down on production costs. For example, sow
productivity – that’s not my expression, that’s HSUS’s expression – is higher if
you don’t use a gestation crate. Studies showed it, and that’s true. If you take
an animal out of a gestation crate and use what they call electronic sow
feeding, which allows you to have animals in this – you don’t give them a lot of
space, you give them more space, but not a lot of space – it actually cuts down
on veterinary costs. It causes their reproductive cycles to function more
efficiently from the producers’ point of view. And it makes the animals more
productive, and it cuts down on the production costs. For controlled atmosphere
killing, even though the Humane Slaughter Act did not apply in 1958 to chickens
because people thought, “What the hell? Chickens? If they move around, they’re
not really going to kill anybody are they? It’s not like a cow would move around
a lot, chickens moving around a lot, bumping into somebody, ain’t going to cause
worker injuries”. But one of the things we now know, as a result of studies done
by the meat industry, is that the way we’re killing poultry now results in a lot
of carcass damage. That’s economically un-cool, that’s not good for the
producers, and that’s not good for those of us who eat chicken. Because the more
bodies they have to throw away, the higher the cost – it’s not efficient. So one
of the arguments which is made explicitly – actually I’ve written other essays
about this in which you can find these links, but the one I’m going to do on
Tuesday deals specifically with this. You can actually find the website,
[unintelligible] the links for HSUS and PETA, and you can read their literature,
and you can see what they’re saying – and what they’re saying is, “Look at the
studies”, they’re citing the studies done by the meat industry, done by poultry
scientists, done by sow productivity scientists – basically, agricultural
economists. And what they’re telling us is those studies show that by providing
some added protection to animal interests, you’re actually putting more money in
the pocket of producers.
Ted Barnett, do you have a question, sir?
Ted Barnett: Why would something that helps the industry require legislation?
Gary: Ted, that’s a good question, and if I had planted that one, it couldn’t
have been any better [laughter]. Because the industry is inefficient Ted...
Ted: Why is it inefficient?
Gary: Well, the reason for that is because factory farming, intensive farming,
developed about 50 years ago on the theory that if you got ten animals and you
got them in a space and you’re making a dollar, and if you add ten more animals
you’re going to make two dollars. And basically, the thinking was we’ll get a
greater economy of scale: the more animals we shove into this building, the more
money
we’re going to make. And nobody ever thought about the fact that these are
sentient beings, these are beings who are going to respond to the stress that we
impose on them, in ways that will economically screw us. So what’s happening now
is the industry – I mean, I don’t know enough economics to know, although my
economist colleague tells me this is not uncommon in other industries –
basically, this industry developed fairly rapidly, and the information wasn’t
perfect. There wasn’t perfect information, so there couldn’t be a purely
rational response. So it is only now that information is starting to come out
which suggests that these practices which have been used by animal agricultural
people are not economically efficient.
And you can look at this with respect to animal experimentation. I first started
thinking about this when I was teaching at Penn. I went to a meeting of
vivisectors at the medical school. And this was at a time when they were
thinking about the 1985 amendments to the Animal Welfare Act, which would
require an animal-care committee and certain sorts of animal – I hate to use the
word “husbandry” – improvements. And I remember going to that meeting, and a lot
of vivisectors were really very unhappy about it. This one guy got up and he
spoke. He said, “I don’t know why you all are upset about this. Because think
about it, all it says is we can’t expose the animals to stress. If it’s not a
dehydration or starvation experiment – which we can do, we can do those, it
doesn’t stop us – but if it’s not a dehydration or a starvation experiment, and
we don’t give them food and water, what that’s going to do is it’s going to
introduce stress. That’s going to result in scientific data that is compromised.
So we’re going to be adding a variable that is not a good variable to add. It’s
a variable that we’re not controlling for. And this is going to result in bad
science. All this is doing is saying that we’ve got to use the animals in a way
to get good data out of them”. I’m sitting there and thinking, “Wow, this is
really very interesting”. And I believe that the Animal Welfare Act of 1985
actually passed. And subsequent to that, I had all sorts of talks with people at
NIH who said yes, publicly we’re going to oppose it, but it’s going to be great
for us, because it’s going to allow us to go around saying “We’ve got animal
care committees, and they function just like institutional review boards which
decide human experimentation”. And they do. Every time I’m debating with a
vivisector, they’re always saying “I don’t know what these animal people are
upset about. The animals have the same protection as humans have. They have
institutional review boards that say whether human experimentation is OK, and
they have animal care committees that say whether animal experimentation is
okay”. Of course, the big difference is humans have to give informed consent,
animals can’t – and they can be used for all sorts of purposes that you could
never use humans for. So there’s a huge, huge qualitative distinction. But the
Animal Welfare Act of 1985 was a great boon for vivisectors, and it doesn’t stop
them from doing anything, except introducing variables which are going to result
in an inefficient use of this animal property. So animals are economic
commodities. And if we look at animal welfare, both historically and current,
contemporary animal welfare campaigns, we see that these are based upon the
concept of efficient exploitation. That basically the argument that’s being made
is, “Do this, it will actually make things better – it will improve your
production efficiency”. And it shouldn’t surprise us that this is the only sort
of thing that – I mean, think about it. If you’re a producer, you’re not going
to say, if something is worth a dollar, and you can use it productively by
spending 30 cents – your total cost of use is 30 cents, and you’re getting a
dollar. You’re not going to spend 35 cents to make that same dollar – that would
be economically inefficient. That example was from the production standpoint.
And as consumers, some of us – “affluent altruists” or whatever – might say,
“All right, I’ll pay a little bit more for quote humanely raised meat or
something”. But the bottom line is, most of us, if we really cared about
animals, so that we would really be willing to pay a lot more money for our
animal products, we wouldn’t be using them. If we really thought that much, if
we really cared that much, if we really thought they had that much value that we
could impose that sort of cost on our use of them, I suggest to you we wouldn’t
be using them.
So, from a production standpoint, a consumption standpoint – and let me also say
this: there’s something in economics called elasticity of demand, and that just
has to do with how demand responds to price increases. So if
the demand curve is inelastic, you can raise prices and people will still buy
the product. For example, if you smoke a particular brand of cigarette, and you
really like those cigarettes, they can raise the price, and you’ll end up
spending less money on other items, and you will buy that brand of cigarette. At
some point in time, because it’s too expensive, you’ll switch and you’ll buy
some sort of generic brand, or you might even stop smoking. But the elasticity
of demand for particular brands of cigarettes is quite inelastic: you can raise
the price, and people will still buy the cigarettes. On the other hand, there
are certain products that you can raise the price, and people will shift to
another product. That’s what you call an elastic demand curve. If the demand
curve is elastic, then price increases will result in changes in demand fairly
quickly. If it’s an inelastic demand curve, then changes in prices will not
result in demand changes – at least until you get a significant increase.
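The elasticity idea described above can be sketched numerically. This is a hypothetical illustration (the function name and the numbers are invented for the example, not taken from the talk): elasticity is the percentage change in quantity demanded divided by the percentage change in price, so a magnitude below 1 means inelastic demand and a magnitude above 1 means elastic demand.

```python
# Hypothetical sketch of price elasticity of demand:
# elasticity = (% change in quantity demanded) / (% change in price)
# |elasticity| < 1  -> inelastic: demand barely moves when price rises
# |elasticity| > 1  -> elastic: demand falls sharply when price rises

def elasticity(q_old, q_new, p_old, p_new):
    pct_q = (q_new - q_old) / q_old  # percentage change in quantity
    pct_p = (p_new - p_old) / p_old  # percentage change in price
    return pct_q / pct_p

# Inelastic case (like the cigarette-brand example in the talk):
# price rises 10%, quantity demanded falls only 2%.
inelastic = elasticity(100, 98, 1.00, 1.10)   # about -0.2

# Elastic case: price rises 10%, quantity demanded falls 30%.
elastic = elasticity(100, 70, 1.00, 1.10)     # about -3.0

print(abs(inelastic) < 1)  # quantity barely responds to the price increase
print(abs(elastic) > 1)    # quantity responds strongly to the price increase
```

The sign is negative in both cases (price up, quantity down); it is the magnitude that distinguishes inelastic from elastic demand.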
For many animal products, the demand is very inelastic. You can raise prices,
and people will continue to pay: the demand isn’t going to change dramatically.
But, even if it does, if the price of animal products goes up... If you look at
the demand for a particular animal product, for cows or for sheep or for fresh
beef or for fresh pork or something: even if the price goes up and people stop
buying fresh beef and fresh... they don’t buy tofu. If you look at the demand
for animal protein as a general matter, the demand is almost infinitely inelastic.
You can keep raising prices, and people will simply buy other animal products.
So if you raise the price of beef too much, they’ll buy chicken. If you raise
the price of chicken, then they’ll buy fish. Or they’ll buy canned beef, or
they’ll buy canned pork products, or they’ll buy frozen stuff. But they don’t
buy tofu. This notion that I keep on hearing from animal welfare people: “Well,
if you raise the price, they’re going to eat vegetables”. I actually had a
debate with one guy. There’s a fellow in Austria who was debating with me about
this, and he basically said, “If you raise the price of animal products, people
eat vegetarian foods”. The answer is, no they don’t. They don’t! If you raise
the price of animal products, then people buy other animal products – they don’t
buy tofu. They don’t buy zucchinis and say, “Beef is a bit expensive this
weekend, let’s have zucchini”. [laughter] It doesn’t work that way. It’s crazy,
that’s what I mean.
And the other thing that we’ve got to keep in mind, because the economic reality
is that we now live in a world of free trade agreements. Whether it’s the
European Economic Community, NAFTA, GATT, or whatever. So this guy I was
debating with in Austria pointed to what happened when they got rid of battery
cages in Austria, which they have to do (there’s a directive that says that the
European Community is supposed to get rid of battery cages by 2012 – we’ll talk
about that in a second; that’s never going to happen, but that’s what the
directive says). Austria got rid of battery cages ahead of that deadline, and he
was claiming that the
production of eggs fell 35% in Austria. Now, I haven’t been able to find any
evidence of that. As a matter of fact, all the statistics I have show that the
production of eggs has actually gone up in Austria, it hasn’t gone down. But
let’s assume it went down 35%. What’s going to happen? If there’s a demand for
battery eggs, for the cheaper eggs, they’re going to come in from Poland,
they’re going to come in from Spain, and you can’t stop them. Because under
NAFTA, under GATT, under the European Community rules, you cannot stop the
import of a product from a member nation simply on moral grounds. So you can’t
say, “Well, we got rid of battery cages, so we’re not going to let Poland
export...”. The answer is no, it ain’t going to happen. So, keep in mind:
welfare regulation basically makes animal exploitation more efficient. It
doesn’t increase production costs, it reduces production costs. It doesn’t
decrease output, it increases output. It doesn’t decrease demand, it increases
demand. But even if it didn’t, even if welfare reforms had some effect on price
and that effect had some effect on demand, it would still be the case that
people would just turn to other animal products (number one). Or, they’re going
to demand the same product coming in from a market which is not regulated. So
this just doesn’t work.
Now, I also suggest to you, animal welfare regulations really do not result in
significant protection for animals. There is a big campaign going on now, and I
know that people at Rochester are involved with this Wegmans egg campaign. Let’s
go cage free, and let’s go free-range, etc. I think it’s nonsense. And this is
not just in Rochester, it’s all over the place. Vegan Outreach and HSUS and
everybody promoting cage-free eggs, free-range eggs. As far as I am concerned,
it is at best a fantasy. It is empirically wrong to tell people that they are
doing anything morally desirable by buying cage-free eggs or free-range eggs.
I want to show you a four-minute film made by the folks over at Peaceful Prairie
Sanctuary, which is out in Colorado.
[Difficulties with the equipment prevented Gary from showing the video. It’s
available on-line at the Peaceful Prairie website]
Gary: I’m going to keep going, and I will show this later, if it is wanted. I
have some brochures from Peaceful Prairie Sanctuary here. They have a sanctuary
out in Colorado. They have farm animals there. It’s really an interesting place.
The film I was going to show you basically shows animals that they have rescued
from a free range situation. I have seen cage-free facilities myself.
[A second attempt at setting up the video does not work]
So, I don’t think it makes any real difference. I’ve got some literature up
here. If you want, afterwards, I can show you the video. You can see the video
by going to.
I think it’s called ‘Free Range Myth’. You watch it, you draw your own
conclusions about whether you want to spend your time talking to people telling
them, “You can do the morally right thing by eating eggs from these sorts of
birds rather than from these sorts of birds”. It doesn’t make any sense to me,
maybe it does to you – if it does, God bless you.
I also think that if you look at some of the animal welfare regulation... For
example, look at the European Union. We hear from people like Peter Singer or
institutions like Animal Rights International, some of the other animal groups,
“It’s wonderful that the Europeans are so far ahead of us – they’re getting rid
of battery cages by 2012”. Nonsense. First of all, it’s never going to happen.
Secondly, under the EU directive, you can use what they call “enriched cages”,
which are basically battery cages with some straw in them. That’s what people
are calling a great victory, a tidal wave of progress, things like that. It’s
nonsense, it’s absolute nonsense.
There are a lot of loopholes in animal welfare legislation. For example the
Californian foie gras ban is a perfect example. I cannot understand why anybody
thinks that that’s a victory for animals. It’s a ban that’s supposed to be
coming into effect in California in 2012. It was supported by the guy who owns
the only foie gras place in California, the Sonoma Company. Why was it supported
by him? Because it basically immunized what he was doing until 2012, and the
legislation is absolutely clear: if they can come up with a more quote humane
way, the ban goes away – and they’re doing experiments right now to show that
there are more humane ways of force-feeding geese or producing this product.
That law is never going
to come into effect. So it didn’t come into effect when it was passed in 2005
or 2004 or 2003, or whenever it was passed, and even though it’s not going to
come into effect until 2012, it will probably never come into effect.
Look at the Chicago ban which was repealed last week. A lot of these things
aren’t enforced. Look at Britain with the hunting ban: it is not enforced at
all, and it’s probably going to be repealed as soon as the Tories get into
government. And I think that basically these sort of things have one great
effect – if you want to talk about animal welfare having an effect, I’ll tell
you the effect it has – it makes people feel better about exploiting animals. It
makes us feel better. Because we feel, “Hey, we’re doing something right.”
Vegan Outreach, I met one of their representatives – they said I should eat
cage-free. “We’ve got cage free eggs in the college cafeteria, and we’re doing
the right thing”. So what we’re telling people is, “Yeah, this is a good thing
to do. Eat cage free eggs, you’re actually reducing suffering”. I suggest to
you, if you could only see this video – maybe you will sometime tonight, but if
not, when you go home. If we can’t figure out how to deliver it to you, when you
go home tonight, you watch it – and draw your own conclusions about whether
telling people to eat cage-free eggs is anything you want to spend your time
doing. Yeah [calls on questioner].
Female: From what I remember, Karen Davis of United Poultry Concerns has the
same point of view as you, right?
Gary: No, actually, I don’t think she does.
Female: Oh, she used to, sorry, I’m not...
Gary: Anyway, and I think it’s interesting, Farmed Animal Net, which is a
website that is sponsored by PETA, HSUS and a couple of other organizations,
they had a big article about how wonderful it is that Strauss Veal, the biggest
veal producer in the country, is going to get rid of, phase out veal crates. Go
read the article. Go read the article, and see what Randy Strauss, head of
Strauss Lamb and Veal I think is the name of the company, what he says. And what
he says, basically, is that, “We will increase the consumption of veal by
getting rid of veal crates”. And the studies show, again, that if you get rid of
veal crates and you have the calves in slightly larger units, your veterinary
costs go down, your production efficiency increases, and because you can market
this stuff as humane, per capita consumption increases.
By the way, there’s an article in the New York Times, not always known for
accuracy, but nevertheless – the New York Times said that in 1961, the world
meat supply was 71 million tons. In 2007, it was 284 million tons. And before
anyone says, “Yes, but the population’s increased”, per capita consumption of
meat in that 46-year period has doubled.
Third point: Regulation does not lead to abolition – there is no empirical
evidence whatsoever to suggest that animal welfare regulation causes people to
think in an abolitionist way – that it moves us toward the abolition of animal
exploitation. Look, we’ve had animal welfare for 200 years. Factory farming
developed at a time when animal welfare was very, very popular in terms of
Western thinking. We have more animals now, being exploited in more horrific
ways than at any time in human history. The idea that animal welfare regulation
is going to lead to abolition is sheer fantasy.
Fourth point: veganism.
The title of my new book, which is quite deliberate, I mean as most titles of
books are, is, Animals as Persons. This book is going to be out in two weeks –
that, by the way, is one of our dogs, she’s a Maltese, who was going to be
killed because she had been returned to the shelter twice as non-house-trainable
[points to photo of dog on enlarged graphic of book cover]. We have had her for
seven years. It is either the case that she has never had an accident, or she’s
so small we’ve just never noticed it [laughter].
So Columbia University Press is publishing it. They gave me a bunch of flyers,
they’re giving pre-publication discounts. Before anybody asks the question, let
me say this: all of my book royalties go to [charities]. Last year, they went to
feral cats and to Peaceful Prairie Sanctuary, and to other sorts of
organizations like that. So, don’t think that you’re putting money in my
pockets.
So veganism is the principle of abolition applied to the life of the individual.
Just as an abolitionist of slavery would not own slaves, people who really
believe that we ought to abolish animal exploitation should not be consuming
animal products. We shouldn’t be eating them; we shouldn’t be wearing them; we
shouldn’t be using them on our bodies; we shouldn’t be doing that. As I say,
this is not a lifestyle thing – this has to do with non-violence and respect for
persons, human and non-human. And I want to get to the fact that when I say
human, because I think that veganism has a lot to do with human rights, just
like it’s got to do with the rights of non-humans. And it has a lot to do with
the respect of human persons, just as it has to do with respect to non-human
persons.
Things are never going to change – this society is never going to change – as
long as we own them, as long as we’re eating them, we ain’t never going to find
our moral compass while they’re sitting there on our plates. It’s never going to
happen, it’s never going to happen.
It’s important to understand that when welfarists talk about this “two-track”
approach – “Oh, well, it’s all right to be vegan, but, you know...” – it’s like,
“It’s all right to be vegan”, although you’ve got people who describe it as
fanatical. The welfarist literature about veganism is in my judgment very, very
disturbing. To the extent that the welfarists say, “We ought to promote
regulation, and we ought to promote veganism” – first of all, I don’t know why
they’re promoting regulation. I don’t know what empirical evidence any of them
has to show that welfarism does anything except increase production efficiency.
Nothing. Absolutely nothing. I don’t understand why they do it.
But what I do see happening is this notion that veganism is a way of reducing
suffering, so it’s just like everything else. So whether you’re vegan, or
whether you pass out literature about cage-free eggs, or whether you’re in favor
of some other welfarist campaign, it’s all the same, it’s all lumped in. And I
suggest to you that that doesn’t make a whole lot of sense. Yes, obviously to
the extent that veganism helps reduce suffering, yeah I think it’s a great idea.
But I also think that it goes well beyond that. And as I said, it has to deal
with a real personal commitment to ahimsa, the principle of non-violence. And I
think that really is what the center of the movement ought to be. The idea
that violence is not good. That violence is responsible for the mess that we are
in – I mean, this world right now. And if we really want to think seriously
about moral solutions, we need to be thinking about the principle of
non-violence. We need to be thinking seriously about it, and non-violence starts
with what you stick in your mouth three times a day. And it’s really great to
talk about non-violence as some abstract thing, while you’re having coffee in
your New York café, eating a hamburger or some meat. It’s very interesting to
talk about it as an abstract matter. But you know what, it begins with what you
stick in your mouth. And if you start with violence three times a day, then the
rest of it is just mental masturbation. I’m sorry, I just realized there are
children [laughter]. Sorry, because they’re not sitting there. Sorry, those in
the back: It was Ted Barnett who said that [laughter].
It’s interesting, and I quote this in the blog post that I’m about to publish –
literature from Vegan Outreach and from Peter Singer, saying that we might
actually even have an obligation not to be vegan. If other people think that –
you know, if it’s going to make other people feel uncomfortable, for example. If
we go to somebody’s house and they produce something that’s got animal products,
or when we go to a restaurant and we start quizzing the waiter or the service
person or whatever, “Has this got butter in it? Does it have cream in it? Does
it have cheese in it?”, that we’re just going to make people think, “Oh, well,
these people are fanatical”. Let’s imagine you’re sitting in your friend’s
house, and your friend is showing family movies from the vacation. And all of a
sudden your friend says, “Let me show you a movie I took of the 6-year-old child
next door not wearing clothing”. Would you say, “Look, I don’t want to be sort
of too fanatic about child molestation, so I’m just going to sit there”? Or what
if somebody tells a racist joke? Are we supposed to just sit there and say, “Oh,
I don’t want to be politically correct”? Or, is the right thing to do to say, “I
don’t like racist jokes. Please don’t tell racist jokes. I don’t want to hear
racist jokes”. And so I think this idea that, with respect to veganism, when it
comes to animals, when somebody brings out the meal that they’ve made, and it’s
got cheese on it, you’re supposed to say, “Hey, that’s cool, I’ll eat the cheese
because I don’t want anyone to think that I’m like fanatical. I don’t want
anybody to think... God forbid should they think that I’m consistent about my
moral principles”. And so I suggest that this way of looking at things is really
very strange.
Finally, I wanted to state that it’s a zero-sum game. We live in a world with
finite resources. And every dollar that we spend, every moment of labor that we
spend promoting things like cage-free eggs, is a moment that we’re not spending
engaged in social activism, in the form of creative, non-violent vegan
education.
I was thinking before, as I was eating – way too much of that food... I was
engaging in gluttony before [laughter]. And I was thinking, you know what
activism is: this is activism. Yeah, we’re a bunch of converts. But I was
thinking, activism is getting people to taste this stuff. Like being at fairs
and festivals and events and having this sort of stuff. Put your time into that,
put your money into that, so people understand that, you know what, if they
become vegan, they’re not going to be eating paper [audience claps]. It’s
really, really good: that’s social activism. And that’s social activism:
creative non-violent vegan education. But it’s a zero-sum game. If you’ve got
two hours to spend tomorrow, you have a choice: you either spend it passing out
leaflets at the U of R saying, “Let’s start the revolution – get your commissary
to have cage-free eggs from less tortured birds”. Is that really what the
revolution is about? Or is what we want to do spending our time trying to
convince students not to eat eggs at all, not to eat dairy at all, not to eat
meat at all. And you know what? Yeah, with a lot of people it’s not going to
work. But with some it will. And what we need to do is to build a political
movement – it don’t exist now. The existence of a movement that is opposed to
inhumane treatment – it’s worthless, it’s useless – everybody is opposed to
inhumane treatment.
I just came from Vanderbilt University, where I was talking with people who are
doing animal research. And they say they’re opposed to inhumane treatment. And
everybody I’ve ever worked with in the university for 25 years of my life – and
everybody I meet who uses animals: they all agree. And they mean it sincerely –
we are not to treat them inhumanely. Everybody says that. Everybody agrees with
that.
So the existence of a movement... If what a movement is, is a movement that is
opposed to inhumane treatment, it is useless, it is meaningless.
Final point. On education. People always say to me, “If people do not want to go
vegan” – you know, you’re talking to somebody, and they say, “I just don’t want
to do it.” Should you tell them, “Well, eat humane”? And the answer is no.
Never, ever, ever, ever say that the consumption of animal products is ever
anything that you put your imprimatur on and that you think is morally right.
When I talk to people and they say, “Look man, I agree with you but I can’t do
this right away”. I always say, OK, fine. You really should, because eating this
stuff is not good. But, if you can’t do it right away, what you ought to think
about doing is, start off with breakfast being vegan. Do that for a while. And
then go to lunch vegan. And then do dinner vegan. And then do all your snacks
vegan. And then watch that beer and wine, because not all of them are vegan. And
get to your substance abuse, and get that to be vegan. [laughter]. And
basically, work incrementally towards it – but never say that eating animals is
okay. Never say to people that, “Oh yeah, you can be socially responsible”. I’m
using this expression intentionally, because this is a quote I believe is
attributable to Paul Shapiro at HSUS that cage-free eggs is – I think, is that
right – a socially responsible thing to do. Never tell people that. Because what
you’re saying is, “That’s good, that’s a good thing to do”. Social
responsibility is a good thing. So you’re telling people that social
responsibility: that’s a good thing. When you have your Wegmans campaign,
saying, you know, great organization – one glaring spot [exclaimed]. ‘One
glaring spot’. That’s a quote from someone – I was looking it up last night.
‘One glaring oversight’ or ‘one glaring thing’ or ‘one glaring omission’. I
don’t know what it was. Something glaring [laughter]. Glaring was the adjective,
I don’t know what the noun was. But cage-free eggs. The fact that they’re
selling battery eggs. So let’s get down to selling cage-free eggs. What does
that say? It says if Wegmans does this, it’s a good organization. Nonsense.
Nonsense. That’s complete nonsense.
So two other points. Single issue campaigns – people ask me about that all the
time: what do I think about single issue campaigns? Then I’ll take your
questions. What do I think about single issue campaigns? I think they can be
very dangerous. Because, in a society where animal exploitation is the default
position and is considered normal, focusing on one thing suggests that – for
example, when you focus on meat. In a society in which meat, dairy and eggs are
all considered normal things to do and that’s all part of the default position,
you focus on flesh, you’re basically saying there’s a morally relevant
difference between flesh on one hand and eggs, dairy on the other. When you talk
about fur – I mean I’ve always had problems with fur campaigns, because I’ve
thought it’s sexist. I thought it was yet another opportunity to go up to women
on the street who are wearing things... For some reason, it’s not all right to
campaign and go up and give a hard time to people who are wearing leather
jackets, because people didn’t want to lose their teeth [laughter]. But it’s all
right to go up to women who are wearing fur coats, I’ve always had a problem
with that. So my deal is when I have to speak at these anti-fur things, to the
extent that I do it, when I did used to do it, I always said as long as I can
call it an anti-clothing event – because as far as I’m concerned, there’s no
difference between fur and leather and wool. I mean wool is absolutely horrible.
The way wool is produced is absolutely horrible. And so I think these single
issue campaigns are really problematic, because they suggest there’s a
morally... in the fur case, it suggests that there’s a morally relevant
difference between fur on the one hand, and leather and wool on the other hand –
but there isn’t.
Final point – human rights, animal rights – there’s a huge intersection. We are
right now seeing food riots in the world. And you know, if I have to hear
another NPR story about how the problem is ethanol, and the problem is the
demand for corn. Is that a problem? You know what the problem is? The problem is
animal agriculture, that’s the problem. Because it takes between 6 and 12 pounds
of plant protein to produce one pound of flesh; it takes 1,000 times more water
to produce flesh than it does to produce potatoes or wheat.
We feed enough grain every day to animals in this country that we are going to
slaughter that we could give two loaves of bread to every human being on the
planet. You know what? I don’t care if you don’t care at all about animals – if
you care about human animals, that’s got to be resonating somewhere that this is
not right, this is not good. Because we are selfish, and we eat animal products,
we are condemning a substantial part of the world’s population to starvation –
and that is just wrong. And I think that we really do need to see. That’s one of
the reasons why I have problems with these sexist campaigns – as far as I’m
concerned, speciesism is a lot like sexism. And I really think that as long as
we’re treating women like pieces of meat, we’re going to treat meat like pieces
of meat.
I used to work with PETA. As a matter of fact, I was like their first regular
lawyer. And I met the PETA people in the early 1980s and worked with them. And
there were two things that basically caused an end to that relationship. One was
the killing of healthy animals at the Aspen Hill Sanctuary which occurred. The
fact that PETA kills animals at its Norfolk facility, apparently, from what I’ve
been reading, is really no surprise – it’s been going on for a long time. There
was that, and there was the issue of “I’d rather go naked than wear fur”. It
never made sense to me. I never understood why we want to eroticize the fur
issue. Sexism is a problem. And really, the relationships between pornography
and meat-eating are very, very close. And so I think that we really need to be
thinking about that. But as far as the food issue was concerned, it ain’t
ethanol we’ve got to be worried about. It’s the fact that China and India are
increasing their meat production, their meat demand by zillions of percentage
points. And that we all eat this stuff. And that rich Western nations are
condemning a lot of people in the world to starvation because of selfishness. I
think this raises very, very important issues. I think it’s all part of the same
puzzle. I think it’s all part of the same problem. I think ahimsa non-violence
is the answer to all of it.
And now I am done, and I would be happy to answer your questions.
[applause]
Ted Barnett: Before we take any questions, I would like to make a presentation.
At Gary’s request, we have our honorarium made out to Peaceful Prairie
Sanctuary.
Gary: Thank you, thank you very much.
[applause]
Lois Baum: I encourage everyone to look at Gary’s website. It’s at.
It’s got everything he just talked about.
Gary: What I have on that website is four video presentations: theory of animal
rights, welfare, animals as property, and animal law. It’s got it in English,
French, German, Portuguese, and Spanish. And I’ve also got blog essays, which
are basically why animal welfare is equivalent to, like, vampires or something
[laughter] – you know, criticism of animal welfare. And then I’ve got an FAQ
section in which I answer questions like, ‘What about abortion?’, ‘What about
insects?’, ‘What about plants?’, – they’re canned answers which you can use
when you’re talking to people, in terms of answering the sorts of questions that
come up. Like ‘Who would you save if you were in a burning house?’, like that
sort of thing. Because we all get that sort of stuff, like ‘Well, if you were
walking by the house, who would you save – the human or the animal in the
burning house?’. The answer is, I would try to save both. But let’s assume that
I can only save one, and I chose to save the human. What does that tell me about
whether it’s okay to exploit the animal? It doesn’t tell me anything about it,
any more than it does if every time I go up inside the burning house I see a
young person and an old person who’s 115 who I know is going to be dead. If I
would only save the young person simply on the basis that the person’s got his
or her whole life ahead of him or her, that doesn’t mean it’s all right to use
elderly people in circuses, rodeos, zoos, or [inaudible] [laughter]. But it’s
those sorts of responses.
Chris Hirschler: I was listening to a lecture by Carl Cohen...
Gary: Carl Cohen? Wow.
Chris Hirschler: ...about why animals don’t have rights. He spent most of the
lecture talking about research and the need for animals in research so that we
can get immunizations... Would you see the point in even conceding something
like that?
Gary: Nope.
Male: ...because the average omnivore might say, “That’s so important to have
this immunization, and they’re only 1% of all animals killed”.
Gary: You ask a very, very good question. And in the book that I have coming
out, I have a chapter on experimentation. I make the point that it’s interesting
to look at the movement in Britain and America in the 19th and 20th centuries;
there’s a real focus on vivisection. And we all claim to agree with the
principle of “unnecessary suffering” – that it’s wrong to inflict unnecessary
suffering on animals. Now, we could have an interesting philosophical
discussion, which we don’t have time to have, but we could have an interesting
discussion on what “necessity” means. But whatever it means, it’s got to mean as
a minimal matter that it’s wrong to inflict suffering or death for reasons of
pleasure, amusement, or convenience. And yet, 99.9% of our animal use can only
be justified by human amusement, pleasure, or convenience – mainly our eating of
animals, our use of animals for entertainment, our use of animals for sport
hunting, etc. The only use of animals which is not transparently frivolous
(although I don’t agree with it) is the use of animals to cure important human
illnesses. I don’t agree with it – I want to make it clear, I don’t agree with
it. I just think you need a more complicated sort of analysis on that, because
there is a situation where people really do perceive there to be a conflict
situation. There isn’t a conflict situation in any of these other situations.
So it’s interesting, and I think a lot of it has to do with the fact that you
have this very, very weird sort of focus on vivisection. And I think that has to
do with the fact that animal people have historically not wanted to be vegans.
So it’s easy to say, “Hey, I’m an anti-vivisectionist”, and so you have all
these weird situations in England in the 19th century, where they have these
really sometimes violent demonstrations against vivisection in London – and then
they all go out and they eat meat.
And I think a lot of that has to do with the fact that we don’t know people who
do vivisection. I know a lot of vivisectors, because I work in a university. But
if you don’t work in a university, you don’t meet a whole lot of vivisectors –
it’s not like there are vivisectors crawling all over the place [laughter]. But
we’re not confronted with them socially, and they’re easy for us to make
abstract enemies out of.
What I always tell people is this: I always focus on the eating issue, and I
will talk about vivisection – but I always try to steer the discussion over to
eating. But when people want to discuss it, I discuss it. And I always say,
“Look, let’s take animals out of the equation. Let’s imagine you could find a
cure for cancer by using mentally disabled people – would that be okay?” And
sometimes you’re at a party and you’re talking to some university people and
they get a few drinks in them, and they’ll say, “Well I don’t really think that
that’s a bad thing”. But by and large, people don’t say that that’s a good
thing. As a matter of fact, most people think it’s really monstrous. Most of us
think, or many of us think, that not only is that wrong, but we have a special
obligation to more vulnerable humans. I think we miss the fact that it doesn’t
capture the morality. It doesn’t capture the moral view that many of us have,
that we feel we have special obligations to really vulnerable humans. And I
think we have really special obligations to non-humans, in part because of their
vulnerability. But I think you’ve really got to put the question to that person:
“Would you use a mentally disabled person? Why is it that you think it’s all
right to use animals?”. And they’ll give you some sort of nonsense, “Well, you
know, they can’t reason, they can’t think”. Well, I think that’s wrong. I think
that as a matter of evolutionary theory that’s probably wrong. One of the things
Darwin said, whether he was right or wrong about it, one of the things that
Charles Darwin said is that the distinction between humans and other animals is
a distinction of degree and not kind. It’s a quantitative distinction, not a
qualitative distinction. As a matter of fact, Darwin rejected the use of the
expression “higher” and “lower” animals.
And so for Carl Cohen, what is it, Carl, that makes it OK to use them? I know
Cohen’s views, and Cohen’s views are basically that animals can’t act in morally
reciprocal ways. It’s sort of a Kantian argument, that they can’t engage in
moral reciprocity. And the answer is: lots of humans can’t do that either. Lots
of humans are incapable of acting in morally reciprocal ways, does that mean
it’s okay to use them as forced organ donors?
Adam Hayes: Do you have any hope for...
Gary: No, I have no hope, period. [laughter]
Adam Hayes: Under capitalism, I mean how close do you think [inaudible]–
Gary: Are you a communist? [laughter]. Do you realize, do you understand that
there’s a Republican sitting in front of you? That you’re within striking
distance? [laughter] The question is, do I have any hope of the situation
improving, and of our achieving abolition under capitalism? That’s an
interesting and complicated question.
I think in theory it would be – just as you could get rid of chattel slavery...
We have a capitalist society, we don’t have chattel slavery. We exploit people
in the Marxist sense of exploitation, we alienate people from the value that
they... When you work, you get paid only a portion of what your labor is worth.
I, the capitalist, take the rest of it. And I appropriate, and I take, and I
alienate you from your labor. So that exists, but we don’t have chattel slavery.
It is in theory possible that we could eliminate the chattel slavery of
non-humans in our society, but I do think you have hit upon an important thing.
We need to be more critical of capitalism as an economic system. Capitalism
creates a lot of mischief, and there are good arguments that... For example, we
live in a society, one of the few, that doesn’t regard health care as a basic
right, so that people like [Dr.] Barnett can make lots of money [laughter].
Now we’re starting to change a little bit, and we’re starting to say, “Well, gee
you know maybe it would be good if we had a more socialized healthcare system.
But yet, the expression “socialized medicine” isn’t even used in political
discourse because it is so charged. But we really
do need to be looking at capitalism a bit more critically. However, having said
that, we got rid of chattel slavery in a capitalist economy – we could, in
theory, get rid of animal slavery in a capitalist economy.
Female: I’ve been re-thinking (I’ve been a member of PETA for a long time), but
I’ve been thinking of dropping out, because I’m upset about this latest
publicity stunt where they’re offering a million dollars to somebody who clones
meat in the laboratory. And I’m also upset that they wanted to kill the fighting
dogs [Michael Vick’s dogs], but they were opposed by the Best Friends Animal
Sanctuary. Are you still a member of PETA?
Gary: No!
Female: OK, answer the other questions now. What do you think of the wacky
million dollar –
Gary: Well, I was asked to comment on that. And I said, first of all they’re not
risking any money, because the idea that there’s going to be commercially viable
quantities of in vitro meat by 2012 is ridiculous. Number two, the idea that an
animal rights organization – as far as I know there was no limit on the use of
animals, because you have to use animals in various ways to develop those
products, whether it’s the media you’re growing the cells in. And so animal use
is involved. And I just think that PETA has become a gimmick organization. It
stopped a long time ago being an animal organization. It’s got nothing to do
with the animals – it’s got to do with PETA. And it’s for the promotion of
PETA. PETA is one big publicity stunt after another.
As far as the Michael Vick situation was concerned, I actually wrote about that.
I think the Michael Vick thing has nothing to do with anything but racism. We’re
all sitting around saying Michael Vick... It’s like the OJ Simpson business: “My
God he’s married to a blond woman, this is frightening, they’re getting close”.
And I think the OJ Simpson thing was racist, I think the Michael Vick thing was
racist.
There’s no difference between sitting around watching fighting dogs, and sitting
around your barbecue pit and having hamburgers. I don’t think there’s any
difference whatsoever, except one’s a rich black guy doing it. So I just got
tired of that Michael Vick thing.
I was on a radio show, and I was asked whether I was in favor of killing those
dogs. And I said, “Absolutely not”. I said I’m a great believer in, for example,
what Cesar Millan can do and has done with pit bulls at his place in Los
Angeles. And
I called on PETA on the radio show. PETA is now like a multi – I think they’re
like a 60 million dollar – I think, I don’t remember what their... I do know
that HSUS, the organization which is so horribly concerned about suffering is
sitting on top of a quarter of a billion dollars. Do you know how much money
that is? A quarter of a billion dollars. There’s lots of suffering you could
stop with that money. And they take in 125 million dollars a year. And what I
did was, I said why doesn’t PETA use some of the ‘X’ millions of dollars they
have in helping those animals to overcome their aggressive tendencies? Because
you know what, that can be done. And that can be done without violence. Cesar
Millan does not use violence. At least as far as I’m aware, and what I’ve read
and seen of him, he doesn’t use violence.
Female: Is this the trainer Oprah Winfrey got to teach her dog not to do
something?
Gary: I don’t know. He’s the dog whisperer. He’s an interesting guy.
Female: This is my approach: What about teaching people to become vegans because
of health issues? I’m a nurse, and I belong to a group of people, and we are on
a diet called the Hallelujah diet. You’re familiar with it?
Gary: No.
Female: [Unintelligible]... George Malkmus. It’s a vegan diet. We do 85% raw
most of the time, we’re juicing a lot. And he has books and a lot of information
teaching people that meat, chicken, fish, dairy products, all that, eggs,
everything, are really a detriment to your health. And I found a lot of people
are very interested in listening to this, at least, and maybe trying it. And his
organization is getting humongous all over the world. He has a growing community
of people that are becoming vegans for health issues – and also because they
care about animals and don’t want them to suffer. And that’s my way of doing
things.
Gary: When I talk to people about this, I always mix the moral issue, the
environmental issue (and by environmental issue, I’m not an ecologist in the
sense that I do not believe that plants or ecosystems have any interests. I
think that only sentient beings have interests, and that we can only have
obligations to sentient beings). But I talk about the environmental issues
because I think that the resource allocation that is involved in the meat-based
or animal protein-based diet is horribly bad for the environment, but also it
has a bad effect on people.
And I talk about health. I talk about the fact that we’re all taking drugs for
high blood pressure or for cholesterol, for this, for that – when we could be
dealing with these issues in a natural way through eating healthy, whole foods.
I don’t know the person you’re talking about – but, as a matter of fact, Ted
Barnett has been talking about this stuff for a long time. There are people like
Joel Fuhrman, like T. Colin Campbell, and...
Ted Barnett: Caldwell Esselstyn?
Gary: Yes, yes. And there’s Milton Mills. So there have been a number of people
who have been talking about it. I think that’s important. The only issue I have
with really focusing on health issues exclusively is because the meat and animal
protein industry is a huge business. If you want to make the argument [for
veganism] rise and fall on the health argument, they’ve got more money. So
anytime we’re saying this is bad for you, they can come back and they can say
this is good for you. And they really brainwash people to the point where a
large part of the population still believes if they don’t eat animal protein,
their arms and legs are going to fall off [laughter] and they’ll go blind.
What’s really disconcerting is I have lot of younger people that I teach at
Rutgers. And I go to other universities and I talk, and I always have kids come
up to me and they say, “Are you a vegan?” And I say, “Yeah I’m a vegan”. And
they say, “How long have you been a vegan?”, and I say, “For 26 years”. And they
say, “How do you feel?” [laughter]. Right now? I feel really good.
And the really odd thing is that at the end of this month, I’m going to be 54
years old, and I have more energy than most of these 26-year-olds. And I think a
large part of it is that I don’t eat processed foods, I don’t eat any animal
protein. I eat mostly raw but some cooked. And I think that has a lot to do
with... I teach at a university, and I have people sneezing on me all the time.
I never get these flus. I don’t know, but I think it probably has to do with my
vegan diet. But I think we ought to talk about the health issue – but really,
the important issue to me is the moral issue, the non-violence point.
Female: I do that too, as well.
Gary: Yeah, and that’s fine. But I always say to people, “Look, let me be real
frank with you, just so you understand where I’m coming from. If it were
necessary to eat meat to live an optimally healthy life, I still wouldn’t do it,
because I’m much more concerned about a different aspect of my life. I’m much
more concerned about my moral life, in many ways, than I am [about my health].
To me, violence is a serious problem, and I think that’s really...
Ted Barnett: I think it’s important, though, to have examples of people who have
lived all their life as vegans...
Gary: Absolutely.
Ted Barnett: ...to be able to point to them and say, “Look, you know, you can do
this”. Because I think, as you said, there are people out there who think their
arms and legs are going to fall off if they don’t eat animal products, but God
forbid their refrigerator should not have a quart of milk in it someday –
everyone in the family is going to die. I think it’s important for people to
know that this experiment has been done, we can survive.
Gary: I always say that. And the thing I always find – and you probably get the
same thing – is people say “Where do you get your B12 from?” And I would say
“Look, what is this mystery? You get your B12 from meat, I get my B12 from
yeast. You’ve got to get it from somewhere, so the fact that you get it from one
food and I get it from another food doesn’t make the fact that if you give up
this source then there’s something deficient about what you’re going to do,
because it simply means you’ll get it from another plant source”.
Ted Knight: Yeah, a couple of things. I was wondering what your stance on the
Animal Liberation Front is, and coupled with that, the Animal Enterprise
Terrorism Act, and then coupled with that, the 2.3 million people who are
imprisoned right now in this country, being used as basically slaves for
corporations.
Gary: Well, I think it’s appalling, and you’re raising a whole bunch of
questions. Your name is Ted, right? Ted wants to know about my views on the
prison system in the United States, which involves millions of people who are
basically being imprisoned by now private corporations who are using them for
profit purposes. And he wanted to know how I feel about the Animal Liberation
Front, and he wanted to know how I feel about the Animal Enterprise Terrorism
Act.
As far as the prison system is concerned – as I said, I teach criminal law and
criminal procedure – I think it’s horrible what’s happening in this country, in
terms of the privatization of the criminal justice system, and the fact that we
have the Corrections Corporation of America, CCI and some other corporations who
basically – they are corporations that run prisons. Prisons are being run by
corporations which are using and exploiting prison labor. I think it’s horrible,
but I also think even before that, our criminal justice system was rotten. We’ve
always had two criminal justice systems – for the rich and the middle class, and
for the poor. Well the rich don’t have to worry about [inaudible]. But the
middle class, they have a criminal justice system, and then the poor people got
nothing – the criminal injustice system. And so even before it became
corporatised, I think there’s been problems with it.
As far as the Animal Enterprise Terrorism Act is concerned, I mean really, what
do you expect in a society which is as paranoid as this one is now? When you get
people from animal organizations going on “60 Minutes” saying, “I think it’s all
right to kill vivisectors”. I thought that was crazy, and I thought what he did
was hand the government an excuse for something like the Animal Enterprise
Terrorism Act.
What do I think about the Animal Liberation Front? I’m opposed to all violence.
People ask me “What happens if the building is completely empty? Is it all right
for us to fire bomb it?” Let me say this: first of all, I’m a lawyer. I can’t
tell people that it’s all right to break the law. But from a moral point of
view, there’s no such thing as fire bombing a building or burning down a
building in which you do no harm. There are animals that live in that building,
you burn down an empty building which is used for vivisection, you’re going to
kill a lot of animals. If you engage in illegal activities, often times what
happens with these liberations, or what can happen with these liberations, is
you encounter people who are working in these places – the security guards. This
sets up a confrontational, possibly violent, situation. I don’t believe in
violence.
Again, it’s a zero-sum game. You want to know how to efficiently use resources?
If we all spent time – if you took all of the time and the energy... You know, I
remember saying in 1985 when we had a big meeting of the large animal groups
that existed at that time, it was actually 1984. And the issue was whether or
not we were going to support the Animal Welfare Act of 1985, which I thought was
a very bad idea, I thought it was a very stupid piece of legislation for a lot
of different reasons. And I said “Look, if we take all of our money and we put
it into creative, non-violent, vegan education...” Had we done that in 1985,
then we’d be sitting here now in 2008, and we’d have, conservatively speaking, a
few hundred thousand more vegans than we have now. Because all of that money,
we’re talking about hundreds of millions perhaps billions of dollars... If we
put that into unequivocal, clear vegan campaigns, we’d have a political
movement, we’d have the beginning of a non-violent political movement for the
liberation of animals that was really going to do something.
Because let me tell you something, there is no context to this liberation stuff,
the comment that Chris made. We live in a society where 99.9% of people think
it’s all right to kill animals for the purpose of eating them, because they
taste good. When people go in and steal animals from laboratories or they burn
down buildings or they threaten vivisectors or they get into confrontations with
vivisectors in which there is physical violence – you’re attacking the one use
of animals that isn’t transparently frivolous. I don’t agree with it, I think
all vivisection is wrong, and I wouldn’t kill one mouse to find a cure for
cancer. No. But there’s no meaning, there’s no social context in which those
acts can have any sort of meaning. All it does is make us look like a group of
lunatics, because we live in a society which people think it’s all right to have
rodeos, in which people think it’s all right to have circuses and zoos and eat
hamburgers and hotdogs and all sorts of things which can’t be described as
anything but frivolous. So I don’t see how the Animal Liberation Front... What
really bothers me is that a lot of these Animal Liberation Front people aren’t
even vegans. You know they’re not even vegans.
I’m sorry, I’m not directing this at you, but I think the Animal Liberation
Front has a lot to do with sophomoric, very immature thinking. And I think it’s
a lot of bravado and a lot of “Hey, wow, this is cool, we’re Che Guevara”. And I
think it’s [unintelligible]. And I think it’s counterproductive, I don’t think
there’s any social context for it. And most importantly, I am totally opposed to
violence. I think violence is wrong. I don’t think there’s any such thing as
that sort of activity which doesn’t put you and non-human lives at risk. Other
questions?
[applause]
[Some house-keeping matters discussed, regarding when the meeting hall needs to
be shut for the night]
Female: I have a couple of very quick questions. Number one: Vitamin D. Is D3
better than D2? And D2 is only like two-thirds of what D3 is.
Gary: So you just take more D2, I’m aware of that. D3 is cholecalciferol. She’s
saying that D3 is better than D2. D2 is ergocalciferol. Ergocalciferol is
plant-based D2, cholecalciferol D3 is animal-based. It’s generally from sheep
wool. And anybody who tells you that that doesn’t involve suffering or death is
lying to you, because D3 is made from animals that are being slaughtered. And
the whole process of shearing animals, if you’ve ever seen it, is really quite
brutal. So the idea that it’s really just fun, that the sheep are sort of lining
up and saying, “No, shear me next” is nonsense.
But I have heard or I have read that people absorb D3 better than they absorb
D2, so you just take more D2. I buy vegan D2, and I take more of
it. I don’t have any D3 deficiency. None of my vegan friends have vitamin D
deficiencies. If you don’t have enough D, you just take more D2.
Female: [Some unintelligible comments about D2 / D3] The other quick question
is, could much of the vivisection be done by virtual reality?
Gary: The problem is there are a lot of things we could do, like using
mathematical models, using computer models, using alternatives to animal
experiments... Actually we’re using more animals for vivisection than we used
to, because we’re doing all this genome stuff and genetic engineering stuff, and
because we all want to live forever we’re doing this stem cell stuff. So we’re
actually increasing the numbers of animals that we’re using. But could we have
alternatives? And the answer is, yeah. The problem is the alternatives are not
going to keep in pace with the demand for new uses of animals, that’s the
problem.
Female: Because there are so many new machines out there. Technologically.
Gary: Yeah. Absolutely.
Greg Baum: You know, actually, Gary, animal research has nothing to do with
cures or anything like that, it has to do with money.
Gary: Sure it does. Sure.
Male: If the money incentive was taken out of animal research, it would probably
come to an end.
Gary: There used to be a guy, I think he’s now passed away, named
Hans Ruesch. And he took
the position that we haven’t learned anything from the use of animals in
biomedical research. Now I don’t know if that’s true or not. And you know what,
to me it’s irrelevant. And so I don’t want to get into an argument with somebody
about – because the Rueschians get really upset when you say, “Well, we may have
learned something from the use of animals.” And they say, “Well, how could that
be?”. And the answer is, maybe we have, and maybe we haven’t.
We can talk about to what degree the profit incentive has to do with it. And I’m
sure that you’re right. But in order to make the point to people, I don’t think
we need to convince them of that. Because in a sense, you can make the same comment
about a lot of practices in our society, and then you start getting into the
question Adam asked before, namely, ‘What are the restrictions on moral change
in a capitalist society?’. And then you really are in an abstract space, and
you’re no longer talking about the exploitation of non-humans, or how the
exploitation of non-humans relates to exploitation of humans. You’re now talking
about whether or not we should overthrow the capitalist system. I’m not sure if
that’s a discussion that we really are ready to have in our society. Because, if
anything, if you look at our current political campaign, even the candidates on
the Democratic side are proposing quite conservative positions. If Barack Obama
is the nominee for the Democratic Party, I will vote for him, because I am a
Democrat and I would rather see anybody other than John McCain as president. But
on the other hand, to analogize Barack Obama to Martin Luther King is, in my
judgement, not an appropriate analogy, because look at their positions – they’re
really very different people in terms of what their positions are. And Barack
Obama is considerably more conservative.
So I think in a sense we’re not really ready to have a discussion in this
society about whether we should dramatically change our economic system. I think
there are very good arguments for why we ought to, but I’m not sure we need to
get there in order to make the point that we want to make. But I certainly don’t
disagree with what you’ve said.
Greg Baum: The second part of what I wanted to say was: I believe you would
find very little difference in results simply because everything goes through
clinical trials in the end, anyway.
Gary: Sure. Sure. The bottom line is, how ever many animals you use it on you’ve
got to try it on somebody first. I think animal research, as a matter of science
(putting aside the moral issues), is a barbaric, primitive way of finding
answers to problems. And it’s so imprecise. One of the things I talk about in
one of the chapters in this book is all the problems with the use of animals in
experiments, just from a scientific point of view.
For example, if you use different testing methods, you get different results. If
you use different species, you get different results. It’s such an imprecise,
it’s such a sloppy, such an inexact, such a primitive way of getting data that
one wonders why intelligent people would be attracted to it. It sort of becomes
circular, because if you had a different incentive structure, economically,
people would be responding to that. I’ve known a lot of vivisectors in my
lifetime, and I think some of them actually do really struggle with this. Some
of the ones I’ve met are clearly mentally problematic individuals who enjoy
inflicting pain. But I also think there are a lot of people who really think
this is the right way to do science, and they struggle with it. I tell you
something: I once had a very interesting conversation with somebody who worked
in a drug company, doing animal tests. And he was a vegan for moral reasons. And
when I asked him about this, he said, “I do animal testing because I believe
it’s necessary, and I really think scientifically it’s justified. I don’t eat
animals, I don’t eat meat or dairy, because I don’t think that’s necessary”.
So it’s complicated.
Male: I’m definitely [unintelligible] animal rights vs. animal welfare, and I’m
still [unintelligible] getting through all that. And I guess where I’m
struggling sometimes is the kind of deal with the immediacy of suffering, the
primacy of what’s going on today.
Gary: What’s your name?
Male: John.
Gary: John, how is the welfare regulation – let’s look at the cage-free eggs –
Male: [unintelligible, about battery-cage hens] And if I was to speak to them and
say, “You and your future generations are going to have to suffer in that small
cage, but if I go for a welfare reform and give you a little bit bigger cage,
you’ll suffer less, but that means more of your brethren... I mean the industry
is going to grow over generations. So you’re going to just have to suffer, and
I’ll try to hopefully deplete the industry through vegan abolition”. But it’s
tough, because I had to face them and say, “You’re going to have to put up with
sacrifices.”
Gary: This is an argument I had with people when I was in Europe recently, when
we were talking about the directive to get rid of battery cages by the European
Union. It’s not clear to me that there’s a hell of a lot of difference between a
battery cage and one with a bit more straw. And it’s not clear to me that
there’s a difference between taking them out of that cage and sticking them into
a cage where there’s thirty thousand of them crawling over each other and
urinating on each other, crushing each other. It is not clear to me at all. And
I think I would get to the point where I would be anthropomorphising if I said,
“I looked at those chickens and those chickens are telling me, [said in a
squeaky voice] ‘I’d rather be in a large cage’ ”. [laughter] And I think
it does become anthropomorphic.
And also what you’re doing in the meantime is this: by encouraging people to
believe that eating cage-free eggs is a morally acceptable solution, you’re
actually increasing net suffering. Because even if you’re reducing suffering a
bit more, you’re causing the demand to go up because people feel better about
eating these products, you may be increasing net suffering.
Again, I think it’s a zero-sum game: say you’ve got two hours tomorrow.
You’re either going to have to spend those two hours trying to talk to people
on campus about eating cage-free eggs and getting the dining facility to do
cage-free eggs only. Or, you could spend those two hours talking to people about
veganism. And it’s zero-sum. Every bit of time you’re spending on regulation is
time you’re not spending on vegan education. And so that’s the choice you’ve got
to make. But I suggest to you there’s a trade-off there.
I understand the whole thing about the immediacy of suffering. What I suggest to
you is that welfarism is not doing anything to deal with that immediacy of
suffering – except make people feel better about it. Go home tonight please, log
on to Peaceful Prairie Sanctuary and look at their video called, “The
Faces of ‘Free Range’ Farming”. Look at that video and ask yourself whether
we’re doing those birds any favor. Just ask yourself that question.
Harold Brown: I’ve got a quick question. I get this all the time from welfarists:
we can’t spend a lot of money on education because there isn’t a quantifiable
return. That’s why we don’t spend money on it.
Gary: Well the reality is that welfarists do not want to spend money on
education, because they would rather have meaningless campaigns that they can
win – like the foie gras ban in California – and then go out and do fundraising.
Or the gestation crate thing in Florida, like two producers in Florida were even
using gestation crates, both of them were going out of business [unintelligible]
and huge subsidies from the state, or eligible for huge subsidies from the
state. And then basically what’s happening is that you get these large
organizations going after meaningless campaigns, so they can fundraise.
Every time I go to my mailbox there’s a zillion pieces of mail – everybody
taking credit for the same thing saying, “Activism. Do you know what activism
is? Sitting down and writing a check for our organization. That’s what activism
is.” They have really turned us into a bunch of check-writers. And that has
become activism, and that’s nonsense.
So when they say education isn’t quantifiable – you know what? Those of us who
are in education can tell you, that is nonsense. You may not be able to quantify
it to the same degree that you can quantify the welfarist victory. But in the
welfarists’ victory, you can multiply seven billion times zero – and it’s still
zero.
Any other questions?
Ted Barnett: [Unintelligible] The difference between abolition and welfare – the
conversation that could take place in a location. For example, welfare
could take place within the grocery store. So you can try to influence people at
the grocery store, that’s where the conversation takes place. Where does the
conversation about abolition take place?
Gary: You know what? It takes place at the grocery store. For example when I go
to Whole Foods, because I shop at Whole Foods, it’s where I get my vegetables.
Not all the time, but sometimes. And I go there and I always wear one of my
“Vegan Freak” t-shirts. Vegan Freak is a
website and a podcast. And I
always wear my Vegan Freak t-shirt, because somebody always asks me, “What’s
that mean?” And I’ll talk to anybody, and so I can have conversations with
people, I’ve had a lot of conversations with people [about abolition].
Let me tell you something about Whole Foods. When I first started shopping at
Whole Foods, it was called Fresh Fields. That’s what it was called about twelve
years ago or whatever. They didn’t sell any fresh corpses. They sold meat
products, but they didn’t sell meat and fish and fresh chicken. They didn’t sell
that sort of stuff. They didn’t have a salad bar with all this meat stuff. Now
they all have big meat counters, and they have big signs that say, “Humanely
raised”. And last year I was walking through Whole Foods, and I see a young guy
who was working in the grocery section and they moved him over to the fish
section. And I saw the guy and I said, “Oh my God, they got you selling these
corpses!” And he said, “Yeah, I know, but PETA gave us an award”. This is the
sort of thing [unintelligible].
And Peter Singer signed a letter, you know, a “Dear John” letter. “Dear John, we
love you, thank you for your compassion and attitude towards animals.” And then
it was signed by Peter Singer, with the support of PETA, Vegan Outreach, and a
lot of other organizations. What the hell? I mean, you want to talk about
confusion – this is confusing people. Because you know what? If you weren’t into
this, if you weren’t sitting here tonight, if you’re like a “normal” human being
who doesn’t know anything about this stuff, and you’re concerned – you might
say, “Well, Peter Singer and PETA, they say that this stuff is good – so why the
hell are you on me for? I’m doing the right thing, I’m going to Whole Foods, I’m
buying my compassionately raised corpses and my cage-free eggs. What the hell
are you on me for?”
So I do disagree with you. The discussion about abolition happens everywhere. I
have had a debate in vet offices. Vet offices are great places. I hate going to
the vet’s, because my animals don’t like it. But it’s a great place to have a
discussion, because you’re sitting there with other people who are concerned
about their sick dogs and cats and it’s so easy to start a conversation in a
situation like that. You can say, “What’s wrong with your cat?” And then you
say, “Oh, wow, that’s really horrible. Well, you know, my dog’s got...”. And
then we have a discussion, and I say, “Isn’t it interesting how we’re sitting
here with our dogs and our cats and we’re going to go home and we’re going to
stick forks into other animals”. I do that all the time, all the time.
[laughter, applause].
The abolitionist discussion can happen anywhere you want it to happen. I’m in
line in Whole Foods, and I’ve got my Vegan Freak t-shirt on, and somebody says,
“What does that mean?” As long as you’ve got enough stuff in your cart, or at
least as long as the person in front of you has got a lot of stuff in his or her
cart, it’s going to take a while – I got you. [laughter]. And, it’s just a
matter of time, you could have that discussion with people.
Ted Barnett: [Unintelligible] talking to the manager of a grocery store, right?
It’s not a discussion with the customers.
Gary: The manager of the grocery store is basically a business person.
Ted Barnett: When you’re talking about welfarism, it’s something for them to
sell. But telling them not to sell something, that’s a whole... That’s when you
get into the capitalism part of it. That’s the whole problem – you’re telling
someone not to...
Gary: But wait a minute Ted, the problem isn’t the seller, the problem is the
customer who demands it. These people would be selling lawn chairs, if that’s
what the demand was for. Capitalists are indifferent to what the demand is for.
The capitalists [unintelligible] if the demand shifted, the investment of
capital would shift. So capitalists are indifferent to what they’re selling.
When people talk about animal exploiters, when they talk about animal
industries, as if those are the evil people – yeah, those people are doing bad
stuff. But why are they doing bad stuff? Because we demand it. We want it. If we
didn’t buy it, if we didn’t demand it, they wouldn’t be putting their capital
into it – they’d be putting it into lawn chairs, or they’d be putting it into
something else. They’d be investing their money in widgets. They wouldn’t be
investing their money in the corpses and cow pus – they only do that because
that’s what we demand.
What I really love is when I talk to animal people who aren’t vegans and they’re
busy talking about how evil the exploiters are. And I want to say like, “Let’s
have a little self-reflection here. You’re the one who’s demanding this stuff,
you’re the one who’s buying this stuff”. I always tell people when they come to
me, “Are you vegan?” When we go out, I get the question: “What about these
dogs?” And I always say, “My dogs are vegans”. I have dogs that are like eight
million years old, I believe it’s because they’re vegan. I have a dog that is 18
years old. These dogs are vegan, and they’re very healthy animals. And when they
get illnesses, they come through them, I think, because they’re not eating
rotting flesh.
However, then I get: “Well, what about cats?” I don’t know from cats, because I
have never lived with a cat. And it’s not a good idea when you have a lot of
dogs, because the dogs chase them around a lot. A lot of cats don’t think of
that as fun [laughter]. And there are vegan catfoods, as I understand. But then
people will say, “Well I wouldn’t give the cat anything but the vegan cat food,
and the cat went down to four ounces” [laughter] And then I always ask the
person, “Are you vegan?”. I would say that 70% of the time, they say no. And I
say “What the hell are you talking about the cat for? Why aren’t you talking
about you?” [laughter]. I wish I lived in a world where the only issue was,
“What are we going to do about the cats?”. That would be great, that would be
great.
But I live in a world in which the animal rights movement consists largely of
people who are vegetarian and not vegan – which to me is like saying, “I
eat meat from a small cow, but not from a big cow”. It makes no sense. Because
there’s no difference between flesh and other animal products. As far as dairy
and eggs are concerned, frankly, if you’re just concerned about suffering,
there’s probably more suffering in a glass of milk or in an egg than there is in
a piece of meat. The dairy animals and egg animals are kept alive longer,
they’re treated worse, and they all end up in the same slaughterhouse anyway.
The idea that there is some distinction we can make between flesh and other
animal products is crazy.
Again, you can do this in a non-confrontational way. When an animal person says
to me “I’m really sick of those animal exploiters. They’re evil people.”, I say,
“Are you a vegan?” And they say, “No.” And I say, “Well, who’s the animal
exploiter? These people are indifferent. They’re just there to satisfy demand.
They exist because you exist. They exist because you’re making the demand.” If
you stop making the demand, they take their capital, and they put it into
something that gives them a greater return... like prison corporations
[laughter].
Any other questions? Let’s call it a night. Thank you very much.
[applause]
Writing an Example Application using the SAP S/4HANA Cloud SDK for JavaScript (Beta)
As you may have seen in our announcement blog post, we have released the SAP S/4HANA Cloud SDK for JavaScript (beta)! Just in time for TechEd Las Vegas, we bring the benefits of the SAP S/4HANA Cloud SDK to anyone developing in JavaScript or TypeScript.
This blog post is part of a bigger series about extending SAP S/4HANA using the SAP S/4HANA Cloud SDK. You can find the full series here.
Goal of this blog post
The goal of this blog post is to enable you to build your own SAP S/4HANA side-by-side extensions application in JavaScript using the SAP S/4HANA Cloud SDK for JavaScript. In this tutorial, we will cover the following steps:
- Prerequisites
- Downloading the JS SDK
- Setting up an application using Express.js
- Installing the SDK to your application
- Building an example service that retrieves data from the SAP S/4HANA system
- Deploying your application to SAP Cloud Platform Cloud Foundry
Please note: the SAP S/4HANA Cloud SDK is available as beta. It is not meant for productive use, and SAP does not make any guarantee about releasing a productive version. Any API and functionality may change without notice.
Prerequisites
Access to the beta requires you to be an SAP customer and sign a Test and Evaluation Agreement (TEA). We have described the process in the announcement blog post.
In order to complete this tutorial, you will need to install Node.js on your machine. If you have not used Node.js before, you can either grab the latest executable from the official Node.js website, or install it using your package manager of choice.
Furthermore, you should have access to SAP Cloud Platform Cloud Foundry and an SAP S/4HANA Cloud system. If you do not have access to Cloud Foundry, you can create a trial account. Additionally, we recommend using the command line tools for Cloud Foundry (cf CLI). Installation instructions can be found in the Cloud Foundry documentation.
In case you don’t have access to an SAP S/4HANA Cloud system, for the scope of this tutorial you can also use our mock server, which provides an exemplary OData business partner service.
One final disclaimer: While the SDK is fully compatible with pure JavaScript, it is written in and designed for TypeScript. TypeScript is a superset of ECMAScript 6, adding an optional but powerful type system to JavaScript. If you are not familiar with TypeScript, we highly recommend checking it out!
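To give a flavor of what that type system adds, here is a small illustration of our own (this snippet is not part of the SDK): interfaces describe object shapes, and union types constrain values, so the compiler catches mistakes before the code runs.

```typescript
// Illustrative only: union types and interfaces in TypeScript.
type HttpMethod = 'GET' | 'POST' | 'PUT' | 'DELETE';

interface RouteInfo {
  method: HttpMethod; // only the four values above are allowed
  path: string;
}

function describeRoute(route: RouteInfo): string {
  return route.method + ' ' + route.path;
}

console.log(describeRoute({ method: 'GET', path: '/' })); // prints "GET /"
// describeRoute({ method: 'FETCH', path: '/' }) would be a compile-time error.
```

Annotations are optional: plain JavaScript is valid TypeScript, so types can be adopted incrementally.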
Download the SDK
Since the SDK is a beta version, you currently cannot get it from the npm registry. Instead, after signing the TEA as described in the separate blog post, visit SAP’s Service Marketplace and download the SDK there. Save the SDK to a directory of your choice. The downloaded file will be a .tgz archive, so go ahead and unzip it. That’s all for downloading! We will come back to the SDK after setting up our application.
Setting up an Application using Express.js
Now we will set up our application. We use Express.js to build a backend application exposing RESTful APIs. In this section, we will set up the plain application as you would with any Node.js application, without any integration with SAP S/4HANA yet.
Start by creating a directory example-app. In this directory, create a package.json file with the following content:

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "description": "Example application using the SAP S/4HANA Cloud SDK for JavaScript.",
  "scripts": {
    "start": "ts-node src/server.ts"
  },
  "dependencies": {
    "express": "^4.16.3"
  },
  "devDependencies": {
    "ts-node": "^7.0.1",
    "typescript": "^3.0.3"
  }
}
```
The package.json acts as a project descriptor used by npm, the Node.js package manager.

Proceed by entering the example-app directory in your terminal and calling npm install. This will install the necessary dependencies for our application.
Additionally, since this is a TypeScript project, create another file called tsconfig.json with the following content:

```json
{
  "typeAcquisition": {
    "enable": true
  }
}
```
Now create a directory src inside your example-app and add two files. First, server.ts, which will contain the logic for starting the web server:

```typescript
import app from './application';

const port = 8080;
app.listen(port, () => {
  console.log('Express server listening on port ' + port);
});
```
Secondly, application.ts, which contains the logic and routes of our application:

```typescript
import * as bodyParser from 'body-parser';
import * as express from 'express';
import { Request, Response } from 'express';

class App {
  public app: express.Application;

  constructor() {
    this.app = express();
    this.config();
    this.routes();
  }

  private config(): void {
    this.app.use(bodyParser.json());
    this.app.use(bodyParser.urlencoded({ extended: false }));
  }

  private routes(): void {
    const router = express.Router();
    router.get('/', (req: Request, res: Response) => {
      res.status(200).send('Hello, World!');
    });
    this.app.use('/', router);
  }
}

export default new App().app;
```
The most important part in this file is the following, where we define our first API in the
routes function:
router.get('/', (req: Request, res: Response) => {
  res.status(200).send('Hello, World!');
});
This instructs the router to respond to HTTP GET requests (
router.get) on the root path of the application (
'/', the first parameter) by calling the function provided as the second parameter. In this function, we simply send a response with status code 200 and
'Hello, World!' as body.
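Conceptually, the route is just a mapping from a method and path to a response. The following self-contained sketch (purely illustrative, not part of the app) models the handler above as a pure function, which makes the mapping explicit:

```typescript
// Illustrative only: the Express route above, modeled as a pure function
// from (method, url) to a response. In the real app, Express performs this routing.
function helloHandler(method: string, url: string): { status: number; body: string } {
  if (method === 'GET' && url === '/') {
    return { status: 200, body: 'Hello, World!' };
  }
  return { status: 404, body: '' };
}

console.log(helloHandler('GET', '/').body); // Hello, World!
```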
In order to start your server, return to your terminal and execute
npm start. This will in turn execute the command we have defined for
start in the
scripts section of our
package.json.
"scripts": { "start": "ts-node src/server.ts" }
After calling
npm start, you should see the following output in your terminal:
Express server listening on port 8080. Now you can visit http://localhost:8080 in your browser, and you will be greeted with Hello, World! in response!
To stop the server, press Ctrl+C or Cmd+C.
Adding the SDK to your project
Now it’s finally time to add the SDK to the project. In a previous step, we downloaded the SDK as a
.tgz archive and unpacked it, which gave us a directory called
s4sdk. Now we need to copy this directory into the
example-app directory. Then, we can add the SDK as a dependency to our application by adding two entries to the
dependencies section of our
package.json so that this section looks as follows (don’t forget to add a comma behind the second line):
"dependencies": {
  "express": "^4.16.3",
  "s4sdk-core": "file:s4sdk/s4sdk-core",
  "s4sdk-vdm": "file:s4sdk/s4sdk-vdm"
}
The
file: prefix instructs npm to install the dependencies from your machine instead of fetching them from the npm registry.
Your project directory should now contain the s4sdk directory alongside your src directory, package.json and tsconfig.json.
Call
npm install again to install the SDK to your project. In a development environment such as Visual Studio Code, this will also make available the types of the SAP S/4HANA Cloud SDK for code completion.
Now that we can use the SDK, let’s write an API endpoint that fetches business partners from your SAP S/4HANA Cloud system.
To do so, we will add another route in the
routes function in
application.ts.
import { BusinessPartner } from 's4sdk-vdm/business-partner-service';

router.get('/businesspartners', (req: Request, res: Response) => {
  BusinessPartner.requestBuilder()
    .getAll()
    .top(100)
    .execute()
    .then((businessPartners: BusinessPartner[]) => {
      res.status(200).send(businessPartners);
    });
});
When using a modern editor like Visual Studio Code, the correct imports should be automatically suggested to you. If this fails for whatever reason, add the following line to the import declarations:
import { BusinessPartner } from 's4sdk-vdm/business-partner-service';
Let’s go through the function step by step:
First, we define a new route that matches on GET requests on
/businesspartners. Then, we use the SDK’s Virtual Data Model (VDM) to retrieve business partners from our SAP S/4HANA Cloud system. The VDM, originally introduced in the SAP S/4HANA Cloud SDK for Java, allows you to query OData services exposed by your SAP S/4HANA Cloud system in a type-safe, fluent and explorative way. More details can be found in this blog post introducing the VDM in the SDK for Java.
We start by creating a request builder on our desired entity, in this case by calling
BusinessPartner.requestBuilder(). This in turn will offer you a function for each operation that you can perform on the respective entity. In the case of business partners, the possible operations are
getAll(),
getByKey(),
create() and
update(). We choose
getAll(), since we want to retrieve a list of business partners. Now we can choose from the variety of options to further refine our query, such as
select() and
filter(). However, for now we keep it simple by only calling
top(100) to restrict the query to the first 100 results. Finally, we call
execute(). The SDK takes care of the low-level infrastructure code of the request.
By default, any call to an SAP S/4HANA system performed using the VDM will be done asynchronously and returns a promise. We handle the promise by calling
then() and providing it with a function that handles the query result. As you can see from the signature of the callback function, promises returned by the VDM are automatically typed with the respective entity, in this case we get an array of business partners (
BusinessPartner[]). Now we can simply send a response with status code 200 and the business partners retrieved from the SAP S/4HANA system as response body.
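Note that the route above handles only the success case. The following self-contained sketch (using a hypothetical stub in place of the real SDK types) shows the same then-based handling plus the catch branch you would typically add:

```typescript
// Hypothetical stub, not the real SDK: illustrates consuming a typed promise
// like the one returned by execute(), including error handling.
interface BusinessPartnerStub {
  firstName: string;
  lastName: string;
}

function fetchBusinessPartners(): Promise<BusinessPartnerStub[]> {
  // In the real application this would be BusinessPartner.requestBuilder()...execute()
  return Promise.resolve([{ firstName: 'Ada', lastName: 'Lovelace' }]);
}

fetchBusinessPartners()
  .then(partners => {
    // partners is typed, so fields like firstName autocomplete safely
    console.log(partners.map(p => p.firstName + ' ' + p.lastName).join(', '));
  })
  .catch(err => {
    // In the Express route you would send e.g. a 500 response here
    console.error('Query failed:', err);
  });
```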
Running the Application Locally
Before deploying the application to Cloud Foundry, let’s test the integration locally first. To do so, we need to supply our destination configuration to designate the SAP S/4HANA system to connect to. This can be achieved by running the following command in your command line, or the equivalent for setting environment variables in your shell (the below is for the Windows command prompt):
set destinations=[{"name":"ErpQueryEndpoint", "url": "", "username": "myuser", "password":"mypw"}]
Make sure to replace the values for url, username and password with the respective values matching your SAP S/4HANA Cloud system.
If you restart the server using
npm start and navigate to http://localhost:8080/businesspartners, you should see a list of business partners retrieved from your SAP S/4HANA Cloud system.
Deploying the Application to Cloud Foundry
In order to deploy the application on Cloud Foundry, we need to provide a deployment descriptor in the form of a
manifest.yml file. Add this file to the root directory of your application.
---
applications:
- name: example-app
  memory: 256M
  random-route: true
  buildpacks:
    - nodejs_buildpack
  command: npm start
  env:
    destinations: >
      [
        {
          "name": "ErpQueryEndpoint",
          "url": "",
          "username": "<USERNAME>",
          "password": "<PASSWORD>"
        }
      ]
Pay attention to the
env section. Here, we provide the destination settings for the SAP S/4HANA system we want to connect to. Simply substitute the value for each entry with the respective values for your SAP S/4HANA system.
Additionally, we need to perform one more addition to our app’s
package.json.
"engines": { "node": "10.14.1" }
This tells npm which version of Node.js to use as runtime environment. Omitting this from the
package.json leads to Cloud Foundry defaulting to an older version of Node.js. However, the VDM relies on some features only present in newer versions of Node.js. Additionally, as in any project, it is good practice to specify the version of the runtime environment to protect yourself from errors introduced by unwanted version changes down the line.
The resulting
package.json should look as follows:
{
  "name": "example-app",
  "version": "1.0.0",
  "description": "Example application using the SAP S/4HANA Cloud SDK for JavaScript.",
  "scripts": {
    "start": "ts-node src/server.ts"
  },
  "dependencies": {
    "express": "^4.16.3",
    "s4sdk-core": "file:s4sdk/s4sdk-core",
    "s4sdk-vdm": "file:s4sdk/s4sdk-vdm"
  },
  "devDependencies": {
    "ts-node": "^7.0.1",
    "typescript": "^3.0.3"
  },
  "engines": {
    "node": "10.14.1"
  }
}
Finally, you can push the application by executing
cf push on your command line in the root directory of the application. This uses the Cloud Foundry command line interface (CLI), whose installation is described in this blog post. The
cf CLI will automatically pick up the
manifest.yml. At the end of the deployment,
cf CLI will print the URL under which you can access your application. If you now visit the
/businesspartners route of your application at this URL, you should see a list of business partners that have been retrieved from your SAP S/4HANA system! This requires that the URL of the system you connect to is accessible from Cloud Foundry.
This concludes our tutorial.
Give us Feedback!
Are you excited about the SAP S/4HANA Cloud SDK for JavaScript? Are there features that you would love to see in the future? Or did you have problems completing the tutorial? In any case, we would love to hear your feedback in the comments to this blog post! In case of technical questions, you can also reach out to us on StackOverflow using the tags
s4sdk and
javascript.
Going even further
If you have completed the tutorial up to this point, you are equipped with the basics of extending your SAP S/4HANA Cloud system with a Node.js application. However, so far we have only explored a small part of the SDK's capabilities. While the SDK for JavaScript is a Beta release and, thus, only supports a subset of the features of the SAP S/4HANA Cloud SDK for Java, it already provides the same capabilities as the Java SDK's virtual data model for integrating with your SAP S/4HANA system!
Complex Queries using the VDM
Let’s take a look at a more complex query:
import { BusinessPartner, Customer } from 's4sdk-vdm/business-partner-service';
import { and, or } from 's4sdk-core';

BusinessPartner.requestBuilder()
  .getAll()
  .select(
    BusinessPartner.FIRST_NAME,
    BusinessPartner.LAST_NAME,
    BusinessPartner.TO_CUSTOMER.select(Customer.CUSTOMER_FULL_NAME)
  )
  .filter(
    or(
      BusinessPartner.BUSINESS_PARTNER_CATEGORY.equals('1'),
      and(
        BusinessPartner.FIRST_NAME.equals('Foo'),
        BusinessPartner.TO_CUSTOMER.filter(Customer.CUSTOMER_NAME.notEquals('bar'))
      )
    )
  )
  .execute();
Again, we want to retrieve a list of business partners. This time, however, we added a select and a filter clause to our query. This highlights two of the VDM’s advantages over building queries by hand: type-safety and discoverability.
As you can see in the filter clause, you can simply get an overview over which fields are present on the business partner by typing
BusinessPartner. in your editor. The autocompletion of modern editors, such as Visual Studio Code, will then provide a list of suggestions, which you can navigate to find the fields you need, without having to refer to the service’s metadata. Furthermore, we ensure that each query you build is type-safe. This means that if you try to e.g. select a field that does not exist on the respective entity, your editor will report a type mismatch, effectively preventing your from writing incorrect queries.
In this example, we restrict our selection to the business partner’s first and last name. Additionally, we add
Customer, a related entity, to our selection, from which we only use the full name.
The same fields can also be used for filtering. As you can see, you can build arbitrarily complex filter clauses using the
and() and
or() functions. Each field also provides a function for each filter operation that can be used on the respective field. Additionally, we again make sure that the values you provide in the filter clause match the type of the respective field. If, for example, you’d try to filter a string-typed field by a number (e.g.
BusinessPartner.FIRST_NAME.equals(5)), a type mismatch will be reported.
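The mechanism behind this can be pictured with a generic field type. The following is a simplified, self-contained illustration of the pattern, not the SDK's actual implementation:

```typescript
// Simplified sketch of type-safe filter fields: a field carries its value
// type, so equals() only accepts values of that type at compile time.
class Field<T> {
  constructor(public readonly name: string) {}

  equals(value: T): string {
    // Render an OData-style filter expression for illustration
    return this.name + ' eq ' + JSON.stringify(value);
  }
}

const FIRST_NAME = new Field<string>('FirstName');

console.log(FIRST_NAME.equals('Foo')); // FirstName eq "Foo"
// FIRST_NAME.equals(5); // compile-time error: number is not assignable to string
```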
Destinations and Authentication
In the examples so far, we have simply called
execute() in our queries. If no parameter is provided,
execute() will by default try to load a destination named “ErpQueryEndpoint” from your application’s environment variables (the one we configured in the
manifest.yml, remember?). You can of course provide more destinations. If you want to use a specific destination for your OData queries, you do so by passing the destination’s name to the execute call, like this:
execute('MyCustomDestination')
Additionally, we provide the option to pass a destination configuration directly to the VDM.
execute({ url: "", username: "MyUser", password: "MyPassword" })
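The lookup behavior can be pictured roughly as follows. This is an assumed, simplified sketch of the resolution logic (names and behavior are illustrative), not the SDK's actual code:

```typescript
// Assumed sketch of destination resolution: a string selects a named entry
// from the `destinations` environment variable; an object is used directly.
interface Destination {
  name?: string;
  url: string;
  username?: string;
  password?: string;
}

function resolveDestination(arg?: string | Destination): Destination {
  if (arg && typeof arg !== 'string') {
    return arg; // full configuration passed directly to execute()
  }
  const name = typeof arg === 'string' ? arg : 'ErpQueryEndpoint'; // default name
  const configured: Destination[] = JSON.parse(process.env.destinations || '[]');
  const match = configured.find(d => d.name === name);
  if (!match) {
    throw new Error('No destination named ' + name);
  }
  return match;
}
```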
Finally, if you want to make use of OAuth2 or other means instead of basic authentication, you can also set the
Authorization header directly.
BusinessPartner.requestBuilder()
  ...
  .withCustomHeaders({ Authorization: "Bearer <EncodedJWT>" })
  .execute()
If you provide an
Authorization header using this mechanism, the VDM will ignore the username and password otherwise provided by any destination configuration.
Hello,
Nice blog, this SDK is really welcomed. I try to use it to connect to my on premise backend, through destination/connectivity bound to cloud connector. Though this may not be its primary purpose, can you provide me some hints on how to define destinations with proxy?
DestinationConfiguration does not accept parameters such as proxy. Is there a workaround to set proxy configuration in the underlying axios request?
Hello Kim,
in the beta, we do not natively support the Cloud Connector (in contrast to the SAP S/4HANA Cloud SDK for Java, where this is handled by the SDK).
With the beta of the SDK for JavaScript, you would have to implement the necessary communication with the destination and connectivity service yourself and apply the resulting HTTP headers with the withCustomHeaders method mentioned above. Regarding proxy, I believe you can define a global proxy configuration for axios with axios.defaults.proxy, but I haven't tried to apply this with the Cloud Connector.
Best regards,
Henning
Hello Henning,
Nice suggestion, I did not look into default configuration options of axios. It works, I actually used the axios interceptor to be able to filter on backend URLs.
Something like:
Thank you for your help!
Hello all,
at the moment, is it possible to execute a request with an expand parameter? For example,
my request code is something like
I don't find a method like expand or similar
Thank you for your help!
Hi Donato,
you can handle expand in the select function, e.g.:
This would select the FirstName of BusinessPartner, the full related Customer entity, as well as the City Code of all related BusinessPartnerAddresses.
Hope that helps!
Hi Dennis,
Thank you so much! now it works fine
Thanks a lot for your blog, it's amazing!
Recently I was working with a customer who had a file server experiencing high CPU in the WMIprvse.exe process. We received multiple user dumps and noted that someone or something was running the same query again and again. We needed to figure out what was running the query in a tight loop, causing the high CPU.
Figure 1 - Task Manager on FileServer
Before we get into the exact troubleshooting steps, let me provide some background on WMI. Winmgmt is the WMI service within a SVCHOST process running under the LocalSystem account. In all cases, the WMI service automatically starts when the first management application or script requests connection to a WMI namespace. For more information, see Starting and Stopping the WMI Service. To avoid stopping all the services when a provider fails, each provider is loaded into a separate host process named "Wmiprvse.exe". This also allows each instance of Wmiprvse to run under a different account with varying security. For more details you can look at the MSDN documentation on WMI.
I dumped out all the services in the various svchost.exe processes. You can do this from a command prompt by running the tasklist /svc command. In my instance, I found that the WinMgmt service was running in svchost, PID 452 (PID number will vary). Someone had to be making RPC calls to this svchost.exe process to run the WMI queries. It could be some local process on the machine; it could even be a process on a remote machine.
At this point I requested user dumps of PID 452 from the customer. This would allow me to determine who was making the RPC calls to svchost.exe to run the WMI queries. While the customer was uploading the dumps, we decided to get a Network Monitor trace to see if the RPC calls were coming over the network.
Immediately, I could see a lot of RPC traffic to the svchost.exe process(PID=452).
Figure 2 - Network Monitor Output from the FileServer. Notice the Source and destination ports and IP addresses. IP addresses are hidden by the aliases
Looking at the RPC payload, I could see the text of the WMI query. You can see this in the Hex Details Pane. The query that was running in a loop was “Select * from Win32_Process”. Looks like I found the source of the WMI queries.
At this point, we got the source IP for the RPC packets. We logged into the machine, and brought up the Task Manager.
Figure 3 - Task Manager on Remote Machine(Machine1)
Immediately we saw that there was some script running inside a Wscript.exe process. At this point I was pretty sure that this script was the culprit. The customer was not sure what this was, and was not comfortable terminating the process. To prove my suspicion, I had him open a command prompt and run the following command: netstat -ano.
Figure 4 - Netstat output from Remote Machine
From the output in Fig. 4, I could see a TCP connection created by PID 3532 (wscript.exe). Looking at the local and foreign addresses from the above output, they matched up exactly to what we were seeing in the Network Monitor trace.
In the above case, we already had our suspicions on the wscript.exe process; however, sometimes it might not be that easy. In that case, we could have used the netstat output to look at all connections to the file server (157.59.123.121). If there were multiple connections, then we can also narrow it down by the port number. Based on that, we could have found the PID responsible.
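That narrowing step can be automated. Here is a small illustrative sketch (the sample lines are made up) that picks the owning PID out of netstat -ano style output for a given remote endpoint:

```typescript
// Illustrative sketch: find the PID that owns a TCP connection to a given
// remote endpoint in `netstat -ano`-style output. Sample data is made up.
function pidForConnection(netstatLines: string[], remote: string): number | null {
  for (const line of netstatLines) {
    const parts = line.trim().split(/\s+/);
    // TCP rows: protocol, local address, foreign address, state, PID
    if (parts.length === 5 && parts[0] === 'TCP' && parts[2] === remote) {
      return parseInt(parts[4], 10);
    }
  }
  return null;
}

const sample = [
  'TCP  157.59.100.50:1052  157.59.123.121:135  ESTABLISHED  3532',
  'TCP  157.59.100.50:445   157.59.200.7:1189   ESTABLISHED  4',
];
console.log(pidForConnection(sample, '157.59.123.121:135')); // 3532
```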
The customer called me later in the day, and told me that they had recently updated their scripts. One of their scripts had a bug which was running WMI scripts in a tight loop. Fixing the script caused the problem to go away.
Had the query been coming from a local process, I would have had to debug the svchost.exe process and figure out who was making the WMI calls. However, since we could see the traffic in Network Monitor, we didn't need to use the debugger. An interesting way to get to the root of the problem without using a debugger!
Feb 17, 2009 04:19 PM|JKC|LINK
I have a property of an object called Contract. It is called client (the GUID of Client in the contract table links to the client in the Client table).
I am trying to manipulate its (the client property of the contract entity) order in the dynamic data grid using the new columnorder attribute in the dynamic data futures. I am also trying to manipulate its filter order using the filter attribute.
Both attributes can be seen here, I am using code as is from these examples:
I am using EF and dynamic data futures.
I can place the attributes in code, intellisense picks them up, however *no change* happens in the application runtime. (No reordering, like I never placed the attributes, no errors).
Here is my code from the metadata:

namespace ContractsManagement
{
    [MetadataType(typeof(ContractsMetaData))]
    partial class Contracts
    {
        public Contracts()
        {
            GUID = Guid.NewGuid();
        }
    }

    partial class ContractMetaData
    {
        [Filter(Enabled=true, Order=1)]
    }
}
Am I missing something?
Dynamic Data
Feb 17, 2009 05:07 PM|ricka6|LINK
>>partial class Contracts
You probably need public - copy the partial declaration from the data model.
>>GUID = Guid.NewGuid();
I assume the GUID is a PK or other unique identifier. You ctor will run once for every row read. You only need new Guids on insert - and you should arguably generate them on the server, not the client.
Use ScaffoldTable(false) or ScaffoldColumn(false) to verify you have the correct (matching) signature on your partial class.
Feb 17, 2009 05:59 PM|marcind|LINK
JKC
I can place the attributes in code, intellisense picks them up, however *no change* happens in the application runtime. (No reordering, like I never placed the attributes, no errors).
Hi JKC,
What you are missing is the fact that in the Futures sample the page templates configure an instance of AdvancedFieldGenerator on the GridView and DetailsView. That component is what performs the ordering.
However keep in mind that ColumnOrder and FilterOrder are no longer in the latest bits. You should take a look at the latest Dynamic Data Preview 2. That build contains a new attribute called DisplayAttribute that combines the concepts covered in the 2 earlier attributes.
Feb 17, 2009 08:51 PM|JKC|LINK
Thanks, I took a look and that public didn't solve it; I'll examine the post below and report back.
Regarding the second point, I agree; however, I have newid() as the default being generated in SQL, but like post:, I continually get the 0000-based guid... I'm not using L2S so I can't define autogeneration... I'm guessing that I just need to set the value of GUID somewhere else and not in the ctor... and obviously I don't want to muck up the data model code.
Feb 17, 2009 09:43 PM|ricka6|LINK
>>public didn't solve it,
See Marcin's post.
EF uses StoreGeneratedPattern="Computed" in the SSDL - if it's not there, add it. All you need is ScaffoldColumn(false) on the GUID and the server will take care of it. 99.9% of the time you don't want to show GUIDs.
Feb 18, 2009 02:40 PM|JKC|LINK
I add that in and I receive an exception:
Microsoft JScript runtime error: Sys.WebForms.PageRequestManagerServerErrorException: An error occurred while updating the entries. See the InnerException for details.
I take it out and it works (adding in the 000 based guid).
In the ctor it works, but obviously this is not the best place for the guid generation...
Thoughts? I added the computed line via IntelliSense, no typos.
Feb 18, 2009 02:54 PM|JKC|LINK
Following that, combing through the forums, I haven't seen a working solution yet with DD and EF and GUID generation. I have seen it with L2S. To clarify: is EF not capable as of yet (in DDp1), is it me, or does DD Preview 2 solve this issue?
Rather than continue to ask questions that are most likely the result of my lack of understanding (and I apologize for the time of yours I've already taken), is there a working example of DD and EF with GUID generation somewhere on the net that I could examine (that I have yet to find)?
I'll examine the attribute issue separately; I realize that DDp2 is most likely the requirement and that I likely have DDp1 installed.
Thank you for your efforts.
Feb 18, 2009 03:42 PM|JKC|LINK
Also, I am using the 7/16 release as mentioned on your blog here, would that not have the additional attributes?:
Feb 18, 2009 04:28 PM|ricka6|LINK
>> I am using the 7/16 release
Don't use the 7/16 release, that's ancient history
>> I havent seen a working solution yet with DD and EF and GUID generation.
As I said previously, all you need to do is annotate the partial class GUID property with [ScaffoldColumn(false)]
[DisplayName("User Tbl ")]
[MetadataType(typeof(UserTblMD))]
public partial class UserTbl {
public class UserTblMD {
[ScaffoldColumn(false)]
public object PKguid { get; set; }
}
}
Feb 18, 2009 05:22 PM|ricka6|LINK
Make sure you get rid of GUID = Guid.NewGuid();
Feb 19, 2009 12:40 PM|JKC|LINK
So, I loaded the sample from the DDp2. I ripped out everything, recreated my project using the appropriate dll and templates, and added in my model. The display attribute rocks, thanks guys. Makes much more sense than the individual attributes that were there. I can't say, however, that the below works. I've added it into the SSDL, commented out the guid generation that I had in my ctor, set scaffolding to false for the guid column, and have newid() as the default value for the column in SQL; however, I get a javascript error when I hit submit. I take the below out, uncomment the guid generation in my ctor, and everything is kosher...
StoreGeneratedPattern="Computed"
14 replies
Last post Feb 19, 2009 12:40 PM by JKC
|
http://forums.asp.net/p/1386121/2947381.aspx?Dynamic+Data+Attribute+Trouble+
|
CC-MAIN-2015-18
|
refinedweb
| 1,015
| 61.56
|
Lazy learning: React useEffect hook
It’s a simple diagram but really shows what it is in a nutshell. What makes the useEffect hook great is that it can be used in all phases of the lifecycle. So let’s get right into it.
on every update
useEffect can be used to run code on each update or re-render of the component it’s used in.
useEffect(() => {
console.log("I run every time.");
});
It will always call the function that’s passed into it whenever the component is updated or re-rendered. Make sure not to set a state inside, since that will cause an infinite loop.
on mount
useEffect can also be used to run code only once in the whole life-cycle of the component, which allows us to do things such as fetching data from an API and setting it as state. We don’t have to worry about infinite looping because it will only ever run once.
useEffect(() => {
console.log('I run only once!');
}, []);
Notice we pass a second argument, which is an empty array. We do this to tell useEffect that we don’t depend on anything for it to call the function passed. This will become clearer later on.
on when dependencies change
We can then pass “dependencies” into the second array to tell useEffect that we only want to run this code when one or more dependencies change. Dependencies are typically props or state.
const [ color, setColor ] = useState('red');
useEffect(() => {
console.log('I run only when color changes!');
}, [color]);
on unmount
We usually do this whenever we want to “clean up” at the end. A very easy example is removing an interval set with setInterval, or sending a post request to a server.
useEffect(() => {
console.log('I run only once!');
return () => {
console.log('The component was unmounted!');
}
}, []);
Notice that we now return another function inside our first argument. This is how we tell useEffect what to call when the component is unmounted.
the syntax
The first argument is the callback function and the second is an array of “dependencies”
// Parameters.
useEffect(<function>, <array of dependencies || optional>);
Alright, now you know how to masterfully use useEffect! In most cases though, you’ll only be using it to fetch data on mount.
|
https://nasheomirro.medium.com/lazy-learning-react-useeffect-hook-e66dfc6386b0?source=post_internal_links---------4----------------------------
|
CC-MAIN-2021-21
|
refinedweb
| 385
| 65.73
|
PParse demonstrates progressive parsing.
In this example, the application doesn't have to depend upon throwing
an exception to terminate the parsing operation. Calling parseFirst() will
cause the DTD to be parsed (both internal and external subsets) and any
pre-content, i.e. everything up to but not including the root element.
Subsequent calls to parseNext() will cause one more piece of markup to
be parsed, and propagated from the core scanning code to the parser.
PParse parses an XML file and prints out the number of
elements in the file.
Usage:
PParse [options] <XML file>
This program demonstrates the progressive parse capabilities of
the parser system. It allows you to do a scanFirst() call followed by
a loop which calls scanNext(). You can drop out when you've found whatever
it is you want. In our little test, our event handler looks for
16 new elements, then sets a flag to indicate it's found what it wants.
At that point, our progressive parse loop exits.
Options:
-v=xxx - Validation scheme [always | never | auto*].
-n - Enable namespace processing [default is off].
-s - Enable schema processing [default is off].
-f - Enable full schema constraint checking [default is off].
-? - Show this help.
* = Default if not provided explicitly.
-v=always will force validation
-v=never will not use any validation
-v=auto will validate if a DOCTYPE declaration or a schema declaration is present in the XML document
Here is a sample output from PParse
cd xerces-c-3.1.4/samples/data
PParse -v=always personal.xml
personal.xml: 60 ms (37 elems, 12 attrs, 134 spaces, 134 chars)
Running PParse with the validating parser gives a different result because
ignorable white-space is counted separately from regular characters.
PParse -v=never personal.xml
personal.xml: 10 ms (37 elems, 12 attrs, 0 spaces, 268 chars)
Note that the sum of spaces and characters in both versions is the same.
|
http://xerces.apache.org/xerces-c/pparse-3.html
|
CC-MAIN-2017-17
|
refinedweb
| 319
| 57.77
|
@react-google-maps/api
This library requires React v16.6 or later. To use the latest features (including hooks) requires React v16.8+. If you need support for earlier versions of React, you should check out react-google-maps
This is a complete re-write of the (sadly unmaintained)
react-google-maps library. We thank tomchentw for his great work that made this possible.
@react-google-maps/api provides very simple bindings to the google maps api and lets you use it in your app as React components.
Here are the main additions to react-google-maps that were the motivation behind this re-write
Install @react-google-maps/api
with NPM
npm i -S @react-google-maps/api
or Yarn
yarn add @react-google-maps/api
import React from 'react'
import { GoogleMap, LoadScript } from '@react-google-maps/api';

const containerStyle = {
  width: '400px',
  height: '400px'
};

const center = {
  lat: -3.745,
  lng: -38.523
};

function MyComponent() {
  const [map, setMap] = React.useState(null)

  const onLoad = React.useCallback(function callback(map) {
    setMap(map)
  }, [])

  const onUnmount = React.useCallback(function callback(map) {
    setMap(null)
  }, [])

  return (
    <LoadScript googleMapsApiKey="YOUR_API_KEY">
      <GoogleMap
        mapContainerStyle={containerStyle}
        center={center}
        zoom={10}
        onLoad={onLoad}
        onUnmount={onUnmount}
      >
        { /* Child components, such as markers, info windows, etc. */ }
        <></>
      </GoogleMap>
    </LoadScript>
  )
}
Migration from react-google-maps@9.4.5
If you need access to the map object, instead of the ref prop you need to use the onLoad callback on the <GoogleMap /> component.
Before:
// before - don't do that!
<GoogleMap ref={map => this.map = map} />
After:
<GoogleMap onLoad={map => this.map = map} />
If you want to use the window.google object, you need to extract GoogleMap into a separate module, so it is lazily executed once the google-maps-api script is loaded and executed by <LoadScript />. If you try to use window.google before it is loaded, it will be undefined and you'll get a TypeError.
Main features
- Simplified API
- Uses the new Context API
- Supports async React (StrictMode compliant)
- Removes lodash dependency => smaller bundle size: 12.4kb gzip, tree-shakeable
- Forbids loading of Roboto fonts if you set the preventGoogleFonts property on the <LoadScript preventGoogleFonts /> component
Examples
Examples can be found in two places:
- Official docs (powered by react-styleguidist).
- A Gatsby app including some examples. See the examples folder
- Gatsby.js Demo
Advice
Using the examples requires you to generate a google maps api key. For instructions on how to do that please see the following guide
Community Help Resource
You can join the community at Spectrum.chat to ask questions and help others with your experience or join our Slack channel
Contribute
Maintainers and contributors are very welcome! See this issue to get started.
How to test changes locally
When working on a feature/fix, you're probably gonna want to test your changes. This workflow is a work in progress. Please feel free to improve it!
- In the file packages/react-google-maps-api/package.json, change main to "src/index.ts"
- In the same file, delete the module field
- You can now use the package react-google-maps-api-gatsby-example to test your changes. Just make sure you change the import from @react-google-maps/api to ../../../react-google-maps-api
Since 1.2.0 you can use onLoad and onMount props for each @react-google-maps/api component; ref does not contain API methods anymore.
Since version 1.2.2 we added the useGoogleMap hook, which works only with React@16.8.1 and later versions.
Websites made with @react-google-maps-api
DriveFromTo.com Transfer Booking service PWA.
Shipwrecks.cc Shipwrecks from Wikipedia visualized on the map (Github)
nycmesh.net Network topography visualized on the map (Github)
add your website by making PR!
|
https://preview.npmjs.com/package/@byrekt/react-google-maps-api
|
CC-MAIN-2020-40
|
refinedweb
| 587
| 50.53
|
What is RTTI?
- RTTI stands for Run-time Type Identification.
- RTTI is useful in applications in which the type of objects is known only at run-time.
- Use of RTTI should be minimized in programs and wherever possible static type system should be used.
- RTTI allows programs that manipulate objects or references to base classes to retrieve the actual derived types to which they point at run-time.
- Two operators are provided in C++ for RTTI.
- dynamic_cast operator. The dynamic_cast operator can be used to convert a pointer that refers to an object of class type to a pointer to a class in the same hierarchy. On failure, the dynamic_cast operator returns 0.
- typeid operator. The typeid operator allows the program to check what type an expression is. When a program manipulates an object through a pointer or a reference to a base class, the program needs to find out the actual type of the object manipulated.
- The operand for both dynamic_cast and typeid should be a class with one or more virtual functions.
Demonstrate the RTTI mechanisms in C++
#include <iostream>
#include <typeinfo> // Header for typeid operator

using namespace std;

// Base class
class MyBase
{
public:
    virtual void Print() { cout << "Base class" << endl; }
};

// Derived class
class MyDerived : public MyBase
{
public:
    void Print() { cout << "Derived class" << endl; }
};

int main()
{
    // Using typeid on built-in types for RTTI
    cout << typeid(100).name() << endl;
    cout << typeid(100.1).name() << endl;

    // Using typeid on custom types for RTTI
    MyBase* b1 = new MyBase();
    MyBase* d1 = new MyDerived();
    MyBase* ptr1;
    ptr1 = d1;
    cout << typeid(*b1).name() << endl;
    cout << typeid(*d1).name() << endl;
    cout << typeid(*ptr1).name() << endl;
    if ( typeid(*ptr1) == typeid(MyDerived) )
    {
        cout << "Ptr has MyDerived object" << endl;
    }

    // Using dynamic_cast for RTTI
    MyDerived* ptr2 = dynamic_cast<MyDerived*>( d1 );
    if ( ptr2 )
    {
        cout << "Ptr has MyDerived object" << endl;
    }
}

OUTPUT:
i d 6MyBase 9MyDerived 9MyDerived Ptr has MyDerived object Ptr has MyDerived object
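A minimal sketch of the failure behaviour described above: dynamic_cast through a base pointer yields a null pointer when the object is not actually of the target type. The class and function names here are illustrative, not part of the article's example.

```cpp
#include <cassert>

class Base    { public: virtual ~Base() {} };  // a virtual member makes the type polymorphic, as RTTI requires
class Derived : public Base {};
class Other   : public Base {};

// dynamic_cast through a Base* succeeds only if the object really is a Derived
bool is_derived(Base* p) {
    return dynamic_cast<Derived*>(p) != 0;
}
```

Checking the returned pointer, as the demo program does with ptr2, is the idiomatic way to test an object's dynamic type without throwing.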
|
http://www.sourcetricks.com/2008/06/c-rtti.html
|
CC-MAIN-2017-04
|
refinedweb
| 349
| 51.78
|
Creating Palette Images
February 05, 1999 | Fredrik Lundh | Previously published as “fyi #53: creating palette images”
Introduction
One of the weak spots in the current release of PIL is that it’s quite difficult to create an 8-bit palette image from scratch. The obvious way to create a palette, by using the ImagePalette class, simply doesn’t behave like one would expect.
Creating the Image
To create a new palette image, use the “P” mode with the new function:
Image.new(“P”, size, fill) where size is the size in pixels given as (width, height), and fill is the background pixel value.
If fill is omitted, it defaults to 0. To prevent PIL from filling the image at all (e.g. if you’re going to draw over the entire image anyway), use None.
Changing the Palette
PIL assigns a greyscale palette to the new image. In other words, for each colour index i, the corresponding palette entry is (i, i, i).
But how do we modify the contents of this palette? There’s not much on this in the documentation, but maybe we can use dir to see if there’s some attribute we could modify:
>>> import Image >>> i = Image.new("P", (512, 512)) >>> dir(i) ['category', 'im', 'info', 'mode', 'palette', 'size']
Cool. There’s a palette attribute in there. If we can figure out what it is, maybe we can modify the palette via that attribute.
>>> print i.palette None
Oops. That wasn’t really what we expected, was it?
In fact, the palette attribute is used to store the palette in some situations. But that’s not always the case, since PIL also maintains an internal palette structure (the ImagingPalette structure) which is attached to the internal image representation.
Unfortunately, the current version of PIL doesn’t do what it takes to keep the externally visible palette attribute in sync with the internal one (this will most likely change in a future version). For example, when we created a new image, PIL properly set the internal palette structure to a greyscale palette, but it didn’t set the public palette attribute.
Maybe there’s some other way to change the palette? Let’s look at the methods provided by the Image class:
>>> dir(i.__class__) ['_Image__transformer', '__doc__', '__init__', '__module__', '__setattr__', '_dump', '_makeself', 'convert', 'copy', 'crop', 'draft', 'filter', 'format', 'format_description', 'fromstring', 'getbands', 'getbbox', 'getdata', 'getextrema', 'getpixel', 'getprojection', 'histogram', 'load', 'offset', 'paste', 'point', 'putalpha', 'putdata', 'putpalette', 'putpixel', 'quantize', 'resize', 'rotate', 'save', 'seek', 'show', 'split', 'tell', 'thumbnail', 'tobitmap', 'tostring', 'transform', 'transpose']
putpalette looks pretty promising. The only problem is that it appears to be undocumented (at least in the current release of the documentation).
Or rather, it was undocumented until now. Here’s how to use it:
putpalette(palette) where the image should have mode “P” or “L”, and palette is either a sequence of integers, or a string containing a binary representation of the palette.
In both cases, the palette contents should be ordered (r, g, b, r, g, b, …). The palette can contain up to 768 entries (3*256). If a shorter palette is given, it is padded with zeros.
And here’s a simple example. This script draws a few coloured objects on a black background.
import Image
import ImageDraw

im = Image.new("P", (400, 400), 0)

im.putpalette([
    0, 0, 0,      # black background
    255, 0, 0,    # index 1 is red
    255, 255, 0,  # index 2 is yellow
    255, 153, 0,  # index 3 is orange
    ])

d = ImageDraw.ImageDraw(im)
d.setfill(1)

d.setink(1)
d.polygon((0, 0, 0, 400, 400, 400))

d.setink(3)
d.rectangle((100, 100, 300, 300))

d.setink(2)
d.ellipse((120, 120, 280, 280))

im.save("out.gif")
This approach works well if you’re using only a few colours. You could for example write a Python module which contains your favourite palette definition (e.g. a standard 216-colour “web” palette), with symbolic names for the most common colour values.
Hiding Some of the Complexity
On the other hand, it’s not that hard to write a class that lets you create palettes on the fly, with the colours you happen to use in your image.
Here’s a very simple version; this keeps track of colours already used, and allocates new colour indices only when necessary:
class Palette:

    def __init__(self):
        self.palette = []

    def __call__(self, r, g, b):
        # map rgb tuple to colour index
        rgb = r, g, b
        try:
            return self.palette.index(rgb)
        except:
            i = len(self.palette)
            if i >= 256:
                raise RuntimeError, "all palette entries are used"
            self.palette.append(rgb)
            return i

    def getpalette(self):
        # return flattened palette
        palette = []
        for r, g, b in self.palette:
            palette = palette + [r, g, b]
        return palette
And here’s how to use this class:
rgb = Palette()

im = Image.new("P", (400, 400), rgb(0, 0, 0))

d = ImageDraw.ImageDraw(im)
d.setfill(1)

d.setink(rgb(255, 0, 0))
d.polygon((0, 0, 0, 400, 400, 400))

d.setink(rgb(255, 153, 0))
d.rectangle((100, 100, 300, 300))

d.setink(rgb(255, 255, 0))
d.ellipse((120, 120, 280, 280))

im.putpalette(rgb.getpalette())

im.save("out.gif")
There are many ways to improve this class. You can change it so it supports the “#rrggbb” syntax as well, and maybe even add a colour database (perhaps a subset of the one used by the X window system).
Another change would be to make the colour search a bit less strict; if two colours are very similar, they might as well be mapped to the same colour index.
In any case, extending this class is left as an exercise for the interested reader.
|
http://effbot.org/zone/creating-palette-images.htm
|
CC-MAIN-2017-04
|
refinedweb
| 948
| 65.01
|
Edit and Save Work Items by Using the Client Object Model for Team Foundation
You can change the Fields, Links, and Attachments of a WorkItem and then try to save those changes by using either the WorkItem.Save or WorkItemStore.BatchSave method.
When you try to save your changes, they are evaluated against the rules for the WorkItemType. If the values that you specify follow those rules, the WorkItem is saved, its revision is incremented, and its history is updated with the most recent changes. Otherwise, the WorkItem is not saved, its revision is not incremented, and its history is not updated.
Note
You can save more than one WorkItem or WorkItemLink in a single round trip by using the WorkItemStore.BatchSave method.
Example
The example demonstrates how to edit and save work items and how to use the WorkItem.IsValid and WorkItem.IsDirty properties.
To use this example
Create a C# ( or VB ) console application.
Add references to the following assemblies:
Replace the contents of Program.cs ( or Module1.vb ) with the following example:
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

namespace WorkItemTrackingSample
{
    class Program
    {
        static void Main(string[] args)
        {
            Uri collectionUri = (args.Length < 1) ? new Uri("") : new Uri(args[0]);

            // Connect to the server and the store.
            TfsTeamProjectCollection teamProjectCollection = new TfsTeamProjectCollection(collectionUri);
            WorkItemStore workItemStore = teamProjectCollection.GetService<WorkItemStore>();

            // Get a specific work item from the store. (In this case,
            // get the work item with ID=1.)
            WorkItem workItem = workItemStore.GetWorkItem(1);

            // Set the value of a field to one that is not valid, and save the old
            // value so that you can restore it later.
            string oldAssignedTo = (string)workItem.Fields["Assigned to"].Value;
            workItem.Fields["Assigned to"].Value = "Not a valid user";

            // Display the results of this change.
            if (workItem.IsDirty)
                Console.WriteLine("The work item has changed but has not been saved.");

            if (workItem.IsValid() == false)
                Console.WriteLine("The work item is not valid.");

            if (workItem.Fields["Assigned to"].IsValid == false)
                Console.WriteLine("The value of the Assigned to field is not valid.");

            // Try to save the work item while it is not valid, and catch the exception.
            try
            {
                workItem.Save();
            }
            catch (ValidationException exception)
            {
                Console.WriteLine("The work item threw a validation exception.");
                Console.WriteLine(exception.Message);
            }

            // Set the state to a valid value that is not the old value.
            workItem.Fields["Assigned to"].Value = "ValidUser";

            // If the work item is valid, save the changes.
            if (workItem.IsValid())
            {
                workItem.Save();
                Console.WriteLine("The work item was saved this time.");
            }

            // Restore the original value of the work item's Assigned to field, and save that change.
            workItem.Fields["Assigned to"].Value = oldAssignedTo;
            workItem.Save();
        }
    }
}
|
https://docs.microsoft.com/ja-jp/previous-versions/visualstudio/visual-studio-2010/bb130323(v%3Dvs.100)
|
CC-MAIN-2019-26
|
refinedweb
| 439
| 60.41
|
Full Disclosure
mailing list archives
Heya --
Quoth Honza Vlach (Mon, Mar 22, 2004 at 10:40:12AM +0100):
2004-03-22 09:01:37.781326500 Failed keyboard-interactive for illegal
user xjunr01 from ::ffff:212.65.252.97 port 61991 ssh2
2004-03-22 09:01:37.781379500 Disconnecting: Too many authentication
failures for xjunr01
2004-03-22 09:02:05.879614500 Bad protocol version identification
'\377\373\037\ 377\373
\377\373\030\377\373'\377\375\001\377\373\003\377\375\003sdf' from
::fff f:212.65.252.97
2004-03-22 09:02:36.287775500 Bad protocol version identification
'\377\373\037\ 377\373
\377\373\030\377\373'\377\375\001\377\373\003\377\375\003' from
::ffff:2 12.65.252.97
Is it some attack attempt? I've checked both full-disclosure archive and
google, unfortunately haven't found anything usable.
My guess is that it is either a program gone horribly wrong or
an attack attempt. Maybe an attack attempt gone horribly wrong. [grin]
Instead of "id", though, you have the above strings after the failed login.
That seems somewhat related to dicom's vterm link.cpp. Original URL is
down, here's the Google-cached version:
Your odd sequence is labeled as the "magic init string" for telnet.
BOOL TelnetLink :: Open( char *ip )
{
if ( !SocketTermIO :: Open (ip, "23"))
return ( FALSE );
// send the magic init string for telnet sessions.. note.. some
// garbage will come back
//SocketTermIO :: SendBinary (
//"\377\375\001\377\375\003\377\374\030", 9 );
//SocketTermIO :: SendBinary (
//"\377\375\003\377\373\030\377\366", 8);
SocketTermIO :: SendBinary ((unsigned char *)
"\377\375\001\377\375\003\377\366", 8);
// SocketTermIO :: SendBinary (
// "\377\373\030\377\372\030\000vt100\377\360", 9 + 5);
//SocketTermIO :: SendBinary ( "\377\375\001", 3);
return ( TRUE );
}
So perhaps their program is just screwing up and trying to
prepend a variant of this magic init string, but to 22 rather than 23.
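For readers unfamiliar with the octal escapes, they are standard telnet option-negotiation bytes from RFC 854, which is why the "magic init string" looks the way it does. A small decoding sketch (the helper name is mine; the code values are the standard RFC ones):

```cpp
#include <cassert>
#include <string>

// Map a telnet negotiation byte to its RFC 854 name
// (255 = IAC "interpret as command", 253 = DO, 251 = WILL, ...)
std::string telnet_name(unsigned char b) {
    switch (b) {
    case 255: return "IAC";
    case 254: return "DONT";
    case 253: return "DO";
    case 252: return "WONT";
    case 251: return "WILL";
    case 246: return "AYT";   // Are You There
    case 1:   return "ECHO";  // option codes follow DO/WILL/DONT/WONT
    case 3:   return "SGA";   // suppress go-ahead
    case 24:  return "TTYPE"; // terminal type
    default:  return "?";
    }
}

// "\377\375\001\377\375\003\377\366" therefore decodes as:
//   IAC DO ECHO, IAC DO SGA, IAC AYT
```

Read that way, the garbage in the sshd log is exactly what a telnet client (or a telnet-speaking tool pointed at the wrong port) would emit at the start of a session.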
You'd probably have better luck posting things like this to
incidents () incidents org than to Full Disclosure, though.
Cheers,
Raven
_______________________________________________
Full-Disclosure - We believe in it.
Charter:
|
http://seclists.org/fulldisclosure/2004/Mar/1243
|
CC-MAIN-2014-15
|
refinedweb
| 359
| 67.15
|
I have a program that is supposed to convert temperature in celsius to temperature in fahrenheit. I get a syntax error on the line where the user is supposed to input the temperature in celsius.
I can't seem to figure out what is wrong.
I keep getting: error C2059: syntax error : ';'
Any help would be appreciated. Thanks in advance.
Code:
/* program to convert temperature in celcius
to temerature in fahrenheit */
#include <iostream>
int c; //temperature in celcius
int f = (9/5 * (c)) + 32; //temperature in fahrenheit
int main()
{
std::cout << "This program will convert your temperature in celcius to fahrenheit." "\n" "\n";
std::cout << "What is your current temperature in celcius? ";
std::cin >> c >>; //user inputs temerature in celcius
std::cout << "Your temperature in fahrenheit is " << f << "." " Now go be tell your friends.";
//program converts output to fahrenheit
return (0);
}
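For what it's worth, a corrected sketch of the conversion logic. The stray >> before the semicolon in "std::cin >> c >>;" is what triggers C2059; two further bugs are that f is computed before c is ever read, and that integer 9/5 truncates to 1. The helper function name below is mine:

```cpp
#include <cassert>

// Celsius to Fahrenheit; 9.0/5 avoids integer truncation (9/5 == 1)
double to_fahrenheit(double c) {
    return 9.0 / 5 * c + 32;
}

// Corrected input/output for main():
//   std::cin >> c;                    // not: std::cin >> c >>;
//   std::cout << to_fahrenheit(c);    // convert AFTER reading c
```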
|
http://cboard.cprogramming.com/cplusplus-programming/121341-cant-solve-syntax-error-printable-thread.html
|
CC-MAIN-2014-10
|
refinedweb
| 141
| 57.67
|
Software Framework.
The diagram on the left may give you a good understanding of what Software Framework is and what role it performs. Simply saying, it is a shim between the user application and the Operating System. There are at least two types of Software Frameworks:
- Application Programming Interface (API) - if we take a look at the Windows API, we may see that it is a framework as well. However, it may be bypassed or, at least, a programmer may choose to reduce interaction with it by, for example, using functions from ntdll.dll instead of those provided by kernel32.dll, or even "talking" to the Windows kernel directly (highly discouraged, but sometimes unavoidable) through interrupts.
- .Net-like framework - total isolation of user code from the operating system. Such frameworks are mostly virtual machines, totally isolating the user application from the operating system and hardware. However, such a framework has to provide the application with all the services available in the Operating System. This is the type of framework we are going to build in this article.
Virtual Machine
The basics of building a simple virtual machine are covered in this article, so I will only give a brief explanation here. Our VM in this example will consist of the following components:
- Virtual CPU
A structure that represents a CPU - basically, it has 6 registers and a pointer to the stack:
typedef struct
{
unsigned int regs[6];
unsigned int* stack;
}CPU;
The 6 registers are general-purpose A, B, C and D (where A is also used to store the system call return value and C is used as a counter for the LOOP instruction), plus the STACK POINTER (SP) and the INSTRUCTION POINTER (IP).
- Instruction Interpreter
A function or a set of functions responsible for interpreting the pseudo assembly (or call it an intermediate assembly language) designed for this virtual machine (in this case, 14 instructions).
- System Call Handler
This component provides the means for the user application to interact with the Operating System (in this case, 2 system calls: sys_write and sys_exit).
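To make the interpreter component concrete, here is a minimal fetch-decode-execute sketch in C-style C++. The three toy opcodes and their (opcode, operand, operand) encoding are invented for illustration; the article's actual instruction set has 14 instructions and lives in asm.asm:

```cpp
#include <cassert>

typedef struct {
    unsigned int regs[6];   // A, B, C, D, SP, IP (IP kept in regs[5] in this sketch)
    unsigned int* stack;
} CPU;

// Toy opcodes, invented for this sketch only
enum { OP_LOADI = 0, OP_ADD = 1, OP_HALT = 2 };

// Minimal fetch-decode-execute loop over (opcode, operand, operand) triples
void run(CPU* cpu, const unsigned int* code) {
    for (;;) {
        unsigned int ip = cpu->regs[5];
        switch (code[ip]) {
        case OP_LOADI:                          // loadi reg, imm
            cpu->regs[code[ip + 1]] = code[ip + 2];
            cpu->regs[5] = ip + 3;
            break;
        case OP_ADD:                            // add reg, reg
            cpu->regs[code[ip + 1]] += cpu->regs[code[ip + 2]];
            cpu->regs[5] = ip + 3;
            break;
        case OP_HALT:                           // stop the VM
            return;
        }
    }
}
```

A real interpreter would add a dispatch table, bounds checking, and a system call trap, but the core loop stays exactly this shape.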
Implementation
In the case of the .Net framework (at least as far as I know), the loader identifies a file as a .Net executable, reads in the meta header, and initializes mscoree.dll appropriately. We will not go through all those complications and will use a regular PE file:
- PE Header - regular PE Header, no modification needed;
- Code Section - simply invokes the core function of the framework:
push pseudo_code_base_address
call [core]
- Import Section - regular import section that only imports one function from the framework.dll - framework.core(unsigned int);
- Data Section - this section contains the actual compiled pseudo assembly code and whatever headers you may come up with, that may instruct the core() function to correctly initialize the application.
Example Executable Source Code
The following is the source code of the example executable. It may be compiled with FASM (Flat Assembler).
include 'win32a.asm' ;we need the 'import' macro
include 'asm.asm' ;pseudo assembly commands and constants
format PE console
entry start
section '.text' readable executable
start:
push _base
call [core_func]
section '.idata' data import writeable
library framework, 'framework.dll'
import framework,\
core_func, 'Core'
section '.data' readable writeable
_base:
loadi B, 0x31
_add A, B
loadr B, A
loadi A, _data.string
loadi C, _data.string_len
_call _func
loadi A, 1
loadi B, _data.string
loadi C, _data.str_len
_int sys_write
loadi A, 1
loadi B, _data.msg
loadi C, _data.msg_len
_int sys_write
_int sys_exit
_func:
; A = string address
; B = key
; C = counter
.decode:
loadr D, A
xorr D, B
storr A, D
loadi D, 4
_add A, D
_loop .decode
_ret
_data:
.string db 'Hello, developer!', 10, 13
.str_len = $-.string
db 0
.string_len = ($-.string)/4
.msg db 'The program will now exit.', 10, 13
.msg_len = $-.msg
;Encrypt one string
load k dword from _base + 0x31
repeat 5
load a dword from _data.string + (% - 1) * 4
a = a xor k
store dword a at _data.string + (% - 1) * 4
end repeat
The code above produces a tiny executable which invokes the framework's core() function. The pseudo assembly code simply prints two messages (the first one is decoded prior to being printed). Full sources are attached to this article (see the very first line).
The good thing is that you do not have to start the interpreter and load this executable (or specify it as a command line parameter) - you may simply run it, and the Windows loader will bind it to framework.dll automatically. The bad thing is that you would most probably have to write your own compiler: writing assembly is fun, and dealing with pseudo assembly is fun as well, BUT only when done for fun. It is not as pleasant when dealing with production code.
Possible uses
Unless you are trying to create a framework to compete with existing software frameworks, you may use such an approach to increase the protection of your applications: for example, by virtualizing cryptography algorithms or any other part of your program which is not critical in terms of execution speed but represents sensitive intellectual property.
Hope you find this article helpful.
See you in the next one!
http://syprog.blogspot.com/2012/05/simple-runtime-framework-by-example.html
Hi Anish,
Please refer to the blog posts. I think it will help.
Hi Ravindra,
Thanks for replying to my post. I have already implemented the IValidate interface for showing a warning message, and it works fine. Now I am looking at highlighting the field in the editor when it fails validation. The field gets auto-highlighted in red for severity Error, but for Warning this is not happening. That is what I am trying to achieve.
Ah ok.
I have not implemented this, but I found something that may work for you. Instead of highlighting the field, you could display a custom message that gives the field name to the content author.
[CustomValidation]
public virtual string Title { get; set; }

public class CustomValidationAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        if (value.ToString().Contains("bad"))
        {
            return new ValidationResult("This is contains bad word");
        }
        return ValidationResult.Success;
    }
}
Hi Ravindra,
As mentioned earlier, I am able to validate and show a warning message with the field name in an alert box for the editors. As an additional feature, I am looking to highlight the corresponding field with a color as well, as happens for, e.g., a required-field validation error.
Hi Anish, I'm afraid that the client side validation in CMS UI is limited to errors in this regard and I would advise against trying to hack around it. There are a number of things involved in validation so I suspect it would be complicated to get it right.
I wanted to have the below features along with field validation. I couldn’t find any article on this. Please help us on this.
1. Highlighting a CMS field with yellow colour if it fails warning validation (ValidationErrorSeverity.Warning). Currently the field is highlighted with red colour only for Error severity.
2. Is there any option to have the notification box open by default when there is a validation message? Now the user has to click the icon to see the messages.
https://world.optimizely.com/forum/developer-forum/CMS/Thread-Container/2019/11/highlighting-a-cms-field-with-yellow-color-for-validationerrorseverity-warning--currently-only-for-error-it-highlights-the-field-with-red-color/
Saving processes and threads in a WSGI server with Moya
I have a webserver with 3 WSGI applications running on different domains (1, 2, 3). All deployed with a combination of Gunicorn and NGINX. A combination that works really well, but there are two annoyances that are only going to get worse the more sites I deploy:
A) The configuration for each server resides in a different location on the filesystem, so I have to recall & type a long path to edit settings.
B) More significantly, each server adds extra resource requirements. I follow the advice of running each WSGI application with (2 * number_of_cores + 1) processes, each with 8 threads. The threads may be overkill, but that ensures that the server can use all available capacity to handle dynamic requests. On my 4 core server, that's 9 processes, 72 threads per site. Or 27 processes, and 216 threads for the 3 sites. Clearly that's not scalable if I want to host more web applications on one server.
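The arithmetic above is simple enough to sketch; the (2 × cores + 1) worker rule and 8 threads per worker are the figures used in this post, not universal constants:

```python
def gunicorn_footprint(cores, sites, threads_per_worker=8):
    """Processes and threads needed when every site gets its own Gunicorn instance."""
    workers_per_site = 2 * cores + 1
    total_workers = sites * workers_per_site
    return total_workers, total_workers * threads_per_worker

# On a 4-core server: 9 processes / 72 threads for one site,
# 27 processes / 216 threads for three sites.
print(gunicorn_footprint(4, 1))  # -> (9, 72)
print(gunicorn_footprint(4, 3))  # -> (27, 216)
```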
A new feature recently added to Moya fixes both those problems. Rather than deploy a WSGI application for each site, Moya can now optionally create a single WSGI application that serves many sites. With this new system, configuration is read from /etc/moya/, which contains a directory structure like this:
|-- logging.ini
|-- moya.conf
|-- sites-available
| |-- moyapi.ini
| |-- moyaproject.ini
| `-- notes.ini
`-- sites-enabled
|-- moyapi.ini
|-- moyaproject.ini
`-- notes.ini
At the top level is “moya.conf” which contains a few server-wide settings, and “logging.ini” which contains logging settings. The directories “sites-available” and “sites-enabled” work like Apache and NGINX servers; settings for each site are read from “sites-enabled”, which contains symlinks to files in “sites-available”.
Gunicorn (or any other WSGI server) can run these sites with a single instance by specifying the WSGI module as “moya.service:application”. This application object loads the sites from “sites-enabled” and is responsible for dispatching requests based on the domains specified in the INI files.
Because all sites now go through a single Gunicorn instance, requests are shared amongst one optimal pool of processes / threads. This keeps the memory footprint low and negates the need to allocate resources based on traffic.
This new multi-server system is somewhat experimental, and hasn't been documented. But since I believe in eating my own dog-food, it has been live now for a whole hour–with no problems.
A simple method for rendering templates with Python
I never intended to write a template system for Moya. Originally, I was going to offer a plugin system to use any template format you wish, with Jinja as the default. Jinja was certainly up to the task; it is blindingly fast, with a comfortable Django-like syntax. But it was never going to work exactly how I wanted it to, and since I don't have to be pragmatic on my hobby projects, I decided to re-invent the wheel. Because otherwise, how do we get better wheels?
The challenge of writing a template language, I discovered, was keeping the code manageable. If you want to make it both flexible and fast, it can quickly descend into a mass of special cases and compromises. After a few aborted attempts, I worked out a system that was both flexible and reasonably fast. Not as fast as template systems that compile directly into Python, but not half bad. Moya's template system is about 10-25% faster than Django templates with a similar feature set.
There are two main steps in rendering a template. First the template needs to be tokenized, i.e. split up into a data structure of text / tags. This part is less interesting, I think, because it can be done in advance and cached. The interesting part is the following step, which turns that data structure into HTML output.
This post will explain how Moya renders templates, by implementing a new template system that works the same way.
Let's render the following template:
<h1>Hobbit Index</h1>
<ul>
{% for hobbit in hobbits %}
    <li{% if hobbit==active %} class="active"{% endif %}>
        {hobbit}
    </li>
{% endfor %}
</ul>
This is somewhat similar to a Django or Moya template. It generates an unordered HTML list of hobbits, one of which has the attribute class="active" on its <li>. You can see there is a loop and a conditional in there.
The tokenizer scans the template and generates a hierarchical data structure of text and tag tokens (markup between {% and %}). Tag tokens consist of parameters extracted from the tag plus child nodes (e.g. the tokens between {% for %} and {% endfor %}).
I'm going to leave the tokenizer functionality as an exercise for the reader (sorry, I hate that too). We'll assume that we have implemented the tokenizer, and that the end result is a data structure that looks like this:
[
    "<h1>Hobbit Index</h1>",
    "<ul>",
    ForNode(
        {"src": "hobbits", "dst": "hobbit"},
        [
            "<li",
            IfNode(
                {"test": "hobbit==active"},
                [' class="active"']
            ),
            ">",
            "{hobbit}",
            "</li>",
        ]
    ),
    "</ul>"
]
Essentially this is a list of strings or nodes, where a node can contain further nested strings and other nodes. A node is defined as a class instance that handles the functionality of a given tag, i.e. IfNode for the {% if %} tag and ForNode for the {% for %} tag.
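As a rough idea of what the omitted tokenizer might look like, here is a minimal regex-based scanner (a sketch, not Moya's actual code). It only produces a flat stream of strings and (tag, params) tuples; matching each tag with its end-tag to build the hierarchical structure above is the part genuinely left to the reader:

```python
import re

# Matches {% ... %} tags; everything else is literal text.
TAG_RE = re.compile(r"\{%\s*(.*?)\s*%\}", re.DOTALL)

def tokenize(source):
    """Split template source into text strings and (tagname, params) tuples."""
    tokens = []
    pos = 0
    for match in TAG_RE.finditer(source):
        if match.start() > pos:
            tokens.append(source[pos:match.start()])
        # e.g. "for hobbit in hobbits" -> ("for", "hobbit in hobbits")
        tokens.append(tuple(match.group(1).split(None, 1)))
        pos = match.end()
    if pos < len(source):
        tokens.append(source[pos:])
    return tokens

print(tokenize("<li>{% if x %}yes{% endif %}</li>"))
# -> ['<li>', ('if', 'x'), 'yes', ('endif',), '</li>']
```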
Nodes have the following trivial base class, which stores the parameters and the list of children:
class Node(object):
    def __init__(self, params, children):
        self.params = params
        self.children = children
Nodes also have an additional method, render, which takes a mapping of the data we want to render (the context). This method should be a generator, which may yield one of two things: either strings containing output text, or an iterator that yields further nodes. Let's look at the IfNode first:
class IfNode(Node):
    def render(self, context):
        test = eval(self.params['test'], globals(), context)
        if test:
            yield iter(self.children)
The first thing the render method does is get the test parameter and evaluate it with the data in the context. If the result of that test is truthy, the render method yields an iterator of its children. Essentially all this node object does is render its children (i.e. the template code between {% if %} and {% endif %}) if the test passes.
The ForNode is similar; here's the implementation:
class ForNode(Node):
    def render(self, context):
        src = eval(self.params['src'], globals(), context)
        dst = self.params['dst']
        for obj in src:
            context[dst] = obj
            yield iter(self.children)
The ForNode render method iterates over each item in a sequence and assigns the value to an intermediate variable. It also yields an iterator of its children on each pass through the loop, so the code inside the {% for %} tag is rendered once per item in the sequence.
Because we are using generators to handle the state for control structures, we can keep the main render loop free from such logic. This makes the code that renders the template trivially easy to follow:
def render(template, **context):
    output = []
    stack = [iter(template)]
    while stack:
        node = stack.pop()
        if isinstance(node, basestring):
            output.append(node.format(**context))
        elif isinstance(node, Node):
            stack.append(node.render(context))
        else:
            new_node = next(node, None)
            if new_node is not None:
                stack.append(node)
                stack.append(new_node)
    return "".join(output)
The render loop manages a stack of iterators, initialized with the template data structure. Each pass through the loop it pops an item off the stack. If that item is a string, it performs a string format operation with the context data. If the item is a Node, it calls the render method and pushes the resulting generator on to the stack. When the stack item is an iterator (such as a generator created by Node.render), it gets one value from the iterator and pushes both back on to the stack, or discards the iterator if it is empty.
In essence, the inner loop is running the generators and collecting the output. A more naive approach might have the render methods also rendering their children and returning the result as a string. Using generators frees the nodes from having to build strings. Generators also makes error reporting much easier, because exceptions won't be obscured by deeply nested render methods. Consider a node throwing an exception inside a for loop; if ForNode.render was responsible for rendering its children, it would also have to trap and report such errors. The generator system makes error reporting simpler, and confines it to one place.
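Assembled into a single runnable script (updated here for Python 3, where basestring becomes str), the node classes and render loop above plus a usage example look like this:

```python
# The Node classes and render loop from the post, made runnable on Python 3
# (basestring -> str), with a usage example at the bottom.
class Node:
    def __init__(self, params, children):
        self.params = params
        self.children = children

class IfNode(Node):
    def render(self, context):
        # Render children only if the test expression is truthy.
        if eval(self.params["test"], {}, context):
            yield iter(self.children)

class ForNode(Node):
    def render(self, context):
        src = eval(self.params["src"], {}, context)
        dst = self.params["dst"]
        for obj in src:
            context[dst] = obj  # assign the loop variable
            yield iter(self.children)

def render(template, **context):
    output = []
    stack = [iter(template)]
    while stack:
        node = stack.pop()
        if isinstance(node, str):
            output.append(node.format(**context))
        elif isinstance(node, Node):
            stack.append(node.render(context))
        else:  # an iterator: take one value, push both back
            new_node = next(node, None)
            if new_node is not None:
                stack.append(node)
                stack.append(new_node)
    return "".join(output)

template = [
    "<h1>Hobbit Index</h1>",
    "<ul>",
    ForNode(
        {"src": "hobbits", "dst": "hobbit"},
        [
            "<li",
            IfNode({"test": "hobbit==active"}, [' class="active"']),
            ">",
            "{hobbit}",
            "</li>",
        ],
    ),
    "</ul>",
]

print(render(template, hobbits=["Bilbo", "Frodo"], active="Frodo"))
# -> <h1>Hobbit Index</h1><ul><li>Bilbo</li><li class="active">Frodo</li></ul>
```

Note how the string "{hobbit}" is only formatted when it is popped off the stack, by which time ForNode has already placed the loop variable in the context.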
There is a very similar loop at the heart of Moya's template system. I suspect the main reason that Moya templates are moderately faster than Django's is this lean inner loop. See this GitHub gist for the code from this post. You may also find Moya's template implementation interesting.
http://www.willmcgugan.com/tag/django/
BACHELOR OF ENGINEERING IN COMPUTER SCIENCE & ENGINEERING
Submitted to: Miss Ayesha
Submitted By:
First and foremost, I would like to take this opportunity to thank our mentor Miss Ayesha for her guidance and advice on this project. I also want to thank my group members and friends, who generously shared their information to help complete this project successfully. Last but not least, I am very grateful to our college, lecturers and friends, who gave us enough time to complete this project, and to my friends and classmates who helped me a lot along the way.
Thank you.

ABSTRACT
The main objective of this project is to develop a home automation system, using an Arduino board with a Wi-Fi module, that is remotely controlled by an Android smart phone. As technology advances, houses are also getting smarter. Modern houses are gradually shifting from conventional switches to centralized control systems involving remote-controlled switches. This paper presents a low-cost, flexible and reliable home automation system with additional security, using an Arduino microcontroller with IP connectivity through local Wi-Fi, allowing an authorized user to access and control devices remotely with a smart phone application. The proposed system is server independent and uses the Internet of Things to control appliances ranging from industrial machines to consumer goods. The user can also control devices through a web browser, smart phone or IR remote module.
To demonstrate the effectiveness and feasibility of this system, this paper presents a home automation system using an Arduino UNO microcontroller and an ESP8266-01 as the connectivity module. It helps the user control various appliances such as lights, fans and TVs, and can take decisions remotely based on sensor feedback. We have tested our system through experiments conducted under various environmental conditions.
Key words and phrases: Arduino Uno controller; Internet of Things (IoT); ESP8266-01; Wi-Fi network; home automation system.

CHAPTER 1:- INTRODUCTION
While the cost of living is going up, there is a growing focus on using technology to lower those costs. With this in mind, the Smart Home project allows the user to build and maintain a house that is smart enough to keep energy use down while providing more automated applications. A smart home takes advantage of its environment and allows seamless control whether the user is present or away. With a home that has this advantage, you can know that your home is performing at its best in terms of energy.

Internet of Things (IoT) devices not only control but also monitor the electronic, electrical and mechanical systems used in various types of infrastructure. These devices, which are connected to a cloud server, are controlled by a single user (the admin), and notifications are transmitted to all authorized users connected to that network. Various electronic and electrical devices are connected and controlled remotely through different network infrastructures. Operating switches through a web browser on a laptop or smart phone, or some other smart technique, simply removes the hassle of operating a switch manually. Nowadays, although smart switches are available, they prove to be very costly, and they also require additional devices such as a hub or switch. With the rapid change in wireless technology, several connectivity options are available in the market which serve as the communicating medium between the device and the microcontroller: Bluetooth, Wi-Fi, ZigBee, Z-Wave and NFC all serve this purpose, and RF and ZigBee are used in most wireless networks. In this project we have taken the ESP8266-01 Wi-Fi module, programmed through an Arduino UNO, to control various devices. The rest of this paper is organized as follows: Section II provides an overview of the system.
The hardware design is explained in Section III, Section IV discusses the software design, and experimental results are discussed in Section V. The paper concludes by looking at future research and the recommendations required to make the system more effective.

CHAPTER 2:- SOFTWARE REQUIREMENT SPECIFICATION
In this home automation project circuit, pins 10 and 11 of the Arduino are connected to pins TXD and RXD of the Bluetooth module, respectively, as shown in Fig. 6. Pins GND and VCC of the Bluetooth module are connected to GND and +3.3V of the Arduino board, respectively.
ARDUINO SOFTWARE:-
Arduino IDE is an open-source software program that allows users to write and upload code within a real-time work environment. As this code can thereafter be stored in the cloud, it is often utilized by those looking for an extra level of redundancy. The system is fully compatible with any Arduino software board.

Arduino IDE can be run on Windows, Mac and Linux operating systems. The majority of its components are written in JavaScript for easy editing and compiling. While its primary purpose is writing code, there are several other features worth noting. It is equipped with a means to easily share details with other project stakeholders, users can modify internal layouts and schematics when required, and there are in-depth help guides which prove useful during the initial installation process. Tutorials are likewise available for those who might not have much experience with the Arduino framework.
BLYNK APP.:-
Blynk is a platform with iOS and Android apps to control Arduino, Raspberry Pi and the like over the Internet. It is a digital dashboard where you can build a graphic interface for your project by simply dragging and dropping widgets.

It is really simple to set everything up, and you can start tinkering in less than five minutes. Blynk is not tied to a specific board or shield; instead, it supports hardware of your choice. Whether your Arduino or Raspberry Pi is linked to the Internet over Wi-Fi, Ethernet or the ESP8266 chip, Blynk will get you online and ready for the Internet of Your Things.
HARDWARE:-
Arduino:- Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It is intended for artists, designers, hobbyists and anyone interested in creating interactive objects or environments.

The Arduino Uno is a microcontroller board based on the ATmega328P (see its datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz quartz crystal, a USB connection, a power jack, an ICSP header and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Uno differs from all preceding boards in that it does not use the FTDI USB-to-serial driver chip. Instead, it features the ATmega16U2 (ATmega8U2 up to version R2) programmed as a USB-to-serial converter.
ESP8266:- The ESP8266 Wi-Fi module is a self-contained SoC with an integrated TCP/IP protocol stack that can give any microcontroller access to your Wi-Fi network. The ESP8266 is capable of either hosting an application or offloading all Wi-Fi networking functions from another application processor. Each ESP8266 module comes pre-programmed with AT command set firmware, meaning you can simply hook it up to your Arduino and get about as much Wi-Fi capability as a Wi-Fi shield offers, right out of the box. The ESP8266 is an extremely cost-effective board with a huge and ever-growing community. The module has enough on-board processing and storage capability to be integrated with sensors and other application-specific devices through its GPIOs, with minimal up-front development and minimal loading at runtime. Its high degree of on-chip integration allows for minimal external circuitry, and the whole solution, including the front-end module, is designed to occupy minimal PCB area. The ESP8266 supports APSD for VoIP applications and Bluetooth co-existence interfaces; it contains a self-calibrated RF front end allowing it to work under all operating conditions, and requires no external RF parts. There is an almost limitless fountain of information available for the ESP8266, provided by amazing community support.
Relay board:- A relay is an electrical device generally used to control high voltages using a very low voltage as input. It consists of a coil wrapped around a pole and two small metal contacts (nodes) used to close the circuit. One of the nodes is fixed and the other is movable. Whenever electricity is passed through the coil, it creates a magnetic field that attracts the moving node towards the static node, completing the circuit. So, just by applying a small voltage to power the coil, we can complete the circuit for the high voltage. Also, as the static node is not physically connected to the coil, there is very little chance that the microcontroller powering the coil gets damaged if something goes wrong.
This USB-to-TTL converter combines the USB-232-1 (USB to single RS232 adapter) and the TTL-232-1 (port-powered RS232/TTL converter), allowing you to convert USB to TTL/CMOS-compatible levels and vice versa. It can be used to set up the APC220 radio data module (SKU: TEL0005), and as an STC microcontroller program downloader.
APPLIANCES:-
Three bulbs of three different colors, i.e. white, red and blue, and a plastic fan.
Jumper wires: A jumper.

CHAPTER 3:- ARCHITECTURE DIAGRAMS
Above is a unified modeling language (UML) diagram of a basic home automation system, which consists of a transceiver used to send a signal that is received by a receiver. It also includes a controller which controls the basic objects connected in the home automation system: an intrusion sensor, a fire sensor, basic light bulbs and a fan.
Fig. 2 is a basic block diagram of a home automation system. It uses an Arduino Uno connected with a Wi-Fi module, the ESP8266. With the help of the cloud and the Blynk app, this home automation system can be easily controlled from an Android device, i.e. the user's mobile phone.

Fig. 3: An open standard protocol for a home automation system
Description:
This project is about how one can connect an electric bulb (or any device) to an Arduino Uno using a relay module. It also covers connecting the Arduino with Android devices and then remotely switching the device off/on.
Relays
One can control high-voltage electronic devices using relays. A relay is actually a switch which is electrically operated by an electromagnet. The electromagnet is activated with a low voltage, for example 5 volts from a microcontroller, and it pulls a contact to make or break a high-voltage circuit.
This project uses the HL-52S 2-channel relay module, which has 2 relays rated at 10A @ 250V and 125V AC, and 10A @ 30V and 28V DC. The high-voltage output connector has 3 pins: the middle one is the common pin and, from the markings, one of the two other pins is for the normally open connection and the other for the normally closed connection.
Steps to follow:
Step 1: Connect the Arduino to your system with the USB cable.
Step 2: Connect the ground pin of the Arduino to the ground pin of the relay module, the VCC pin on the relay module to 5V on the Arduino, and finally pin 7 on the Arduino to IN1 on the relay module.
Step 3: Upload the code given below to the Arduino and switch on the current supply of the bulb through the circuit board.

Steps to be followed on the Android phone:
Step 1: Install the “Blynk” app from the Google Play store and click on the “Create New” button.
Step 2: Name your project and select your IoT board (Arduino), then click on “Email” and finally on “Create”. Remember to change the Auth Token in the code to the one sent to your mail by Blynk.
Step 3: Add a button on the screen and long-press it to configure it.
Step 4: Name the button “Bulb”, change the color and select the output as D7, i.e. digital pin 7.
Step 5: Click on the play button at the top left of the screen. The project is all set, and now one can control the electric bulb by pressing the Bulb button on the Android phone.
ARDUINO CODE:-
/*************************************************************
 * For Windows:
 * 1. Open cmd.exe
 * 2. cd C:\blynk-library\scripts
 * 3. blynk-ser.bat -c COM4
 * 4. Start blynking! :)
 *************************************************************/
#include <SoftwareSerial.h>
#include <BlynkSimpleStream.h>

char auth[] = "YourAuthToken"; // placeholder: paste the token Blynk e-mails you

SoftwareSerial DebugSerial(2, 3); // debug console on pins 2 (RX) and 3 (TX)

void setup()
{
  DebugSerial.begin(9600); // debug console
  Serial.begin(9600);      // hardware serial carries Blynk traffic over USB
  Blynk.begin(auth, Serial);
}

void loop()
{
  Blynk.run();
}
Fig.1
Fig 2 Fig 3
Figs. 1, 2 and 3 are screenshots of the Arduino code used in this project. This code is central to the project: it gives the user control over the home automation system by providing access to the Blynk app, with which the user can send commands to the appliances (three different bulbs and a plastic fan in this case). The image above shows the first step of executing the batch file; a batch file is a computer file which contains a list of instructions to be carried out in turn.

In the screenshot above, the port that is being used (or will be used) is entered in the terminal.

In the screenshot above, the serial USB batch file has executed successfully and the Blynk app gains access to the Arduino code, which the user then uses to operate the desired appliances.
The screenshot below shows the Blynk app that we are using to control the appliances. After the Arduino code executes successfully, it gives the user full control. As you can see in the screenshot, there is an ON/OFF button with which we can turn the respective light on or off. As is evident from the two figures above, when the ON button in the Blynk app is tapped the white bulb switches on, and when the OFF button is tapped it switches off. In this project we have used three bulbs of different colors; when any switch is tapped, the respective bulb or the fan operates.

CHAPTER 6:- CONCLUSION AND FUTURE SCOPE
In this project, a novel architecture for a low-cost and flexible home control and monitoring system using an Android-based smart phone is proposed and implemented. The proposed architecture utilizes a micro web server and Bluetooth communication as an interoperable application layer for communicating between the remote user and the home devices. Any Android-based smart phone with built-in support for Wi-Fi can be used to access and control the devices at home. When a Wi-Fi connection is not available, mobile cellular networks such as 3G or 4G can be used. Voice control can rely on the phone's built-in recognition engine, thus eliminating the need for an external voice recognition module. This method of controlling such applications is referred to as automation. The experimental setup we designed focuses on controlling different home appliances, providing 100% efficiency. Due to advancements in technology, Wi-Fi networks are easily available in all places, such as homes, office buildings and industrial buildings, so the proposed wireless network is easily controlled using any Wi-Fi network. The wiring cost is reduced, since less wiring is required for the switches. This also eliminates power consumption inside the building when the loads are in the off condition. The system is also platform independent, allowing any web browser on any platform to connect to the ESP8266-01. The system is fully functional through the Android application known as “ESP8266 WiFi Control”. The delay to turn ON is 3 s and to turn OFF is 2 s for any load.
FUTURE SCOPE:-
2018 holds even more promise for the smart home industry, as devices like Google Home, Alexa andAmazon Echo become more commonplace and artificial intelligence becomes more sophisticated.
We have shared our digital footprint for convenience. With smart home technology, we are sharing our physical footprint. It is not a matter of if but when these systems will be compromised, and the consequences could be much more severe than lost social security numbers. Addressing security and privacy will become a fundamental concern that will shape this industry.

2. Integration of Smart Home Devices
Integration will make or break smart home technology. Navigating goofy AI misunderstandings for 12 appliances and the front door is not the way of the future. But can smart homes make sure you remembered to turn off all the lights? Lock up? Deactivate alarms upon recognizing your face? I believe we will see more integration that supports homeowners in 2018.
I'm wrapping up repairs and renovations on an investment property, and we opted to install a number of Nest and Ring products to better secure our investment. The video surveillance is great, but I can see AI being used to automate threat detection and perhaps more proactively alert us if something goes awry. This would revolutionize the human aspect of remote video monitoring.
Homeowners will like the idea of more cool ways to control their homes. Surveillance has become more necessary to combat crime, as more people work from home and want to protect their physical and intellectual property. Appliances could also be a focus, since people would like their appliances to take on more of the workload.
In 2017, the majority of applications revolved around security and thermostats, and the devices did not interoperate. In 2018, smart home device makers will take a platform approach, the devices will interoperate, and new use cases will emerge, such as appliance diagnostics, energy conservation and the prevention of major damage during natural disasters.
Sharing homeowners' data with businesses will probably be the next big thing in smart home technology. Having your fridge order the food you need, or setting the lights and preferred temperature for your arrival, is coming soon. The data that you share with smart devices will be of great interest to the companies that build such products.
With more and more smart home devices entering the market, there is an opportunity for forward-thinking companies to use customer service as a differentiator. An IoT environment can present a number of challenges for consumers, ranging from basic troubleshooting to privacy concerns. Companies that are innovative and knowledgeable about delivering customer service excellence will stand out.
We'll see a proliferation of integrated platform solutions from big players in tech. Amazon will offer in-home food delivery straight to your fridge, leveraging its smart home platform. However, security will be a concern; a customer's home could be robbed by a contractor. I also see a future where passwords are leaked or homes get hacked, and that's something the big players need to plan for.
I'm hoping for some real progress on standards. The smart home market has huge potential, but it's still too fragmented. Consumers shouldn't have to think about whether they want to invest in Nest, Amazon's Echo line or products that support Apple's HomeKit. In 2018, I expect to see greater cross-compatibility and less focus on platform lock-in.
I think we're going to see more and more smart kitchen gadgets come on the market, such as rice cookers that are connected to Alexa, smart crockpots and integrated apps. We'll be able to ask Alexa how much time is left on the device, or control them from our smartphones at work.
Naturally, smart home tech will continue to become more accessible and inexpensive to themainstream. As consumers become accustomed to the conveniences that come with smart tech, theywill begin to seek out these efficiencies outside of the home. Next year, we’re likely to see an uptickin commercial smart building tech, particularly in offices seeking to adapt to more mobile workplacetrends.
As more technology and innovations are brought to the market, automation will make the homeexperience simpler and more pleasant. Next year will see an increase in the gadgets released in theIoT sphere. However, as this technology is relatively new, the testing phase will see thecleaning out of multiple products that are replaced by better alternatives.
Home technologies will integrate into so much more of our daily lives. Voice control of technologiesthat are included in your phone, TV, home audio and even car dashboard will be commonplace by theend of 2018. Voice is going to be the breakthrough advancement that really allows thesetechnologies to become ubiquitous. REFERENCES1. K. Venkatesan and Dr. U. Ramachandraiah, Networked Switching and PolymorphingControl of Electrical Loads with Web and Wireless Sensor Network, 2015 International Conference on Robotics, Automation, Control and Embedded Systems (RACE), Chennai, (2015), 1-9.2. ShopanDey,Ayon Roy and SandipDas, Home Automation Using Internet of Thing , IRJET, 2(3) (2016),1965-1970.
12. R. Piyare and S.R. Lee, Smart home-control and monitoring system using smart phone, The 1st International Conference on Convergence and its Application, 84, (2013), 83-86.
https://de.scribd.com/document/404314580/report-2-docx
"Raymond Hettinger" python@rcn.com writes:
In Py2.3, __getitem__ conveniently supports slices for builtin sequences: 'abcde'.__getitem__(slice(2,4))
For user defined classes to emulate this behavior, they need to test the index argument to see whether it is a slice and then loop over the slice indices like this:
class SquaresToTen:
    """Acts like a list of squares but computes only when needed"""

    def __len__(self):
        return 11

    def __getitem__(self, index):
        if isinstance(index, slice):
            return [x**2 for x in range(index.start, index.stop, index.step)]
You can spell that
if isinstance(index, slice): return [x**2 for x in range(*index.indices(11))]
and, as a bonus, it'll work more often <wink> (consider "SquaresToTen()[7::-2]", for example).
This could be simplified somewhat by making slices iterable so that the __getitem__ definition looks more like this:
def __getitem__(self, index):
    if isinstance(index, slice):
        return [x**2 for x in index]
    else:
        return index**2
However to make omitted slice places work, you need to pass in the length of the sequence, so I don't think this can fly.
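For reference, the pieces above combine into a runnable sketch of the indices()-based version (Python 3 syntax; behavior follows the thread's description):

```python
class SquaresToTen:
    """Acts like a list of squares 0..10 but computes only when needed."""

    def __len__(self):
        return 11

    def __getitem__(self, index):
        if isinstance(index, slice):
            # indices() clips the slice to the sequence length and fills in
            # omitted start/stop/step, so s[7::-2] works too.
            return [x ** 2 for x in range(*index.indices(len(self)))]
        return index ** 2


s = SquaresToTen()
print(s[2:4])    # [4, 9]
print(s[7::-2])  # [49, 25, 9, 1]
```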
Cheers, M.
-- We've had a lot of problems going from glibc 2.0 to glibc 2.1. People claim binary compatibility. Except for functions they don't like. -- Peter Van Eynde, comp.lang.lisp
https://mail.python.org/archives/list/python-dev@python.org/thread/F2SR5XWOVMUP6MLJJGAZBASYWWWMME25/
Roger Larsson wrote:
> On Thursday 12 April 2001 23:52, Andre Hedrick wrote:
> > Okay but what will be used for a base for hardware that has critical
> > timing issues due to the rules of the hardware?
> >
> > I do not care but your drives/floppy/tapes/cdroms/cdrws do:
> >
> > /*
> >  * Timeouts for various operations:
> >  */
> > #define WAIT_DRQ       (5*HZ/100)  /* 50msec - spec allows up to 20ms */
> > #ifdef CONFIG_APM
> > #define WAIT_READY     (5*HZ)      /* 5sec - some laptops are very slow */
> > #else
> > #define WAIT_READY     (3*HZ/100)  /* 30msec - should be instantaneous */
> > #endif /* CONFIG_APM */
> > #define WAIT_PIDENTIFY (10*HZ)     /* 10sec - should be less than 3ms (?),
> >                                       if all ATAPI CD is closed at boot */
> > #define WAIT_WORSTCASE (30*HZ)     /* 30sec - worst case when spinning up */
> > #define WAIT_CMD       (10*HZ)     /* 10sec - maximum wait for an IRQ to happen */
> > #define WAIT_MIN_SLEEP (2*HZ/100)  /* 20msec - minimum sleep time */
> >
> > Give me something for HZ or a rule for getting a known base so I can have
> > your storage work and not corrupt.
>
> Wouldn't it make sense to define these in real world units?
> And to use that to determine requested accuracy...
>
> Those who wait for seconds will probably not have a problem with up to (half)
> a second longer wait - or...?
> Those in range of the current jiffie should be able to handle up to one
> jiffie longer...
>
> Requesting a wait in ms gives you ms accuracy...

The POSIX standard seems to point to a "CLOCK" for this sort of thing. A "CLOCK" has a resolution. One might define CLOCK_10MS, CLOCK_1US, or CLOCK_1SEC, for example. Then the request for a delay would pass the CLOCK to use as an additional parameter. Of course, CLOCK could also wrap other characteristics of the timer. For example, the jiffies variable in the system could be described as a CLOCK which has a resolution of 10 ms and is the uptime. Another CLOCK might return something related to GMT or wall time (which, by the way, is allowed to slip around a bit relative to uptime to account for leap seconds, daylight time, and even the date command).

Now to make this real for the kernel we would need to define a set of CLOCKs, to meet the kernel as well as the user needs. POSIX timers require the CLOCK construct and don't limit it very much. Once defined to meet the standard, it is easy to extend the definition to fit the apparent needs. It is also easy to make the definition extensible and we (the high-res-timers project) intend to do so.

George
https://lkml.org/lkml/2001/4/15/18
What's an efficient way, given a NumPy matrix (2-d array), to return the min/max n values and their indices from the matrix?
import bisect

def n_max(arr, n):
    res = [(0, (0, 0))] * n
    for y in xrange(len(arr)):
        for x in xrange(len(arr[y])):
            val = float(arr[y, x])
            el = (val, (y, x))
            i = bisect.bisect(res, el)
            if i > 0:
                res.insert(i, el)
                del res[0]
    return res
Since there is no heap implementation in NumPy, probably your best guess is to sort the whole array and take the last n elements:
import numpy

def n_max(arr, n):
    indices = arr.ravel().argsort()[-n:]
    indices = (numpy.unravel_index(i, arr.shape) for i in indices)
    return [(arr[i], i) for i in indices]
(This will probably return the list in reverse order compared to your implementation - did not check.)
Edit: A more efficient solution that works with newer versions of Numpy is given in this answer
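The linked answer is not reproduced here, but one common faster approach on newer NumPy versions (an assumption about the link's content, offered only as a sketch) uses numpy.argpartition, which selects the top n without fully sorting the array:

```python
import numpy as np

def n_max_argpartition(arr, n):
    # argpartition places the n largest values last in O(size) time,
    # without fully sorting the array.
    flat = arr.ravel()
    idx = np.argpartition(flat, -n)[-n:]
    idx = idx[np.argsort(flat[idx])]  # order the selected top-n ascending
    return [(flat[i], np.unravel_index(i, arr.shape)) for i in idx]

a = np.arange(12).reshape(3, 4)
print(n_max_argpartition(a, 3))
```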
https://codedump.io/share/OSDcsCazCU1L/1/pythonnumpy-efficient-way-to-take-the-minmax-n-values-and-indices-from-a-matrix
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago
#15778 closed Bug (fixed)
Command createsuperuser fails under some system user names
Description
Commands 'createsuperuser' and 'syncdb' cannot create a superuser account at all because of a database error
if the system account username contains an 8-bit character.
It fails when the name is automatically looked up in the database, even if the user wants to type an ASCII username manually.
This is typical for usernames created by Microsoft Windows.
File "C:\Python26\lib\site-packages\django\contrib\auth\management\commands\createsuperuser.py", line 72, in handle
    User.objects.get(username=default_username)
...
File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 234, in execute
    return Database.Cursor.execute(self, query, params)
DatabaseError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.
Versions: Django 1.3, Python 2.6.4 windows, Sqlite3 3.5.9, dbapi 2.4.1
It is easier to fix it once than to circumvent it twice.
The middle part of the patch:
- default_username = getpass.getuser().replace(' ', '').lower()
+ default_username = str(getpass.getuser().decode('ascii', 'ignore')).replace(' ', '').lower()
Attachments (2)
Change History (28)
Changed 6 years ago by
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
To reproduce
"Adrián" and "Júlia" are nice frequent Spain names. Microsoft recommended to enter a localized full user name and full company name in Windows installation guides in the past. Localized user names are still very frequent in our country and I got from my company a preinstalled computer with a localized user name. The user name returned by "getpass.getuser()" by Python on Windows is usually encoded in any of possible one byte character sets, typically 'cp1252' for West Europe.
The most convincing and the only complete test would be to create such account on Windows, but probably you mean by the requirement "test" any possible simplified verification. So, I wrote a short temporary patch to Django:
--- a/django/contrib/auth/management/commands/createsuperuser.py 2011-02-22 12:33:04.000000000 +0100
+++ b/django/contrib/auth/management/commands/createsuperuser.py 2011-04-08 18:51:04.682233473 +0200
@@ -59,6 +59,7 @@
     # Try to determine the current system user's username to use as a default.
     try:
         default_username = getpass.getuser().replace(' ', '').lower()
+        default_username = 'J\xfalia'.lower() # the name 'Julia' with accented 'u' - fails
     except (ImportError, KeyError):
         # KeyError will be raised by os.getpwuid() (called by getuser())
         # if there is no corresponding entry in the /etc/passwd file
startproject, startapp, ENGINE is 'django.db.backends.sqlite3'
python manage.py createsuperuser
Python 2.6
This synthetic test anytime fails. (even on Linux, although all possible usernames are in ascii on Linux)
DatabaseError:
"You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings."
Python 2.5
never fails. (probable because its '_sqlite3.so' does not contains that typical error message string.) It normally displays:
Username (Leave blank to use u'j\xfalia'):
Function getpass.getuser() also depends on the precompiled package.
Python from is the described above, it gets international characters (fails).
Python from Cygwin strips international characters from the username in Windows.
comment:3 Changed 6 years ago by
The arcane error message from sqlite means a non-utf8 bytestring was passed as default_username on the call to
User.objects.get(username=default_username). 'J\xfalia' is the latin1 (same as Windows cp1252 for this case) encoding for "Júlia".
If instead you passed in 'J\xc3\xbalia', the utf-8 encoding, the
User.objects.get call would work. Due to other code in this area it would not be accepted as a username, but you'd get past that exception.
This error from sqlite is new with 2.6, see #7921 for details and the explanation of why you can pass utf-8 encoded bytestrings. Django adapted to the change with 2.6 by installing an adapter to convert all bytestrings passed down to the database to unicode, assuming they have a utf-8 encoding. If they don't and the attempt to decode from utf-8 fails, you will see the error message you are seeing.
If there was some way to know the encoding of the bytestring returned by
getpass.getuser() then the best thing would be to use that known encoding to transform the bytestring into a unicode object.
comment:4 Changed 6 years ago by
Yes, but that was a somewhat different situation.
I would prefer the original proposed solution of stripping international characters from the default_username, because an eventual conversion to unicode would only postpone the problem for later. (Why convert something correctly and display it correctly, only for it to be rejected in the end?)

I think a simple solution can be incorporated in any bugfix release; it need not wait for Django 1.4.

(I know a better solution that strips the accents rather than dropping characters, but it is not important now at all, imho. The encoding is not 'utf-8'. It is sys.getfilesystemencoding(), usually 'mbcs', which is a generic name for the different default Windows encodings of the actual installation.)
comment:5 Changed 6 years ago by
If we are going to fix it, we should make some effort to fix it reasonably. If I were named Júlia I'd find it a bit irritating for the system to suggest a "good" username for me was jlia. It's not that hard to "translate" accented unicode characters to their non-accented ASCII equivalents, using unicodedata.normalize:
>>> import unicodedata >>> yipes = u'¡¢£¥§©ª«¬®¯°±²³µ¶·¹º»¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ' >>> unicodedata.normalize('NFKD', yipes).encode('ascii', 'ignore') 'a 231oAAAAAACEEEEIIIINOOOOOUUUUYaaaaaaceeeeiiiinooooouuuuyy' >>>
Not perfect, but far better than stripping accented characters entirely.
comment:6 Changed 6 years ago by
This is much better. :-) I preferred a simple solution and strong arguments such as example names similar to a member of the community and the person who requested a test :-)
I thought so: Without a patch or with any failing patch the person probably lost several hours, because the simple bypass "./manage.py syncdb --noinput; ./manage.py createsuperuser --username=USERNAME" can be easily missed. With a simple patch he lost only several minutes, because unfortunately only a journalist is able to notice every missing letter. (Maybe a user can not login first.)
I am not sure enough that "getpass.getuser()" returns results in the encoding "sys.getfilesystemencoding()" for every language and version of Windows and Python. Otherwise the conversion fails. I only hope that Microsoft was never so proud to implement the right to left picture alphabets to the home directory paths. We should use some "decode(..., 'ignore')".
I suggest:
default_username = getpass.getuser().decode(sys.getfilesystemencoding(), 'ignore')
default_username = unicodedata.normalize('NFKD', default_username) \
    .encode('ascii', 'ignore').replace(' ', '').lower()
Hopefully the "unicodedata.normalize('NFKD'...)" does not raise an exception, otherwise it should be added to captured exceptions, not only (ImportError, KeyError).
comment:7 Changed 6 years ago by
Alternative: If some character can not be decoded (maybe bad detected encoding perhaps for a non-latin alphabet), better is to give no suggestions:
  try:
-     default_username = getpass.getuser().replace(' ', '').lower()
- except (ImportError, KeyError):
+     default_username = getpass.getuser().decode(sys.getfilesystemencoding())
+     default_username = unicodedata.normalize('NFKD', default_username) \
+         .encode('ascii', 'ignore').replace(' ', '').lower()
+ except (ImportError, KeyError, UnicodeDecodeError):
+     # UnicodeDecodeError - We are not sure what a non-latin Windows version will do.
comment:8 Changed 6 years ago by
Changed 6 years ago by
comment:9 Changed 6 years ago by
#15162 has similar guess of the encoding:
locale.getdefaultlocale()[1]
- It returns more explicit names e.g 'cp1250', 'cp1252' than sys.getfilesystemencoding() which returns 'mbcs'.
- Decoded characters are the same for both methods. Encoding 'cp1250' is a subset defined for 251 characters, 'mbcs' is defined for all 256 characters.
Conclusion: I think that better is:
locale.getdefaultlocale()[1]
Characters from non-default encoding are replaced by question marks before any conversion by 'getpass.getuser()'. E. g. 'ů' on West Europe Windows or vice-versa 'ù' on East Europe Windows. Better is no suggestion than a bad one. Therefore I add:
+ if not RE_VALID_USERNAME.match(default_username): + default_username = ''
My final patch createsuperuser.diff is ready.
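Outside the ticket, the normalization step of that final patch can be sketched as a self-contained function (Python 3 shown for runnability; `suggest_username` is an illustrative name, and the regex is assumed to mirror the patch's RE_VALID_USERNAME check):

```python
import re
import unicodedata

# Assumption: this mirrors Django's username validation pattern at the time.
RE_VALID_USERNAME = re.compile(r'^[\w.@+-]+$')

def suggest_username(system_username):
    """Turn a (possibly accented) OS user name into an ASCII suggestion,
    or '' if nothing safe remains -- a sketch of the patch's logic."""
    name = (unicodedata.normalize('NFKD', system_username)
            .encode('ascii', 'ignore').decode('ascii')
            .replace(' ', '').lower())
    # Better no suggestion than an invalid one.
    return name if RE_VALID_USERNAME.match(name) else ''

print(suggest_username('J\xfalia'))             # julia
print(suggest_username('Adri\xe1n Garc\xeda'))  # adriangarcia
```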
comment:10 Changed 6 years ago by
Mister Gaynor, please write the unit test for this patch. Thank you.
comment:11 Changed 6 years ago by
The test can only be formal and would be very complicated, because it is not easy to modify the output of getpass.getuser() and sys.getfilesystemencoding(), or even to simulate the type of operating system.
This is similar requirement like julien asked tests replicating this bug. I would advice good manual testing and checkout.
Do you really want to rely on tests like this?
self.assertEqual('ABC'.lower(), 'abc')
comment:12 Changed 6 years ago by
I manually tested the polite refusal (expecting no suggested username) for slightly obscure cases on Greek three weeks ago. (Greek could not be tested without buying Greek Windows and installing it totally blind. A Greek keyboard was not enough, because Greek characters are replaced by ASCII '?' by getuser() on non-Greek Windows, which is not the real case.) Do test only 'Júlia' ('J\xfalia') or 'Adrián'.

Believe me, testing is much more complicated than good fixing. I read a piece of Windows documentation due to testing, but Django development is not reading how the business plan of Microsoft has blemished Windows between the lines.
comment:13 Changed 6 years ago by
What about using the django.utils.encoding.smart_str() function to deal with the encoding of a non-ASCII username?
comment fixing the 'ignore' positional argument to 'strict'.
comment removing the 'ignore' positional argument; if it is removed, that defaults to 'strict', which catches the UnicodeError -- which is what we want.
comment:16 Changed 6 years ago by
DAMN! I was wrong! Please don't follow my instructions.
comment:17 Changed 6 years ago by
The patch is very good. Please apply it.
comment:18 Changed 6 years ago by
I have created a new user account on my Windows 7 operating system and have added the omega Greek character into the user account name. My account was named userΩtest. And when I reached the point of having to type "yes" at the Django createsuperuser prompt, the prompt was like this:
Username (leave blank to use 'userotest'). So the patch worked. The Greek letter "Ω" (omega) was successfully converted to an "o" character. This patch is bulletproof. You could mock getpass, but what is the point? Please incorporate this patch already.
comment:19 Changed 6 years ago by
Yes, the createsuperuser.diff patch is the final patch and is tested with very well results and should to be commited to the trunk ASAP.
comment:20 Changed 6 years ago by
Freddie, we're not putting it in without unit tests.
comment:21 Changed 6 years ago by
(that comment was me, and the previous ones were
Freddie__ from irc, who has some entitlement issues to work through)
comment:22 Changed 6 years ago by
I am finding the repeated demands to get this in NOW off-putting but I'm also a bit puzzled by the insistence on tests -- testing behavior that requires a current system username value that contains a non-ascii char seems a bit outside of the scope of where we would normally require tests? For this particular case I would have trusted the reports of people who can recreate the issue, and manual review of the code (which I don't have time for at the moment).
comment:23 follow-up: 24 Changed 6 years ago by
comment:24 Changed 6 years ago by
Replying to SmileyChris:
What's up with the
:params: and
:returns: stuff in the docstrings?
comment:25 Changed 6 years ago by
Thanks for fixing. I acknowledge that it is satisfactorily closed.
It is not important to catch UnicodeDecodeError in get_default_username in [16182]. It can be safely removed, which improves readability:

def get_default_username(check_db=True):
    ...
    default_username = get_system_username()
-   try:
-       default_username = unicodedata.normalize('NFKD', default_username)\
-           .encode('ascii', 'ignore').replace(' ', '').lower()
-   except UnicodeDecodeError:
-       return ''
+   default_username = unicodedata.normalize('NFKD', default_username)\
+       .encode('ascii', 'ignore').replace(' ', '').lower()
    if not RE_VALID_USERNAME.match(default_username):
        return ''
Catching UnicodeDecodeError was only important in the original patch due to decode(locale.getdefaultlocale...), not due to the other commands, which can be demonstrated:
# Verification that `unicodedata.normalize` does not raise anything
# for the whole range of valid unicode characters. It is also clear from the documentation.
import unicodedata
for i in range(0x110000):
    dummy = unicodedata.normalize('NFKD', unichr(i)).encode('ascii', 'ignore')
- Answer to objection comment:14: Proving or disproving the possibility of UnicodeDecodeError after getpass.getuser().decode is not easy, because the important things are outside of free software. It is safer to catch it.
- Objection comment:13 Why? Because the proposed solution would not work at all.
- Comments comment:11 and comment:12 were me - the patch author and reporter.
comment:26 Changed 6 years ago by
Thanks for the follow-up hynekcer. I'll just leave it in there since it doesn't hurt anyone (and it'd be better to still catch a decoding issue on some strange setup than choke).
Could you provide some tests replicating this bug?
https://code.djangoproject.com/ticket/15778
Java Interview Questions and Answers
Table of Contents
It's a no-brainer that Java is one of the leading programming options for bagging a lucrative job. After all, the class-based, general-purpose, object-oriented programming language is one of the most widely used programming languages in the world.
With a plethora of great features, the programming language is preferred not only by the seasoned experts but also pursued by those new to the programming world. So, here are top Java interview questions and answers that will help you bag a Java job or, at the very least, enhance your learning. The Java interview questions are recommended for both beginners and professionals as well as for Software Developers and Android Applications Developers.
Top Java Interview Questions and Answers
We also recommend you to brush up your Java skills with this Java Cheat Sheet before starting your Java interview preparation. This article is only relevant for Core Java Interview.
The article has been divided into different sections and categories for your organized preparation for the interview into the following categories:
Basic Java Interview Questions
Question: What is Java?
Answer: Java is an object-oriented, high-level, general-purpose programming language originally designed by James Gosling and further developed by the Oracle Corporation. It is one of the most popular programming languages in the world. To know more about what is Java, Click here and know all the details of Java, Features, and Component.
Question: Explain about Java Virtual Machine?
Answer: JVM is a program that interprets the intermediate Java byte code and generates the desired output. It is because of byte code and JVM that programs written in Java are highly portable.
Question: What are the features of Java?
Answer: Following are the various features of the Java programming language:
- High Performance– Using a JIT (Just-In-Time) compiler allows high performance in Java. The JIT compiler converts the Java bytecode into machine language code, which then gets executed by the JVM.
- Multi-threading– A thread is a flow of execution. The JVM creates a thread which is called the main thread. Java allows the creation of several threads using either extending the thread class or implementing the Runnable interface.
- OOPS Concepts– Java follows various OOPS concepts, namely abstraction, encapsulation, inheritance, object-oriented, and polymorphism
- Platform Independency– Java makes use of the Java Virtual Machine or JVM, which allows a single Java program to operate on multiple platforms without any modifications.
You may want to check out Java features in detail here.
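As a quick illustration of the multi-threading bullet above, here is a minimal sketch showing both ways to create a thread (class and variable names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadDemo {
    static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        // 1) Implementing Runnable (here via a lambda): keeps the class
        //    free to extend something else.
        Thread t1 = new Thread(() -> counter.incrementAndGet());

        // 2) Extending Thread and overriding run().
        Thread t2 = new Thread() {
            @Override public void run() { counter.incrementAndGet(); }
        };

        t1.start();
        t2.start();
        t1.join();  // wait for both threads to finish
        t2.join();

        System.out.println("counter = " + counter.get());
    }
}
```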
Question: How does Java enable high performance?
Answer: In the Just-in-Time compilation, the required code is executed at run time. Typically, it involves translating bytecode into machine code and then executing it directly. For enabling high performance, Java can make use of the Just-In-Time compilation. The JIT compiler is enabled by default in Java and gets activated as soon as a method is called. It then compiles the bytecode of the Java method into native machine code. After that, the JVM calls the compiled code directly instead of interpreting it. This grants a performance boost.
Question: Differentiate between JVM, JRE, and JDK
Answer: The JVM (Java Virtual Machine) is the runtime engine that actually executes Java bytecode. The JRE (Java Runtime Environment) is the JVM plus the core class libraries needed to run Java programs. The JDK (Java Development Kit) is the JRE plus development tools such as the compiler (javac) and debugger, and is what you need to write and compile Java programs.
Question: What is the JIT compiler?
Answer: The JIT compiler runs after the program has started and compiles the bytecode on the fly into a faster form, namely the host CPU's native instruction set. JIT can access dynamic runtime information, whereas an ahead-of-time compiler cannot, so it can make better optimizations, like inlining functions that are used frequently.
Question: Which Java IDE to use, and why?
Answer: A Java IDE is a software that allows Java developers to easily write as well as debug Java programs. It is basically a collection of various programming tools, accessible via a single interface, and several helpful features, such as code completion and syntax highlighting. Codenvy, Eclipse, and NetBeans are some of the most popular Java IDEs.
Question: Java is a platform-independent language. Why?
Answer: Java source code is compiled into platform-neutral bytecode rather than into machine code for a specific CPU. The bytecode is executed by the JVM, and since a JVM implementation exists for each major platform, the same compiled program runs unchanged everywhere ("write once, run anywhere"). The language is platform-independent; the JVM is the platform-dependent part.
Question: Explain Typecasting
Answer: The concept of assigning a variable of one data type to a variable of another data type. It is not possible for the boolean data type.
It is of two types:
- Implicit
- Explicit
Question: Explain different types of typecasting?
Answer:
Different types of typecasting are:
- Implicit: Storing values from a smaller data type to the larger data type. It is automatically done by the compiler.
- Explicit: Storing the value of a larger data type into a smaller data type. This results in information loss:
- Truncation: While converting a value from a larger data type to a smaller data type, the extra data would be truncated.
Let us see the code example:
float f = 3.14f;
int i = (int) f;
After execution, i will contain only 3; the fractional part is truncated when we go from float to int.
- Out of Range: Typecasting cannot represent a value outside the range of the target type; if that happens, data is lost.
Let us understand this:
long l = 123456789;
byte b = (byte) l; // byte does not have the same range as long, so there is loss of data
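The two loss cases above can be seen in one runnable snippet (the class name is illustrative):

```java
public class CastDemo {
    public static void main(String[] args) {
        float f = 3.14f;
        int i = (int) f;       // truncation: the fractional part is dropped
        System.out.println(i); // 3

        long l = 123456789L;
        byte b = (byte) l;     // out of range: only the low 8 bits survive
        System.out.println(b); // 21
    }
}
```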
Questions: Explain access modifiers in Java.
Answer: Access modifiers are predefined keywords in Java that are used to restrict the access of a class, method, constructor, and data member in another class.
Java supports four access modifiers:
- Default
- Private
- Protected
- Public
Question: What are the default values for local variables?
Answer: The local variables are not initialized to any default value, neither primitives nor object references.
OOPS Java Interview Questions
Question: What is Object-Oriented Programming?
Answer: OOPs is a programming paradigm centred around objects rather than functions. It is not a tool or a programming language; it is a paradigm that was designed to overcome the flaws of procedural programming. Many languages follow OOPs concepts; some popular ones are Java, Python, Ruby and more. Some frameworks also follow OOPs concepts; Angular is one such framework.
Question: Could you explain the Oops concepts?
Answer: Following are the various OOPS Concepts:
- Abstraction– Representing essential features without the need to give out background details. The technique is used for creating a new suitable data type for some specific application
- Aggregation– All objects have their separate lifecycle, but ownership is present. No child object can belong to some other object except for the parent object
- Association– The relationship between two objects, where each object has its separate lifecycle. There is no ownership
- Class– A group of similar entities
- Composition– Also called the death relationship, it is a specialized form of aggregation. Child objects don't have a lifecycle. As such, they automatically get deleted if the associated parent object is deleted
- Encapsulation– Refers to the wrapping up of data and code into a single entity. Allows the variables of a class to be only accessible by the parent class and no other classes
- Inheritance– When an object acquires the properties of some other object, it is called inheritance. It results in the formation of a parent-child relationship amongst classes involved. Offers a robust and natural mechanism of organizing and structuring software
- Object– Denotes an instance of a class. Any class can have multiple instances. An object contains the data as well as the method that will operate on the data
- Polymorphism– refers to the ability of a method, object, or variable to assume several forms
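A tiny sketch tying several of these concepts together (encapsulation, inheritance, polymorphism, objects); the classes are illustrative:

```java
class Account {
    private double balance;                        // encapsulation: state is hidden
    void deposit(double amount) { balance += amount; }
    double balance() { return balance; }
    String kind() { return "generic"; }
}

class SavingsAccount extends Account {             // inheritance
    @Override
    String kind() { return "savings"; }            // polymorphism via overriding
}

public class OopsDemo {
    public static void main(String[] args) {
        Account a = new SavingsAccount();          // object: an instance of a class
        a.deposit(100.0);
        System.out.println(a.kind() + ": " + a.balance());
    }
}
```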
Decision Making Java Interview Questions
Questions: Differentiate between break and continue
Answer: break terminates the innermost enclosing loop (or switch) immediately and transfers control to the statement after it, while continue only skips the remainder of the current iteration and proceeds with the next iteration of the loop.
Classes, Objects, and Methods Java Interview Questions
Question: What is an Object?
Answer: An instance of a Java class is known as an object. Two important properties of a Java object are behaviour and state. An object is created as soon as the JVM comes across the new keyword.
Question: Define classes in Java
Answer: A class is a collection of objects of similar data types. Classes are user-defined data types and behave like built-in types of a programming language.
Syntax of a class:
class Sample {
    member variables
    methods()
}
Example of Class:
public class Shape
{
    String shapeName;

    void area()
    {
    }

    void volume()
    {
    }

    void num_sides()
    {
    }
}
Question: Explain what are static methods and variables?
Answer: A class has two sections: one declares variables and the other declares methods, and these are called instance variables and instance methods, respectively. They are termed so because every time a class is instantiated, a new copy of each of them is created.
Variables and methods can be created that are common to all objects and accessed without using a particular object by declaring them static. Static members are also available to be used by other classes and methods.
Question: What do you mean by Constructor?
Answer: A constructor is a method that has the same name as that of the class to which it belongs. As soon as a new object is created, a constructor corresponding to the class gets invoked. Although the user can explicitly create a constructor, it is created on its own as soon as a class is created. This is known as the default constructor. Constructors can be overloaded.
Note: If you explicitly define a constructor that takes parameters, the compiler no longer generates the default no-argument constructor, so if no-argument construction is still needed, that constructor must be defined explicitly as well.
Question: Please explain Local variables and Instance variables in Java.
Answer: Variables that are only accessible to the method or code block in which they are declared are known as local variables. Instance variables, on the other hand, are accessible to all methods in a class. While local variables are declared inside a method or a code block, instance variables are declared inside a class but outside a method. Even when not assigned, instance variables have a value that can be null, 0, 0.0, or false. This isn't the case with local variables that need to be assigned a value, where failing to assign a value will yield an error. Local variables are automatically created when a method is called and destroyed as soon as the method exits. For creating instance variables, the new keyword must be used.
Question: Please explain Method Overriding in Java?
Answer: Method Overriding in Java allows a subclass to offer a specific implementation of a method that has already been provided by its parent or superclass. Method overriding happens if the subclass method and the Superclass method have:
- The same name
- The same argument
- The same return type
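The three conditions above can be sketched in a few lines (the classes are illustrative):

```java
class Animal {
    String sound() { return "..."; }
}

class Dog extends Animal {
    @Override
    String sound() { return "woof"; }  // same name, same arguments, same return type
}

public class OverrideDemo {
    public static void main(String[] args) {
        Animal a = new Dog();
        System.out.println(a.sound()); // the Dog version runs: resolved at runtime
    }
}
```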
Question: What do you mean by Overloading?
Answer: Overloading is the phenomenon when two or more different methods (method overloading) or operators (operator overloading) have the same representation. For example, the + operator adds two integer values but concatenates two strings. Similarly, an overloaded function called Add can be used for two purposes
- To add two integers
- To concatenate two strings
Unlike method overriding, method overloading requires two overloaded methods to have the same name but different arguments. The overloaded functions may or may not have different return types.
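The Add example from the text can be sketched like this (the class name AddUtil is illustrative); the compiler picks the version whose parameter list matches the call:

```java
class AddUtil {
    static int add(int a, int b) {          // adds two integers
        return a + b;
    }

    static String add(String a, String b) { // concatenates two strings
        return a + b;
    }
}

class OverloadDemo {
    public static void main(String[] args) {
        System.out.println(AddUtil.add(2, 3));        // 5
        System.out.println(AddUtil.add("fo", "ur"));  // four
    }
}
```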
Question: What role does the final keyword play in Java? What impact does it have on a variable, method, and class?
Answer: The final keyword in Java is a non-access modifier that applies only to a class, method, or variable. It serves a different purpose based on the context where it is used.
- With a class
When a class is declared as final, then it is disabled from being subclassed i.e., no class can extend the final class.
- With a method
Any method accompanying the final keyword is restricted from being overridden by the subclass.
- With a variable
A variable followed by the final keyword is not able to change the value that it holds during the program execution. So, it behaves like a constant.
Arrays, Strings and Vectors Java Interview Questions
Question: Could you draw a comparison between Array and ArrayList?
Answer: An array necessitates for giving the size during the time of declaration, while an array list doesn't necessarily require size as it changes size dynamically. To put an object into an array, there is the need to specify the index. However, no such requirement is in place for an array list. While an array list is parameterized, an array is not parameterized.
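A minimal side-by-side sketch of the points above:

```java
import java.util.ArrayList;
import java.util.List;

class ArrayVsList {
    public static void main(String[] args) {
        int[] arr = new int[2];   // size fixed at declaration
        arr[0] = 10;              // index must be specified
        arr[1] = 20;

        List<Integer> list = new ArrayList<>();  // parameterized, grows dynamically
        list.add(10);             // no index required
        list.add(20);
        list.add(30);

        System.out.println(arr.length + " " + list.size());
    }
}
```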
Question: Please explain the difference between String, String Builder, and String Buffer.
Answer: String objects are immutable, and string literals are stored in the constant string pool. When a String reference is changed, the old value is not modified or deleted; a new object is created, and the old one lingers until garbage collected. For example, if a string holds the value "Old" and is then assigned "New", the original "Old" object is untouched, merely dereferenced. StringBuffer and StringBuilder, by contrast, are mutable: changing the value modifies the same object in place, so the new value replaces the old one. StringBuffer is synchronized (and therefore thread-safe), which makes it slower; StringBuilder offers the same API without synchronization, and hence faster performance.
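A small sketch of immutability versus mutability:

```java
class StringDemo {
    public static void main(String[] args) {
        String s = "Old";
        String s2 = s.concat("New");   // creates a NEW object; s is unchanged
        StringBuilder sb = new StringBuilder("Old");
        sb.append("New");              // mutates the SAME object in place
        System.out.println(s + " " + s2 + " " + sb);
    }
}
```

StringBuffer behaves exactly like the StringBuilder here, except its methods are synchronized.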
Questions: What is String Pool in Java?
Answer: The collection of strings stored in the heap memory refers to the String pool. Whenever a new object is created, it is checked if it is already present in the String pool or not. If it is already present, then the same reference is returned to the variable else new object is created in the String pool, and the respective reference is returned.
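Pool reuse is easy to observe with reference comparison: two identical literals share one pooled object, whereas `new String(...)` forces a fresh heap object.

```java
class PoolDemo {
    public static void main(String[] args) {
        String a = "hello";             // placed in / reused from the pool
        String b = "hello";             // same pooled reference returned
        String c = new String("hello"); // new object, bypasses the pool
        System.out.println((a == b) + " " + (a == c) + " " + a.equals(c));
        // prints: true false true
    }
}
```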
Advance Java Interview Questions
Interfaces and Abstract Classes Java Interview Questions
Question: What do you know about Interface in Java?
Answer: A Java interface is a template that has only method declarations and not method implementations. It is a workaround for achieving multiple inheritance in Java. Some important points worth remembering regarding Java interfaces are:
- A class that implements the interface must provide an implementation for all methods declared in the interface
- All methods in an interface are implicitly public and abstract
- All variables in an interface are implicitly public, static, and final
- Classes do not extend but implement interfaces
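The points above, in a minimal sketch (Shape and Circle are illustrative names):

```java
interface Shape {
    double PI = 3.14159;   // implicitly public static final
    double area();         // implicitly public abstract
}

class Circle implements Shape {        // classes implement, not extend
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() {             // implementation is mandatory
        return PI * r * r;
    }
}

class InterfaceDemo {
    public static void main(String[] args) {
        Shape s = new Circle(1.0);
        System.out.println(s.area());
    }
}
```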
Question: How is an Abstract class different from an Interface?
Answer: There are several differences between an Abstract class and an Interface in Java, summed up as follows:
- Constituents – An abstract class contains instance variables, whereas an interface can contain only constants.
- Constructor and Instantiation – While an interface has no constructor and cannot be instantiated, an abstract class can have a default constructor that is called whenever the concrete subclass is instantiated.
- Implementation of Methods – All classes that implement the interface need to provide an implementation for all the methods contained by it. A class that extends the abstract class, however, doesn't require implementing all the methods contained in it. Only abstract methods need to be implemented in the concrete subclass.
- Type of Methods – An abstract class can have both abstract as well as non-abstract methods. An interface, on the other hand, can have only abstract methods.
Question: Please explain what do you mean by an Abstract class and an Abstract method?
Answer: An abstract class in Java is a class that can't be instantiated. Such a class is typically used for providing a base for subclasses to extend as well as implementing the abstract methods and overriding or using the implemented methods defined in the abstract class. To create an abstract class, it needs to be followed by the abstract keyword. Any abstract class can have both abstract as well as non-abstract methods. A method in Java that only has the declaration and not implementation is known as an abstract method. Also, an abstract method name is followed by the abstract keyword. Any concrete subclass that extends the abstract class must provide an implementation for abstract methods.
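A compact sketch of an abstract class with one abstract and one non-abstract method (Vehicle and Bike are illustrative names):

```java
abstract class Vehicle {
    abstract int wheels();            // declaration only, no body

    String describe() {               // non-abstract method, shared by subclasses
        return "vehicle with " + wheels() + " wheels";
    }
}

class Bike extends Vehicle {
    int wheels() { return 2; }        // concrete subclass must implement
}

class AbstractDemo {
    public static void main(String[] args) {
        // Vehicle v = new Vehicle(); // compile error: cannot instantiate
        Vehicle v = new Bike();
        System.out.println(v.describe());
    }
}
```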
Questions: What is multiple inheritance? Does Java support multiple inheritance? If not, how can it be achieved?
Answer: If a subclass or child class has two parent classes, meaning it inherits properties from two base classes, that is multiple inheritance. Java does not support multiple inheritance of classes: if both parent classes defined a method with the same name, it would be ambiguous at runtime which version the child class should execute. It can, however, be achieved through interfaces, since a class may implement any number of them.
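A sketch of approximating multiple inheritance with interfaces (the names are illustrative; default methods require Java 8 or later):

```java
interface Swimmer {
    default String swim() { return "swims"; }
}

interface Flyer {
    default String fly() { return "flies"; }
}

// A class cannot extend two classes, but it can implement two interfaces.
class Duck implements Swimmer, Flyer { }

class MultipleInheritanceDemo {
    public static void main(String[] args) {
        Duck d = new Duck();
        System.out.println(d.swim() + " and " + d.fly());
    }
}
```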
Packages Java Interview Questions
Question: What are the packages in Java? State some advantages of Packages in Java?
Answer: Packages are Java's way of grouping a variety of classes and/or interfaces together. The functionality of the objects decides how they are grouped. Packages act as "containers" for classes.
Enlisted below are the advantages of Packages:
- Classes of other programs can be reused.
- Two classes with the same name can exist in two different packages.
- Packages can hide classes, thus denying access to certain programs and classes meant for internal use only.
- They also separate design from coding.
Multithreading Java Interview Questions
Question: How do you make a thread in Java? Give examples.
Answer: To make a thread in Java, there are two options:
- Extend the Thread Class – The thread is available in the java.lang.Thread class. To make a thread, you need to extend a thread class and override the run method. For example,
public class Addition extends Thread {
public void run() {
}
}
A disadvantage of extending the Thread class is that the class can then no longer extend any other class.
Nonetheless, it is possible to overload the run() method in the class
- Implement Runnable Interface – Another way of making a thread in Java is by implementing the Runnable interface. For doing so, there is the need to provide the implementation for the run() method that is defined in the Runnable interface. For example,
public class Addition implements Runnable {
public void run() {
}
}
Question: Why do we use the yield() method?
Answer: The yield() method belongs to the thread class. It transfers the currently running thread to a runnable state and also allows the other threads to execute. In other words, it gives equal priority threads a chance to run. Because yield() is a static method, it does not release any lock.
Question: Can you explain the thread lifecycle in Java?
Answer: The thread lifecycle has the following states and follows the following order:
- New – In the very first state of the thread lifecycle, the thread instance is created, and the start() method is yet to be invoked. The thread is not considered alive yet.
- Runnable – After invoking the start() method, but before invoking the run() method, a thread is in the runnable state. A thread can also return to the runnable state from waiting or sleeping state.
- Running – The thread enters the running state after the run() method is invoked. This is when the thread begins execution.
- Non-Runnable – Although the thread is alive, it is not able to run. Typically, it returns to the runnable state after some time.
- Terminated – The thread enters the terminated state once the run() method completes its execution. It is not alive now.
Question: When is the Runnable interface preferred over thread class and vice-versa?
Answer: In Java, it is possible to extend only one class. Hence, the thread class is only extended when no other class needs to be extended. If it is required for a class to extend some other class than the thread class, then we need to use the Runnable interface.
Question: Please draw a comparison between notify() and notifyAll() methods.
Answer: The notify() method is used for sending a signal to wake up a single thread in the waiting pool. Contrarily, the notifyAll() method is used for sending a signal to wake up all threads in a waiting pool.
Question: How will you distinguish processes from threads?
Answer: There are several fundamental differences between a process and a thread, stated as follows:
- Definition – A process is an executing instance of a program whereas, a thread is a subset of a process.
- Changes – A change made to the parent process doesn't affect child processes. However, a change in the main thread can yield changes in the behavior of other threads of the same process.
- Communication – While processes require inter-process communication for communicating with sibling processes, threads can directly communicate with other threads belonging to the same process.
- Control – Processes are controlled by the operating system and can control only child processes. On the contrary, threads are controlled by the programmer and are capable of exercising control over threads of the same process to which they belong.
- Dependence – Processes are independent entities while threads are dependent entities
- Memory – Threads run in shared memory spaces, but processes run in separate memory spaces.
Question: What is the join() method? Give an example.
Answer: We use the join() method for joining one thread with the end of the currently running thread. It is a non-static method and has an overloaded version. Consider the example below:
public static void main(String[] args) throws InterruptedException {
Thread t = new Thread();
t.start();
t.join();
}
The main thread starts execution in the example mentioned above. As soon as the execution reaches the code t.start(), then the thread t starts its stack for execution. The JVM switches between the main thread and the thread there. Once the execution reaches the t.join(), then the thread t alone is executed and allowed to complete its task. Afterward, the main thread resumes execution.
Question: How do you make a thread stop in Java?
Answer: There are three methods in Java to stop the execution of a thread:
- Blocking – This method is used to put the thread in a blocked state. The execution resumes as soon as the condition of the blocking is met. For instance, the ServerSocket.accept() is a blocking method that listens for incoming socket connection and resumes the blocked thread only when a connection is made.
- Sleeping – This method is used for delaying the execution of the thread for some time. A thread upon which the sleep() method is used is said to enter the sleep state. It enters the runnable state as soon as it wakes up i.e., the sleep state is finished. The time for which the thread needs to enter the sleep state is mentioned inside the braces of the sleep() method. It is a static method.
- Waiting – Although it can be called on any Java object, the wait() method can only be called from a synchronized block.
Exception Handling Java Interview Questions
Question: Could you explain various types of Exceptions in Java? Also, tell us about the different ways of handling them.
Answer: Java has provision for two types of exceptions:
- Checked Exceptions – Classes that extend Throwable, except RuntimeException and Error (and their subclasses), are called checked exceptions. Such exceptions are checked by the compiler during compile time. These types of exceptions must either have appropriate try/catch blocks or be declared using the throws keyword. ClassNotFoundException is a checked exception.
- Unchecked Exceptions – Such exceptions aren't checked by the compiler during the compile time. As such, the compiler doesn't necessitate handling unchecked exceptions. Arithmetic Exception and ArrayIndexOutOfBounds Exception are unchecked exceptions.
Exceptions in Java are handled in two ways:
Declaring the throws keyword – We can declare the exception using the throws keyword in the method signature. For example:
class ExceptionCheck{
public static void main(String[] args) throws Exception{
add();
}
public static void add() throws Exception{
addition();
}
public static void addition() throws Exception{
}
}
Using try/catch – Any code segment that is expected to yield an exception is surrounded by the try block. Upon the occurrence of the exception, it is caught by the catch block that follows the try block. For example,
class ExceptionCheck{
public static void main(String[] args) {
add();
}
public static void add(){
try{
addition();
}
catch(Exception e)
{
e.printStackTrace();
}
}
public static void addition() throws Exception{
}
}
Question: Could you draw the Java Exception Hierarchy?
Answer: At the top of the hierarchy sits the Throwable class, which has two direct subclasses: Error (unrecoverable problems such as OutOfMemoryError and StackOverflowError) and Exception. Exception in turn has checked subclasses such as IOException and ClassNotFoundException, plus the unchecked RuntimeException branch (ArithmeticException, NullPointerException, ArrayIndexOutOfBoundsException, and so on).
Question: Is it possible to write multiple catch blocks under a single try block?
Answer: Yes, it is possible to write several catch blocks under a single try block. However, the catch blocks must be ordered from specific to general. The following example demonstrates the same:
class MultipleCatch {
public static void main(String[] args) {
try {
int[] arr = new int[5];
arr[10] = 1;
}
catch (ArrayIndexOutOfBoundsException e) {
System.out.println("Array index out of bounds");
}
catch (Exception e) {
System.out.println("Some other exception");
}
}
}
Question: How does the throw keyword differ from the throws keyword?
Answer: While the throws keyword allows declaring an exception, the throw keyword is used to explicitly throw an exception. Checked exceptions can't be propagated with throw only, but throws allow doing so without the need for anything else. The throws keyword is followed by a class, whereas the throw keyword is followed by an instance. The throw keyword is used within the method, but the throws keyword is used with the method signature. Furthermore, it is not possible to throw multiple exceptions, but it is possible to declare multiple exceptions.
Question: Explain various exceptions handling keywords in Java?
Answer:
There are two crucial exception handling keywords in Java, try and catch, followed by a third keyword, finally, which may or may not be used after handling exceptions.
try:
If a code segment may raise an error or an exception, it is placed within a try block. When the exception is raised, it is handled and caught by the catch block. A try block must be followed by a catch block or a finally block, or both.
catch:
When an exception is raised in the try block, it is handled in the catch block.
finally:
This block is executed regardless of whether an exception occurred. It can be placed either after the try{} or the catch{} block.
Question: Explain exception propagation?
Answer: An exception is thrown from the method at the top of the call stack. If it is not caught there, it propagates down to the calling method, and so on, until it is either caught or reaches the bottom of the stack.
Example:
public class Sum
{
public static void main(String args[])
{
addition();
}
public static void addition()
{
add();
}
public static void add()
{
int value = 10 / 0; /* exception originates here */
}
}
The stack of the above code is:
add()
addition()
main()
If an exception occurred in the add() method is not caught, then it moves to the method addition(). It is then moved to the main() method, where the flow of execution stops. It is called Exception Propagation.
File Handling Java Interview Questions
Question: Is an empty file name with only the .java extension a valid file name?
Answer: Yes, Java permits saving a source file as .java alone, as long as the class inside it is not public. It is compiled with javac .java and run with java followed by the class name.
Let's take a simple example:
class Any
{
public static void main(String args[])
{
System.out.println("Hello Java File here!");
}
}
To compile: javac .java
To run: java Any
Collections Java Interview Questions
Question: What do you mean by Collections in Java? What are the constituents of Collections in Java?
Answer: A group of objects in Java is known as a collection. The java.util package contains, along with date and time facilities, internationalization, legacy collection classes, etc., the various classes and interfaces for collections. Alternatively, collections can be considered a framework designed for storing objects and manipulating the design in which the objects are stored. You can use collections to perform the following operations on objects:
- Deletion
- Insertion
- Manipulation
- Searching
- Sorting
Following are the various constituents of the collections framework:
- Classes – ArrayList, LinkedList, and Vector
- Interfaces – Collection, List, Map, Queue, Set, SortedMap, and SortedSet
- Maps – HashMap, Hashtable, LinkedHashMap, and TreeMap
- Queues – PriorityQueue
- Sets – HashSet, LinkedHashSet, and TreeSet
Question: How will you differentiate HashMap from HashTable?
Answer: HashMap in Java is a Map-based collection class used for storing key-value pairs, denoted as HashMap<Key, Value> or HashMap<K, V>. Internally, both are built as an array of lists, where each list is called a bucket, and the keys they contain are unique. Methods are not synchronized in HashMap, while key methods are synchronized in Hashtable; consequently, HashMap is not thread-safe, whereas Hashtable is. For iterating values, HashMap uses an iterator and Hashtable uses an enumerator. Hashtable doesn't allow anything that is null, while HashMap allows one null key and several null values. In terms of performance, Hashtable is slow; comparatively, HashMap is faster.
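The null-handling difference is easy to demonstrate:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

class MapNullDemo {
    public static void main(String[] args) {
        Map<String, String> hm = new HashMap<>();
        hm.put(null, "ok");          // HashMap: one null key is allowed
        hm.put("k", null);           // null values are allowed too

        Map<String, String> ht = new Hashtable<>();
        try {
            ht.put(null, "boom");    // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable refused the null key");
        }
        System.out.println(hm.get(null));
    }
}
```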
Question: Please explain Map and their types in Java.
Answer: A Java Map is an object that maps keys to values. It can't contain duplicate keys, and each key can map to only one value. In order to determine whether two keys are the same or distinct, Map makes use of the equals() method. There are 4 types of Map in Java, described as follows:
- HashMap - It is an unordered and unsorted map and hence, is a good choice when there is no emphasis on the order. A HashMap allows one null key and multiple null values and doesn't maintain any insertion order.
- HashTable – Doesn't allow anything null and has methods that are synchronized. As it allows for thread safety, the performance is slow.
- LinkedHashMap – Slower than a HashMap but maintains insertion order and has a faster iteration.
- TreeMap – A sorted Map providing support for constructing a sort order using a constructor.
Question: What do you mean by Priority Queue in Java?
Answer: Priority queue, like a regular queue, is an abstract data type except having a priority associated with each element contained by it. The element with the high priority is served before the element with low priority in a priority queue. Elements in a priority queue are ordered either according to the comparator or naturally. The order of the elements in a priority queue represents their relative priority.
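With natural ordering, the smallest element is the head and is served first, regardless of insertion order:

```java
import java.util.PriorityQueue;

class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();  // natural ordering
        pq.add(30);
        pq.add(10);
        pq.add(20);
        // poll() always removes and returns the head (smallest here)
        System.out.println(pq.poll() + " " + pq.poll() + " " + pq.poll());
        // prints: 10 20 30
    }
}
```

Passing a Comparator to the constructor changes the priority order instead.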
Question: What is Set in Java? Also, explain its types in a Java Collections.
Answer: In Java, a Set is a collection of unique objects. It uses the equals() method to determine whether two objects are the same or not. Various types of Set in Java Collections are:
- Hash Set– An unordered and unsorted set that uses the hash code of the object for adding values. Used when the order of the collection isn't important
- Linked Hash Set– This is an ordered version of the hash set that maintains a doubly-linked list of all the elements. Used when iteration order is mandatory. Insertion order is the same as that of how elements are added to the Set.
- Tree Set – One of the two sorted collections in Java, it uses a Red-Black tree structure and ensures that the elements are kept in ascending order.
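The three Set types can be compared by feeding them the same data (note the duplicate, which every Set discards):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.TreeSet;

class SetDemo {
    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(3, 1, 2, 3);   // duplicate 3
        System.out.println(new HashSet<>(data));          // unique, order unspecified
        System.out.println(new LinkedHashSet<>(data));    // insertion order: [3, 1, 2]
        System.out.println(new TreeSet<>(data));          // sorted ascending: [1, 2, 3]
    }
}
```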
Question: What is ordered and sorted concerning collections?
Answer:
- Ordered
It means that values are stored in a collection in a specific order, but the order is independent of the value. Eg. List
- Sorted
It means the collection has an order which is dependent on the value of an element.
Eg. SortedSet
Miscellaneous Java Interview Questions
Question: Please explain the various types of garbage collectors in Java?
Answer: The Java programming language has four types of garbage collectors:
- Serial Garbage Collector– Using only a single thread for garbage collection, the serial garbage collector works by holding all the application threads. It is designed especially for single-threaded environments. Because serial garbage collector freezes all application threads while performing garbage collection, it is most suitable for command-line programs only. For using the serial garbage collector, one needs to turn on the -XX:+UseSerialGC JVM argument.
- Parallel Garbage Collector – Also known as the throughput collector, the parallel garbage collector is the default garbage collector of the JVM. It uses multiple threads for garbage collection, and like a serial garbage collector freezes all application threads during garbage collection.
- CMS Garbage Collector – Short for Concurrent Mark Sweep, the CMS garbage collector uses multiple threads for scanning the heap memory, marking instances for eviction, and then sweeping the marked instances. There are only two scenarios when the CMS garbage collector holds all the application threads:
- When marking the referenced objects in the tenured generation space
- If there is some change in the heap memory while performing the garbage collection
The CMS garbage collector ensures better application throughput than the parallel garbage collector by using more CPU. For using the CMS garbage collector, the -XX:+UseParNewGC JVM argument needs to be turned on.
- G1 Garbage Collector – Used for large heap memory areas, G1 garbage collector works by separating the heap memory into multiple regions and then executing garbage collection in them in parallel. Unlike the CMS garbage collector that compacts the memory on STW (Stop The World) situations, G1 garbage collector compacts the free heap space right after reclaiming the memory. Also, the G1 garbage collector prioritizes the region with the most garbage. Turning on the –XX:+UseG1GC JVM argument is required for using the G1 garbage collector.
Question: What do you understand by Synchronization in Java? What is its most significant disadvantage?
Answer: If several threads try to access a single block of code, then there is an increased chance of producing inaccurate results. Synchronization is used to prevent this. Using the synchronized keyword makes a thread need a key to access the synchronized code. Simply, synchronization allows only one thread to access a block of code at a time. Each Java object has a lock, and every lock has only one key. A thread can access a synchronized method only if it can get the key to the lock of the object. The following example demonstrates synchronization:
public class ExampleThread implements Runnable {
private static final Object lock = new Object();
public static void main(String[] args){
Thread t = new Thread(new ExampleThread());
t.start();
}
public void run(){
synchronized(lock){
/* only one thread can execute this block at a time */
}
}
}
Note: It is recommended to avoid implementing synchronization for all methods. This is because when only one thread can access the synchronized code, the next thread needs to wait. Consequently, it results in slower performance of the program.
Question: Can you tell the difference between execute(), executeQuery(), and executeUpdate()?
Answer:
- execute() – Used for executing an SQL query. It returns TRUE if the result is a ResultSet, like running Select queries, and FALSE if the result is not a ResultSet, such as running an Insert or an Update query.
- executeQuery() – Used for executing Select queries. It returns the ResultSet, which is not null, even if no records are matching the query. The executeQuery() method must be used when executing select queries so that it throws the java.sql.SQLException with the 'executeQuery method cannot be used for update' message when someone tries to execute an Insert or Update statement.
- executeUpdate() – Used for executing Insert/Update/Delete statements or DDL statements that return nothing. The output is an integer that equals the total affected row count for DML (Data Manipulation Language) statements, and 0 for DDL (Data Definition Language) statements.
Note: The execute() method needs to be used only in a scenario when there is no certainty about the type of statement. In all other cases, either use executeQuery() or executeUpdate() method.
Question: Provide an example of Hibernate architecture:
Answer: Hibernate's architecture, usually shown as a diagram, layers a Java application on top of Hibernate's core objects (Configuration, SessionFactory, Session, Transaction, Query, and Criteria), which in turn use JDBC, JTA, and JNDI to communicate with the underlying database.
Question: Could you demonstrate how to delete a cookie in JSP with a code example?
Answer: Following code demonstrates deleting a cookie in JSP:
Cookie mycook = new Cookie("name1", "value1");
response.addCookie(mycook);
Cookie killmycook = new Cookie("name1", "value1");
killmycook.setMaxAge(0);
killmycook.setPath("/");
response.addCookie(killmycook);
Question: Write suitable code examples to demonstrate the use of final, finally, and finalize.
Answer: Final: The final keyword is used for restricting a class, method, and variable. A final class can't be inherited, a final method is disabled from overriding, and a final variable becomes a constant i.e., its value can't be changed.
class FinalVarExample {
public static void main(String args[])
{
final int a=10;
a=50; /* Will result in an error as the value can't be changed now */
}
}
Finally: Any code inside the finally block will be executed, irrespective of whether an exception is handled or not.
class FinallyExample {
public static void main(String args[]){
try {
int x=100;
}
catch(Exception e) {
System.out.println(e);
}
finally {
System.out.println("finally block is executing");
}
}
}
Finalize: The finalize method performs the clean up just before the object is garbage collected.
class FinalizeExample {
public void finalize() {
System.out.println("Finalize is called");
}
public static void main(String args[])
{
FinalizeExample f1=new FinalizeExample();
FinalizeExample f2=new FinalizeExample();
f1 = null;
f2 = null;
System.gc();
}
}
Question: What purpose do the Volatile variable serve in Java?
Answer: The value stored in a volatile variable is not read from the thread's cache memory but from the main memory. Volatile variables are primarily used during synchronization.
Question: Please compare Serialization with Deserialization in Java.
Answer: Serialization is the process by which Java objects are converted into the byte stream. Deserialization is the exact opposite process of serialization where Java objects are retrieved from the byte stream. A Java object is serialized by writing it to an ObjectOutputStream and deserialized by reading it from an ObjectInputStream.
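A minimal round-trip sketch (the Message class is illustrative; it writes an object to a byte array and reads it back):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Message implements Serializable {
    private static final long serialVersionUID = 1L;
    final String text;
    Message(String text) { this.text = text; }
}

class SerialDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Message("hi"));           // serialization
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Message copy = (Message) in.readObject();     // deserialization
            System.out.println(copy.text);
        }
    }
}
```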
Question: What do you understand by OutOfMemoryError in Java?
Answer: Typically, the OutOfMemoryError exception is thrown when the JVM is not able to allocate an object due to running out of memory. In such a situation, no memory could be reclaimed by the garbage collector. There can be several reasons that result in the OutOfMemoryError exception, out of which most notable ones are:
- Holding objects for too long
- Trying to process too much data at the same time
- Using a third-party library that caches strings
- Using an application server that doesn't perform a memory cleanup post the deployment
- When a native allocation can't be satisfied
That completes the list of top Java interview questions. What do you think about the list we compiled? Let us know by dropping your comments in the dedicated window below. Also, check out these best Java tutorials to further refine your Java skill set.
Question: Explain public static void main(String args[ ]) in Java
Answer: The execution of a Java program starts with public static void main(String args[ ]), also called the main() method.
- public: It is an access modifier defining the accessibility of the class or method. Any Class can access the main() method defined public in the program.
- static: The keyword indicates the variable, or the method is a class method. The method main() is made static so that it can be accessed without creating the instance of the class. When the method main() is not made static, the compiler throws an error because the main() is called by the JVM before any objects are made, and only static methods can be directly invoked via the class.
- void: It is the return type of the method. Void defines the method does not return any type of value.
- main: JVM searches this method when starting the execution of any program, with the particular signature only.
- String args[]: The parameter passed to the main method.
Question: What are wrapper classes in Java?
Answer: Wrapper classes are responsible for converting the Java primitives into reference types (objects). A class is dedicated to every primitive data type. They are known as wrapper classes because they wrap the primitive data type into an object of that class. They reside in the java.lang package. The primitive-to-wrapper mapping is: byte to Byte, short to Short, int to Integer, long to Long, float to Float, double to Double, char to Character, and boolean to Boolean.
Question: Explain the concept of boxing, unboxing, autoboxing, and auto unboxing.
Answer:
- Boxing: The concept of putting a primitive value inside an object is called boxing.
- Unboxing: Getting the primitive value from the object.
- Autoboxing: Assigning a value directly to an integer object.
- Auto unboxing: Getting the primitive value directly into the integer object.
public class BoxUnbox
{
public static void main(String args[])
{
int i = 5;
Integer ii = new Integer(i); /*Boxing*/
Integer jj = i; /*Autoboxing*/
int j = jj.intValue(); /*Unboxing*/
int k = jj; /*AutoUnboxing*/
}
}
Question: Define the Singleton class in Java. How can a class be made Singleton?
Answer: A Singleton class allows only one instance of the class to be created.
A class can be made singleton with the following steps:
- Create a static instance of the class inside the class itself.
- Prevent users from creating instances with the default constructor by defining a private constructor.
- Create a static method that returns the instance of the class.
public class Singleton
{
public static void main(String args[])
{
Single obj1 = Single.getInstance(); /* both would point to one and same instance of the class */
Single obj2 = Single.getInstance();
}
}
class Single
{
static Single obj = new Single(); /* step a*/
private Single() /* step b*/
{
}
public static Single getInstance()
{
return obj; /* step c*/
}
}
Question: What if the public static void is replaced by static public void, will the program still run?
Answer: Yes, the program would compile and run without any errors, as the order of the specifiers doesn't matter in Java.
Question: Differentiate between == and equals() ?
Answer: The == operator compares references for objects (and values for primitives): it is true only when both sides point to the same instance. The equals() method, defined in Object and commonly overridden (for example, by String), compares logical content. So "abc" == new String("abc") is false, while "abc".equals(new String("abc")) is true.
Question: Why don't we use pointers in Java?
Answer: Pointers are considered unsafe and increase the complexity of a program, which contradicts Java's goal of simplicity. Also, the JVM is responsible for implicit memory allocation; thus, to prevent direct access to memory by the user, pointers are discouraged in Java.
Questions: Differentiate between this() and super()
Answer: this() invokes another constructor of the same class, whereas super() invokes a constructor of the immediate parent class. Both must be the first statement in a constructor, so the two can never be used together in the same constructor.
Java Coding Interview Questions
Apart from having good knowledge about concepts of Java programming, you are also tested for your skills in coding in Java programming language. Given below are Java Coding Interview Questions that are relevant for freshers and are quite popular amongst Java programming interviews.
Question: Take a look at the two code snippets below:
i.
class Adder {
static int add(int a, int b)
{
return a+b;
}
static double add(double a, double b)
{
return a+b;
}
public static void main(String args[])
{
System.out.println(Adder.add(11,11));
System.out.println(Adder.add(12.3,12.6));
}}
ii.
class Car {
void run(){
System.out.println("car is running");
}
}
class Audi extends Car{
void run()
{
System.out.println("Audi is running safely with 100km");
}
public static void main(String args[])
{
Car b=new Audi();
b.run();
}
}
What is the important difference between the two?
Answer: Code snippet i. is an example of method overloading while the code snippet ii. demonstrates method overriding.
Question: Program for string reversal without using an inbuilt function
Answer:
public class Reversal
{
public static void main(String args[])
{
String input = "Java Interview";
System.out.println("Given String -> " + input);
char charArray[] = input.toCharArray();
System.out.println("Reversed String -> ");
for(int i = charArray.length-1;i>=0; i--)
{
System.out.print(charArray[i]);
}
System.out.println();
}
}
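An alternative (my own variant, not from the original answer) reverses the character array in place with two indices, which likewise avoids built-in reverse helpers:

```java
public class InPlaceReversal {
    static String reverse(String input) {
        char[] chars = input.toCharArray();
        // Swap characters from both ends, moving toward the middle
        for (int left = 0, right = chars.length - 1; left < right; left++, right--) {
            char tmp = chars[left];
            chars[left] = chars[right];
            chars[right] = tmp;
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        System.out.println(reverse("Java Interview")); // weivretnI avaJ
    }
}
```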
Question: Program to delete duplicates from an array
Answer:
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
class RemoveDuplicates
{
public static void main(String args[])
{
/*create ArrayList with duplicate elements*/
ArrayList<Integer> duplicate = new ArrayList<Integer>();
duplicate.add(5);
duplicate.add(7);
duplicate.add(1);
duplicate.add(4);
duplicate.add(1);
duplicate.add(7);
System.out.println("Given array: "+ duplicate);
Set<Integer> withoutDuplicates = new LinkedHashSet<Integer>(duplicate);
duplicate.clear();
duplicate.addAll(withoutDuplicates);
System.out.println("Array without duplicates: "+ duplicate);
}
}
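Since Java 8, the same result can also be obtained with the Stream API; this alternative is mine, not part of the original answer:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class DistinctDemo {
    public static void main(String[] args) {
        List<Integer> duplicate = Arrays.asList(5, 7, 1, 4, 1, 7);
        // distinct() keeps the first occurrence of each element, preserving order
        List<Integer> withoutDuplicates = duplicate.stream()
                .distinct()
                .collect(Collectors.toList());
        System.out.println("Array without duplicates: " + withoutDuplicates); // [5, 7, 1, 4]
    }
}
```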
Question: Program for binary search
Answer:
import java.util.Scanner;
import java.util.Arrays;
public class Binary {
public static void main(String[] args) {
System.out.println("Enter total number of elements : ");
Scanner s = new Scanner (System.in);
int length = s.nextInt();
int[] input = new int[length];
System.out.printf("Enter %d integers", length);
for (int i = 0; i < length; i++) {
input[i] = s.nextInt();
}
/* binary search requires the input array to be sorted so we must sort the array first*/
Arrays.sort(input);
System.out.print("the sorted array is: ");
for(int i= 0; i<= length-1;i++)
{
System.out.print(input[i] + " ");
}
System.out.println("Please enter number to be searched in sorted array");
int key = s.nextInt();
int result = Arrays.binarySearch(input, key);
if (result >= 0) {
System.out.println("Number found at index " + result);
} else {
System.out.println("Number not found in the array");
}
}
}
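Interviewers usually expect the search loop itself rather than library helpers like Arrays.binarySearch; here is a self-contained version of the classic loop (class and method names are mine):

```java
public class BinarySearchLoop {
    // Returns the index of key in the sorted array, or -1 if absent
    static int search(int[] sorted, int key) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids int overflow for large indices
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key) low = mid + 1;
            else high = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(search(data, 7));  // 3
        System.out.println(search(data, 4));  // -1
    }
}
```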
Question: Program to check if a number is prime.
Answer:
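A typical solution trial-divides by candidates up to the square root of n; this sketch is mine rather than the article's original listing:

```java
public class PrimeCheck {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        // Only need to test divisors up to sqrt(n)
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(13)); // true
        System.out.println(isPrime(15)); // false
    }
}
```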
Question: Program to check if the given string is a palindrome.
Answer:
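One common approach (my sketch, as the original listing is not reproduced here) compares characters from both ends toward the middle:

```java
public class Palindrome {
    static boolean isPalindrome(String s) {
        // Compare characters from both ends toward the middle
        for (int i = 0, j = s.length() - 1; i < j; i++, j--) {
            if (s.charAt(i) != s.charAt(j)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("madam")); // true
        System.out.println(isPalindrome("java"));  // false
    }
}
```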
Question: Pattern printing
*
* *
* * *
* * * *
* * * * *
Answer:
public class Pattern
{
    public static void main(String args[])
    {
        for (int i = 4; i >= 0; i--)
        {
            for (int j = i; j < 5; j++)
            {
                System.out.print(" * ");
            }
            System.out.println();
        }
    }
}
Question: Program to swap two numbers
Answer:
import java.util.Scanner;
public class Swap
{
public static void main(String args[])
{
Scanner s = new Scanner(System.in);
System.out.println("Enter a number: ");
int a = s.nextInt();
System.out.println("Enter second number: ");
int b = s.nextInt();
System.out.println("Value of a and b before swapping: " + "a = " +a + " b = " + b);
swap(a,b);
}
public static void swap(int a , int b)
{
int swap_variable;
swap_variable = a;
a = b;
b = swap_variable;
System.out.println("Value of a and b after swapping: " + "a = " +a + " b = " + b);
}
}
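A caveat worth mentioning in an interview: Java passes primitives by value, so the swap above is visible only inside swap(); the caller's variables are unchanged. A minimal demonstration (my own, not from the article):

```java
public class SwapCaveat {
    static void swap(int a, int b) {
        int tmp = a;
        a = b;
        b = tmp; // only the local copies are swapped
    }

    public static void main(String[] args) {
        int a = 1, b = 2;
        swap(a, b);
        // Prints "a = 1 b = 2": the caller's values are untouched
        System.out.println("a = " + a + " b = " + b);
    }
}
```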
Question: Program to check if the given number is an Armstrong number.
Answer:
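An Armstrong number equals the sum of its digits, each raised to the power of the number of digits (e.g. 153 = 1^3 + 5^3 + 3^3). A typical solution (my sketch, as the original listing is not reproduced here):

```java
public class Armstrong {
    static boolean isArmstrong(int n) {
        int digits = String.valueOf(n).length();
        int sum = 0;
        // Peel off digits and add each one raised to the digit count
        for (int rest = n; rest > 0; rest /= 10) {
            int d = rest % 10;
            int power = 1;
            for (int i = 0; i < digits; i++) power *= d;
            sum += power;
        }
        return sum == n;
    }

    public static void main(String[] args) {
        System.out.println(isArmstrong(153)); // true
        System.out.println(isArmstrong(154)); // false
    }
}
```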
We have also provided a PDF for your preparation so that you can download and learn and prepare on the go. Download Java Interview Questions PDF
Summary
The aforementioned Java interview questions and Java programming interview questions are a carefully curated collection to prepare you for the interview, with every concept explained in detail. Reference links are also provided for your further reading.
Java is a broad field of study. Buy this course for further reading and preparing for a Java-based interview: Java interview Guides: 200+ Interview Question and Answer
Follow this book that will help you crack core Java interviews: Elements of Programming Interviews in Java: The insider guide second edition
We also suggest you share your interview experiences and the Java programming interview questions that you come across in interviews at different tech companies. Share them in the comments below so that we can all help each other and build an interactive community to learn and crack the Java interview for our dream job.
This overview really helped me out! It gave me the background necessary to explain all my OOP and Java questions and helped me land my internship! Thank you so much for your help Simran!!
what is dispatcherservlet?
what is difference between spring and springboot ,microservice?
Difference between notify() method and notifyAll() method in Java.
Explain about Map and their types.
How does cookies work in Servlets?
What do you mean by aggregation?
Can you override a private or static method in Java?
What is the difference between HashSet and TreeSet?
Difference between equals() and ==?
|
https://hackr.io/blog/java-interview-questions
|
CC-MAIN-2020-50
|
refinedweb
| 7,662
| 55.74
|
Properties allow clients to access class state as if they were accessing member fields directly, while actually implementing that access through a class method.
This is ideal. The client wants direct access to the state of the object and does not want to work with methods. The class designer, however, wants to hide the internal state of his class in class members, and provide indirect access through a method.
By decoupling the class state from the method that accesses that state, the designer is free to change the internal state of the object as needed. When the Time class is first created, the Hour value might be stored as a member variable. When the class is redesigned, the Hour value might be computed or retrieved from a database. If the client had direct access to the original Hour member variable, the change to computing the value would break the client. By decoupling and forcing the client to go through a method (or property), the Time class can change how it manages its internal state without breaking client code.
Properties meet both goals: they provide a simple interface to the client, appearing to be a member variable. They are implemented as methods, however, providing the data-hiding required by good object-oriented design, as illustrated in Example 4-11.

public class Time
{
    // ... member fields, constructor, and DisplayCurrentTime( ) elided ...

    // create a property
    public int Hour
    {
        get
        {
            return hour;
        }
        set
        {
            hour = value;
        }
    }
}

public class Tester
{
    static void Main( )
    {
        System.DateTime currentTime = System.DateTime.Now;
        Time t = new Time(currentTime);
        t.DisplayCurrentTime( );
        int theHour = t.Hour;
        System.Console.WriteLine("\nRetrieved the hour: {0}\n", theHour);
        theHour++;
        t.Hour = theHour;
        System.Console.WriteLine("Updated the hour: {0}\n", theHour);
    }
}
To declare a property, write the property type and name followed by a pair of braces. Within the braces you may declare get and set accessors. Neither has explicit parameters, though the set accessor receives an implicit parameter named value, as shown next.
In Example 4-11, Hour is a property. Its declaration creates two accessors: get and set.
public int Hour { get { return hour; } set { hour = value; } }
Each accessor has an accessor-body that does the work of retrieving and setting the property value. The property value might be stored in a database (in which case the accessor-body would do whatever work is needed to interact with the database), or it might just be stored in a private member variable:
private int hour;
The body of the get accessor is similar to a class method that returns an object of the type of the property. In the example, the accessor for Hour is similar to a method that returns an int. It returns the value of the private member variable in which the value of the property has been stored:
get { return hour; }
In this example, a local int member variable is returned, but you could just as easily retrieve an integer value from a database, or compute it on the fly.
Whenever you reference the property (other than to assign to it), the get accessor is invoked to read the value of the property:
Time t = new Time(currentTime); int theHour = t.Hour;
In this example, the value of the Time object's Hour property is retrieved, invoking the get accessor to extract the property, which is then assigned to a local variable.
The set accessor sets the value of a property and is similar to a method that returns void. When you define a set accessor, you must use the value keyword to represent the argument whose value is passed to and stored by the property.
set { hour = value; }
Here, again, a private member variable is used to store the value of the property, but the set accessor could write to a database or update other member variables as needed.
When you assign a value to the property, the set accessor is automatically invoked, and the implicit parameter value is set to the value you assign:
theHour++; t.Hour = theHour;
The advantage of this approach is that the client can interact with the properties directly, without sacrificing the data-hiding and encapsulation sacrosanct in good object-oriented design.
|
https://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+4.+Classes+and+Objects/4.7+Encapsulating+Data+with+Properties/
|
CC-MAIN-2022-21
|
refinedweb
| 688
| 59.64
|
This
Problem:
You want a servlet to present some information to the user.
Solution:
Override the
HttpServlet method
service( ), or one of its HTTP-method-specific variants such as doGet( ).
Problem
You want to process the data from an HTML form in a servlet.
Solution
Use the
request object's
getParameter( ) method.
Each uniquely named INPUT element in the FORM on the HTML page
makes an entry in the
request object.
When the browser visits a site that has sent it a cookie or
cookies, it returns all of them as part of the HTTP headers. You retrieve them
all (as an array) using the
getCookies( ) method,
and iterate through them looking for the one you want.
<BODY BGCOLOR="pink"> <H1>Please choose a color</H1> <FORM ACTION="/servlet/ColorCustServlet" METHOD=GET> <SELECT NAME="color_name"> <OPTION VALUE="green">Green</OPTION> <OPTION VALUE="white" SELECTED>White</OPTION> <OPTION VALUE="gray">Grey</OPTION> </SELECT>
Problem:
You want to keep track of one user across several servlet invocations within the same browser session.
Solution: Cookies) the multiple-choice
questions in that topic. The first question looks like Figure
18-4.
After you've answered a few questions, it may look like Figure 18-5.
At the end of the quiz, you'll see the total number of questions that you answered correctly.
The
Exam object (an object containing
all the questions and answers, along with the number of correct answers) is
loaded using an
XamDataAccessor (the code for these
two classes is not shown) and stored in a
Progress
object.
Progress, an inner class inside the
servlet, is a tiny data structure used to monitor your progress through one
quiz. When you change topics, the
Progress object( );
}
}
Problem:
You want to make a printer-friendly document using a format like Adobe PDF.
Solution:
Use
response.setContentType("application/pdf") and a
third-party Java API that can generate test MS-Windows system's copy of Netscape has Acrobat installed, and will run Acrobat as a Netscape Plug-in to display it; see Figure 18-8.( ));
}
}
}
Problem:
You have a web page that could use a jolt of Java.
Solution:
program.>
Problem:
You want to write a "page-composite" JSP that includes other pages or passes control to another page.
Solution:
Use
<jsp:include> or
<jsp:forward>.
Problem:
You want to reduce the amount of Java coding in your JSP using a JavaBean component.
Solution:
Use
<jsp:useBean> with the name of your bean.
JavaBeans is Java's component technology, analogous to COM components on MS-Windows. Recipes and contain a formula for packaging certain Java classes as JavaBeans. While JavaBeans were originally introduced as client-side, GUI-builder-friendly components, there is nothing in the JavaBeans specification that limits their use to the client-side or GUI. In fact, it's fairly common to use JavaBean components with a JSP. It's also easy and useful, so let's see how to do it.
At the bare minimum, a JavaBean is an object that has a public no-argument constructor and follows the set/get paradigm. This means that there is regularity in the get and set methods. Consider a class, each instance of which represents one user account on a login-based web site. For the name, for example, the methods:
public void setName(String name); public String getName( );
allow other classes full control over the "name" field in the
class but with some degree of encapsulation; that is, the program doesn't have
to know the actual name of the field (which might be
name, or
myName, or anything else suitable). Other programs can even get a list of your get/set methods using introspection. Example
18-14 is the full class file; as you can see, it is mostly concerned with
these set and get methods.
Example 18-14: User.java, a class usable as a bean
/** Represents one logged in user */
public class User {
    protected String name;
    protected String passwd;
    protected String fullName;
    protected String email;
    protected String city;
    protected String prov;
    protected String country;
    protected boolean editPrivs = false;
    protected boolean adminPrivs = false;

    /** Construct a user with no data -- must be a no-argument
     * constructor for use in jsp:useBean. */
    public User( ) {
    }

    /** Construct a user with just the name */
    public User(String n) {
        name = n;
    }

    /** Return the nickname. */
    public String getName( ) {
        return name;
    }

    public void setName(String nick) {
        name = nick;
    }

    // The password is not public - no getPassword.

    /** Validate a given password against the user's. */
    public boolean checkPassword(String userInput) {
        return passwd.equals(userInput);
    }

    /** Set password */
    public void setPassword(String passwd) {
        this.passwd = passwd;
    }

    /** Get email */
    public String getEmail( ) {
        return email;
    }

    /** Set email */
    public void setEmail(String email) {
        this.email = email;
    }

    // MANY SIMILAR STRING-BASED SET/GET METHODS OMITTED

    /** Get adminPrivs */
    public boolean isAdminPrivileged( ) {
        return adminPrivs;
    }

    /** Set adminPrivs */
    public void setAdminPrivileged(boolean adminPrivs) {
        this.adminPrivs = adminPrivs;
    }

    /** Return a String representation. */
    public String toString( ) {
        return new StringBuffer("User[").append(name)
            .append(',').append(fullName).append(']').toString( );
    }

    /** Check if all required fields have been set */
    public boolean isComplete( ) {
        if (name == null || name.length( ) == 0
            || email == null || email.length( ) == 0
            || fullName == null || fullName.length( ) == 0)
            return false;
        return true;
    }
}
The only methods that do anything other than set/get are the
normal
toString( ) and
isComplete( ) (the latter returns true if all required fields have been set in the bean). If you guessed that this has something to
do with validating required fields in an HTML form, give yourself a gold
star.
We can use this bean in a JSP-based web page just by saying:
<jsp:useBean id="myUserBean" class="User" />
This creates an instance of the class called
myUserBean. However, at present it is blank; no fields
have been set. To fill in the fields, we can either refer to the bean directly
within scriptlets, or, more conveniently, we can use
<jsp:setProperty> to pass a value from the HTML
form directly into the bean! This can save us a great deal of coding.
Further, if all the names match up, such as an HTML parameter
"name" in the form and a
setName(String) method in
the bean, the entire contents of the HTML form can be passed into a bean using
property="*"!
<jsp:setProperty name="myUserBean" property="*" /> </jsp:useBean>
Now that the bean has been populated, we can check that it is
complete by calling its
isComplete( ) method. If
it's complete, we print a response, but if not, we direct the user to go back
and fill out all the required fields:
<% // Now see if they already filled in the form or not...
if (!myUserBean.isComplete( )) {
%>
<TITLE>Welcome New User - Please fill in this form.</TITLE>
<BODY BGCOLOR=White>
<H1>Welcome New User - Please fill in this form.</H1>
<FORM ACTION="name_of_this_page.jsp" METHOD=post>
// Here we would output the form again, for them to try again.
</FORM>
<%
} else {
String nick = myUserBean.getName( );
String fullname = myUserBean.getFullName( );
// etc...
// Give the user a welcome
out.println("Welcome " + fullname);
You'll see the full version of this JSP in Program: JabaDot Web News Portal.
You can extract even more Java out of the JSP, making it look almost like pure HTML, by using Java custom tags. Custom tags (also called custom actions) are a new mechanism for reducing the amount of Java code that must be maintained in a JSP. They have the further advantage of looking syntactically just like elements brought in from an XML namespace, making them more palatable both to HTML editor software and to HTML editor personware. Their disadvantage is that to write them requires a greater investment of time than, say, servlets or JSP. However, you don't have to write them to use them; there are several good libraries of custom tags available, one from Tomcat and another from JRun. Sun is also working on a standard for a generic tag library. JSP tags are compiled classes, like applets or servlets, so any tag library from any vendor can be used with any conforming JSP engine. There are a couple of JSP custom tags in the source directory for the JabaDot program in Program: JabaDot Web News Portal.
Problem:
You can't remember all this post-HTML syntax.
Solution:
Use the Table.
Table 18-1 summarizes the syntax of JavaServer Pages. As the title implies, it contains only the basics; a more complete syntax can be downloaded from..
Here is perhaps the most ambitious program developed in this book. It's the beginnings of a complete "news portal" web site, similar to,, or. However (and as you should expect!), the entire site is written in Java. Or perhaps should I say "written in or by Java," since the JSP mechanism -- which is written entirely in Java -- turns the JSP pages into Java servlets that get run on this site. The web site is shown in Figure 18-11.
Like most portal sites, JabaDot allows some services (such as
the current news items and of course the ubiquitous banner ads) without
logging in, but requires a login for others. In this figure I am logged in as
myself, so I have a list of all available services. The page that supports
this view is
index.jsp (Example 18-15), which contains a hodgepodge of HTML and Java code.
Example 18-15: index.jsp
<%@page errorPage="oops.jsp"%>
<HTML>
<TITLE>JabaDot - Java News For Ever(yone)</TITLE>
<P ALIGN=CENTER><jsp:include</P>
<BODY BGCOLOR="#f0f0f0">
<% HttpSession sess = request.getSession(true);
User user = (User)sess.getValue("jabadot.login");
%>
<TABLE>
<TD WIDTH=75%
<!-- Most of page, at left -->
<IMG SRC="logo.gif" ALIGN="LEFT"></IMG>
<BR CLEAR="ALL">
<jspator servlet is included;
the program just randomly selects a banner advertisement and outputs it as an
HTML anchor around an IMG tag. Then I get the
HttpSession object and, from that, the current
User object, which is null if there is not a currently
logged-in user. The
User class object (This ancient advice comes from the early days of Unix; you'd be surprised how many sites still don't get it).
If you log in, I put the
User object.
Return to ONJava.com.
|
http://oreillynet.com/lpt/a/1000
|
CC-MAIN-2013-20
|
refinedweb
| 1,661
| 63.9
|
Created on 2016-04-01 00:28 by terry.reedy, last changed 2016-10-24 23:12 by ned.deily. This issue is now closed.
From
import tkinter as tk
from tkinter.ttk import Frame, Notebook
root = tk.Tk()
nb = Notebook(root, width=320, height=240)
nb.pack(fill='both', expand=1)
page0 = Frame(nb)
page1 = Frame(nb)
page2 = Frame(nb)
page3 = Frame(nb)
page4 = Frame(nb)
nb.add(page0, text="0")
nb.add(page1, text="1")
nb.add(page2, text="2")
nb.add(page3, text="3")
nb.add(page4, text="4")
Only tabs 0 and 1 show. Add a space before or after the number and 2 & 3 show. Add 6 spaces after 4 and '4 ' shows. Appears to work OK with 3 chars, with first and third non-blank.
I presume this is a ttk bug. The documentation just says 'a string'. I plan to close this as 3rd party in a few days, but I wanted to document the de facto spec here on the tracker.
I see the names of the first 4 tabs: 0-3. Tab header for the last tab is empty. If use longer name (e.g. "45678") I see it without two last characters ("456").
Yes, it looks as Ttk bug. Have you reported this to the mainstream?
No, I can't remember where it is and do not have an account on their tracker, if one is needed.
I am not surprised, somehow, that details of bug should depend on system. I should have said Win 10, 3.5.1/8.6.4.
Hello, I'm the one who posted on stackoverflow.
I'm on Windows 7 Entreprise 64 bits (6.1, version 7601).
Here is my first line when I run python:
"Python 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit (AMD64)] on win32".
I think it's linked to the font and its size.
I tested some combinations:
1 alphanumeric character, bugs (can only see 3 tabs on the 5)
2 AN chars, bugs (last tabs almost hidden)
3 AN chars, works
1 space and 2 AN chars, bugs
2 spaces and 2 AN chars, works
2 spaces and 1 AN char, bugs (the last tab is almost hidden)
3 spaces and 1 AN char, bugs
4 spaces and 1 AN char, works
I tried the same with a different font (Courier, 12):
1 alphanumeric character, bugs (the last tab is hidden)
2 AN chars, works
3 AN chars, works
1 space and 2 AN chars, works
2 spaces and 2 AN chars, works
2 spaces and 1 AN char, works
3 spaces and 1 AN char, works
4 spaces and 1 AN char, works
The code I added for the font:
cfont = tkFont.Font(family="Courier", size=12)
s = ttk.Style()
s.configure('.', font=cfont)
Tk bug tracker is.
I don't think we can do something from our side.
|
https://bugs.python.org/issue26682
|
CC-MAIN-2021-17
|
refinedweb
| 486
| 85.59
|
- Configuring your identity provider
- Configuring GitLab
- Providers
- User access and management
- Group Sync
- Passwords for users created via SAML SSO for Groups
- Troubleshooting
- SAML debugging tools
- Verifying configuration
- Verifying NameID
- Users receive a 404
-
- Searching Rails log
SAML SSO for GitLab.com groups.
Configuring your identity provider
- Navigate to the GitLab group and select Settings > SAML SSO.
- Configure your SAML identity provider using the Assertion consumer service URL, Identifier, and GitLab single sign-on URL. Alternatively GitLab provides metadata XML configuration. See specific identity provider documentation for more details.
- Configure the SAML response to include a NameID that uniquely identifies each user.
- Configure required assertions at minimum containing the user’s email address.
- While the default is enabled for most SAML providers, please ensure the app is set to have service provider initiated calls in order to link existing GitLab accounts.
- Once the identity provider is set up, move on to configuring GitLab.
NameID
GitLab.com uses the SAML NameID to identify users. The NameID element:
- Is a required field in the SAML response.
- Must be unique to each user.
- Must be a persistent value that will never change.
NameIDbreaks the configuration and potentially locks users out of the GitLab group.
NameID Format
We recommend setting the NameID format to
Persistent unless using a field (such as email) that requires a different format.
Most NameID formats can be used, except
Transient due to the temporary nature of this format.
Assertions
For users to be created with the right information with the improved user access and management, the user details need to be passed to GitLab as SAML assertions.
At a minimum, the user's email address must be specified as an assertion named email. A username assertion is not supported for GitLab.com SaaS integrations.
Metadata configuration
GitLab provides metadata XML that can be used to configure your identity provider.
- Navigate to the group and select Settings > SAML SSO.
- Copy the provided GitLab metadata URL.
- Follow your identity provider’s documentation and paste the metadata URL when it’s requested.
Configuring GitLab
After you set up your identity provider to work with GitLab, you must configure GitLab to use it for authentication:
- Navigate to the group's Settings > SAML SSO.
- Select the Enable SAML authentication for this group checkbox.
- Select the Save changes button.
With this option enabled, users (except owners) must go through your group’s GitLab single sign-on URL if they wish to access group resources through the UI. Users can’t be manually added as members..
We intend to add a similar SSO requirement for API activity.
SSO has the following effects when enabled:
- For groups, users can’t share a project in the group outside the top-level group, even if the project is forked.
- For a Git activity, users must be signed-in through SSO before they can push to or pull from a GitLab repository.
-.
When SCIM updates, the user’s access is immediately revoked.
Providers
The SAML standard means that a wide range of identity providers will work with GitLab. Your identity provider may have relevant documentation.
We recommend:
- Unique User Identifier (Name identifier) set to
user.objectID.
- nameid-format set to persistent.
If using Group Sync, customize the name of the group claim to match the required attribute.
See the troubleshooting page for an example configuration.
User access and management
To link SAML to your existing GitLab.com account:
-.
- From the list of apps, select the “GitLab.com” app. (The name is set by the administrator of the identity provider.)
- You are then signed in to GitLab.com and redirected to the group.
Configure user settings from SAML response
Introduced in GitLab 13.7.
GitLab allows setting certain user attributes based on values from the SAML response. This affects users created on first sign-in via Group SAML. Existing users’ attributes are not affected regardless of the values sent in the SAML response.
Supported user attributes
can_create_group- ‘true’ or ‘false’ to indicate whether the user can create new groups. Default is
true.
projects_limit- The total number of personal projects a user can create. A value of
0means the user cannot create new projects in their personal namespace. Default is
10000.
Example>
Role
Starting from GitLab 13.3, group owners can set a ‘Default membership role’ other than ‘Guest’..
Blocking access
Please refer to Blocking access via SCIM.
Unl:
- In the top-right corner, select your avatar.
- Select Edit profile.
- On the left sidebar, select Account.
- In the Social sign-in section, select Disconnect next to the connected account.>
Groupsor
groupsin the SAML response can be either the group name or the group ID depending what the IdP sends to GitLab.:
- Enter the value of
saml:AttributeValuein the
SAML Group Namefield.
-.
Automatic member removal
After a group sync, users who are not members of a mapped SAML group are removed from the GitLab group..
Passwords for users created via SAML SSO for Groups
The Generated passwords for users created through integrated authentication guide provides an overview of how GitLab generates and sets passwords for users created via SAML SSO for Groups.
Troubleshooting
This section contains possible solutions for problems you might encounter.
SAML.
Verifying configuration
For convenience, we’ve included some example resources used by our Support Team. While they may help you verify the SAML app configuration, they are not guaranteed to reflect the current state of third-party products.
Verifying will need to provide the URL to users. will cause group membership and to-dos to be lost.
If the NameID is identical in both SAML apps, then no change is required.
Otherwise, to change the SAML app used for sign in, users need to unlink the current SAML identity and then link their identity to the new SAML app.
|
https://docs.gitlab.com/14.3/ee/user/group/saml_sso/
|
CC-MAIN-2021-49
|
refinedweb
| 961
| 57.98
|
Sending Udp Packets
From IronPython Cookbook
A simple class for sending UDP packets over a network.
This uses the System.Net.Sockets.UdpClient class.
from System.Net import IPAddress
from System.Net.Sockets import UdpClient
from System.Text import Encoding

class UdpSender(object):
    def __init__(self, port, ipAddress):
        self.client = UdpClient(0)
        # Only set this if you want to be able to listen
        # on the same machine
        self.client.MulticastLoopback = True
        # No *need* to parse - you can pass in a string
        addr = IPAddress.Parse(ipAddress)
        # Connecting means that you don't have to specify
        # the IP address when we call send.
        # For Udp, connecting isn't a requirement though
        self.client.Connect(addr, port)

    def send(self, message):
        bytearr = Encoding.UTF8.GetBytes(message)
        self.client.Send(bytearr, bytearr.Length)
To use it, you need a valid port and address. If you want to multicast, you will need a Multicast Address.
This usage example of sending does use multicast group address:
port = 5555
group = "230.29.35.5"
udpSender = UdpSender(port, group)
udpSender.send("Some text")
In fact, because Udp is inherently a connectionless protocol, a minimal send example can be as small as:
from System.Net.Sockets import UdpClient
from System.Text import Encoding

datagram = Encoding.UTF8.GetBytes("Some text")
client = UdpClient()
# hostname and port must identify the receiver, e.g. "localhost" and 5555
client.Send(datagram, datagram.Length, hostname, port)
Note that Udp datagrams have a maximum size. See the Udp protocol for more details.
|
http://www.ironpython.info/index.php/Sending_Udp_Packets
|
crawl-002
|
refinedweb
| 234
| 53.27
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Write method
I want to update a field depending on the current user's group, but when I execute the method it updates all the records, even those belonging to other groups.
Here's the code:
def wtc_approval(self, cr, uid, ids,vals, context=None):
user = self.pool.get("res.users").browse(cr, uid, uid)['groups_id']
data = self.pool.get("wtc.approval.line").browse(cr,uid,user)
line = self.browse(cr, uid, ids, context=context)
for user in line.app_line:
line.app_line.write({
'sts':'2',
'pelaksana':uid,
'tanggal':datetime.today()
})
return True
I don't know how to get the current user's group; the code always catches all groups.
Does anybody know how to resolve it?
Thanks in advance
OpenERP has a built-in mechanism to restrict access per model / record:
- Access Control List
- Record Rule
Generally you can configure access control per model (e.g. write access for a certain group), and OpenERP will do the checking for you.
The other option is a condition on a workflow transition: you can set a group on the transition so that it can be triggered only by that group.
So in many cases you don't have to write code to check the user's group yourself.
|
https://www.odoo.com/forum/help-1/question/write-method-71125
|
CC-MAIN-2017-09
|
refinedweb
| 222
| 66.54
|
Configuration System
This page can be used as a reference while discussing the configuration system. This page should be updated as the discussion progresses.
Introduction
One of the Google Summer of Code project ideas for 2012 was a "configuration system". The configuration system was not among the selected projects, but the system was seen as a priority, so the plan now is to design and implement the important bits (the client API and the server-side infrastructure) ourselves within May 2012 or so (the reason for the tight schedule is that we'd like the GSoC students to benefit from this new system). Porting the existing configuration to the new system is not a matter of urgency. All the existing stuff has to be kept in mind while doing the design, though, so that the porting work is possible to do when that time comes.
The configuration system should make it easy to implement runtime-modifiable persistent configuration options in the core and modules. Currently that involves writing protocol extensions (for multiple protocols, if you aim for completeness) and managing the data storage yourself. The goal is to reduce that work, and also to make life easier for applications.
There are some notes from previous discussion at.
Goals
- The client API should be easy to use.
- The set of supported configuration options should be easy to extend.
- No protocol extensions.
- Make it as easy as possible to add a new option.
- Also support things other than statically named global options.
- For example, options for ports require dynamic keys to identify the port.
- We want (or do we?) to also support persistent data that is less configuration-like, like the stream-restore database or the equalizer data.
- Support change notifications both at client side and at server side.
- Support reading and modifying client.conf. (?)
Design
The configuration system stores "options". Options have a value and a (multi-part) identifier ("option id"). The option id has three parts: "object type", "object id" and "option name". For example, the maximum volume of a port could be stored in an option with the following id parts: "core.Port", "foocard:barport", "max-volume". The object that the object type and id describe is one of the objects in Pulseaudio: it could be a card, sink, source, port or stream-restore entry etc. The object id is whatever property identifies the object in question: in case of ports it would be the card name + port name (the card name is needed, because port names are not globally unique), and in case of stream-restore entries it would be the entry key. The object type has a namespace part, for example "core" in "core.Port" or "stream-restore" in "stream-restore.Entry". This can be used to divide the option storage into multiple files, one per namespace (one namespace for core and one namespace for each module would be the suggested way to group the options).
The client API doesn't have a concept of namespaces. The applications only need to know that if they want to access port options, they have to use "core.Port" as the object_type string.
It may be sometimes useful to refer to an option by using just one string instead of three. That can be done by just concatenating the three identifiers. The suggested separator is slash. Example: "core.Port/foocard:barport/max-volume".
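The concatenation rule above can be sketched as follows. The helper names are invented for illustration, and the sketch assumes the three parts themselves contain no slashes (the design doesn't specify an escaping scheme).

```python
# Sketch of the three-part option id scheme described above; the helper
# names are illustrative, not part of any proposed PulseAudio API.
# Assumption: none of the parts contains the separator character.

SEPARATOR = "/"

def make_option_id(object_type, object_id, option_name):
    """Concatenate the three identifier parts into one string."""
    return SEPARATOR.join((object_type, object_id, option_name))

def split_option_id(option_id):
    """Split a concatenated id back into its three parts."""
    object_type, object_id, option_name = option_id.split(SEPARATOR, 2)
    return object_type, object_id, option_name

print(make_option_id("core.Port", "foocard:barport", "max-volume"))
# core.Port/foocard:barport/max-volume
```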
The configuration system knows which object types support which options. This is achieved by each "object type owner" (i.e. the code that implements the objects of that type) registering the object type to the configuration system. The registration includes the information about which options the object type supports, i.e. which option names are valid. The configuration system doesn't have any hardcoded knowledge about which object types exist.
The set of options that an object type supports can not be changed at runtime (except by unloading the object type owner module and then loading another module that registers the same object type with different options, but that should never happen). There may be situations where a module would like to extend a core object type. For example, module-alsa-card might want to have a "tsched" option for core.Card. The way to handle this is to make module-alsa-card register its own object type for cards, e.g. "alsa.Card", and specify in the documentation that the "alsa.Card" type is a subtype of "core.Card". Being a subtype means it's guaranteed that any "alsa.Card" object also has a corresponding "core.Card" object. The association between the two objects is made by using the same object id for both.
From the configuration system's point of view, all option values are strings. Adding type information would add a lot of complexity and not have very big benefits (is it so?). The options will of course have an implicit type to be useful. The actual users of the options (applications and object type owners in the server) have to validate the values themselves. This means that we should provide parsing functions for clients, at least for all complex types, but preferably also for simple stuff like integers. At server side, values are validated when they are read from the disk and when they are set by clients or by some code in the server other than the object type owner.
Since validation is done by the object type owner code, and since modules can implement object types, some of the options can be only validated when a specific module is loaded. The way to handle the case, where an application sets an option for a module that is not loaded, is to refuse to set the option. If the module required isn't loaded, from the configuration system's point of view the option doesn't exist.
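The registration and refuse-if-unknown rules above can be sketched in a few lines. The class and method names are invented for illustration and do not correspond to any real PulseAudio code.

```python
# Illustrative sketch of the registration/validation rules; all names
# here are invented, not part of PulseAudio.

class ConfigSystem:
    def __init__(self):
        self.registered = {}  # object type -> set of valid option names
        self.values = {}      # (type, id, name) -> string value

    def register_object_type(self, object_type, option_names):
        """Called by the object type owner (core or a module) at load time."""
        self.registered[object_type] = set(option_names)

    def set_option(self, object_type, object_id, option_name, value):
        """Refuse options on unknown types or with unknown names."""
        if object_type not in self.registered:
            raise KeyError("unknown object type: " + object_type)
        if option_name not in self.registered[object_type]:
            raise KeyError("unknown option: " + option_name)
        self.values[(object_type, object_id, option_name)] = value
        return True

cfg = ConfigSystem()
cfg.register_object_type("core.Port", ["max-volume"])
cfg.set_option("core.Port", "foocard:barport", "max-volume", "90%")
```

If module-alsa-card is not loaded, "alsa.Card" is simply absent from the registry, so setting any alsa.Card option fails, which matches the "the option doesn't exist" rule above.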
Client API
This is what the client API will look like:
typedef void (*pa_server_configuration_value_change_cb_t)(
        pa_context *c,
        const char *object_type,
        const char *object_id,
        const char *option_name,
        const char *value,
        int eol,
        void *userdata);

typedef void (*pa_server_configuration_object_type_event_cb_t)(
        pa_context *c,
        const char *object_type,
        int added, /* 1 for added, 0 for removed */
        int eol,
        void *userdata);

pa_operation *pa_server_configuration_set(
        pa_context *c,
        const char *object_type,
        const char *object_id,
        const char *option_name,
        const char *value,
        pa_context_success_cb_t cb,
        void *userdata);

/* object_type, object_id and option_name can be NULL for wildcard selection.
 * If object_type is NULL, then all other identifiers have to be NULL also. In
 * case of wildcards, the callback may get called multiple times. The last call
 * has the object_type, object_id, option_name and value parameters of
 * pa_server_configuration_cb_t set to NULL and eol set to 1. */
pa_operation *pa_server_configuration_get(
        pa_context *c,
        const char *object_type,
        const char *object_id,
        const char *option_name,
        pa_server_configuration_cb_t cb,
        void *userdata);

/* The wildcard rules described with pa_server_configuration_get apply with
 * the value change items too. */
typedef struct pa_server_configuration_value_change_item {
    char *object_type;
    char *object_id;
    char *option_name;
} pa_server_configuration_value_change_item;

/* The items parameter is an array with n_items elements. If there are multiple
 * value change items given, then the value change callback is called when any
 * of the items match the event. If you want to end the subscription, or you
 * want to get updates only about added and removed object types, you can set
 * items to NULL and n_items to 0.
 *
 * The object_types parameter is an array with n_object_types elements. If an
 * object type is added or removed in the server, and it's one of the types
 * you have listed in the object_types array here, the server will send
 * a notification. For getting notifications for all object types, set
 * object_types to a pointer to a NULL pointer and set n_object_types to 1. If
 * you don't want any object type event notifications, set object_types to NULL
 * and n_object_types to 0.
 *
 * If called multiple times, the value change items and object types of the
 * last call will replace all the previous items and object types. */
pa_operation *pa_server_configuration_subscribe(
        pa_context *c,
        const pa_server_configuration_value_change_item *items,
        unsigned n_items,
        const char *const *object_types,
        unsigned n_object_types,
        pa_context_success_cb_t cb,
        void *userdata);

/* When there are option value changes that match the subscription, the
 * callback that is set here will be called multiple times. The last call has
 * the object_type, object_id, option_name and value parameters of
 * pa_server_configuration_value_change_cb_t set to NULL and eol set to 1. */
void pa_server_configuration_set_value_change_callback(
        pa_context *c,
        pa_server_configuration_value_change_cb_t cb,
        void *userdata);

/* When there are object type add/remove events that match the subscription,
 * the callback that is set here will be called multiple times. The last call
 * has the object_type parameter of
 * pa_server_configuration_object_type_event_cb_t set to NULL and eol set to
 * 1. */
void pa_server_configuration_set_object_type_event_callback(
        pa_context *c,
        pa_server_configuration_object_type_event_cb_t cb,
        void *userdata);
There has been some discussion about supporting also reading and modifying client.conf. client.conf is not centrally managed, it stores simple key-value pairs with static keys, it doesn't support change notifications and working with it probably doesn't require an asynchronous API. Therefore, the API is quite different, and maybe it doesn't need to be a part of this configuration system effort. The main point is that we might want such API now or in the future, so we should think how the configuration system API can co-exist with the client.conf API. That means basically that we probably don't want to use "pa_configuration_" as the server configuration prefix, because it's too generic.
Parsing functions
TODO
Storage format
TODO
- I like ini files as described here: --Tanu
Open issues
The namespace concept might need some refinement. The idea of splitting the storage into one file per module sounds nice, but it might not make sense to dedicate each object type to one module. Modules may want to add options to their own object types and to core object types, and possibly also to other modules' types. Let's say there's an object type "Card" in namespace "core", i.e. "core.Card". The alsa modules might want to add an option to core.Cards for enabling and disabling timer-based scheduling. Maybe the options should have namespacing too, i.e. the new option would be "core.Card/foocard/alsa.tsched"? Would every option name have a namespace part? That would be very ugly. How would this example be handled in storage? Is splitting the storage into many files a bad idea anyway?
- This is solved by not allowing modules to add options to existing object types. If everybody is happy, this issue can be deleted from here. --Tanu
Is there a need for an API that lets clients query whether a specific option is supported? Such an API could also be added later, if we are not certain yet.
One limitation with this proposal is that the client API supports only modifying existing objects. For example, adding a new entry to stream-restore is not possible in a nice way. You could set an option of a non-existent object, but that would leave the other fields of the entry uninitialized. Maybe that's tolerable, but if we want to support object creation, it should probably be done in a better way. And object deletion is not possible at all with the current API.
- I'm a bit against supporting object creation/deletion through the configuration API. Protocol extensions are still needed for that then. Maybe exposing the stream-restore database to clients through the configuration API isn't such a good idea after all. The storage part of the system could still be useful for stream-restore. --Tanu
Should we have a "reset to default" API? What about an "ask what the default is" API?
- My opinion: yes for "reset to default", no for "ask what the default is" until a good use case is found. --Tanu
Should we have an API for querying the currently supported object types?
- My opinion: yes. --Tanu
|
http://freedesktop.org/wiki/Software/PulseAudio/Documentation/Developer/ConfigurationSystem/?action=SpellCheck
|
CC-MAIN-2014-42
|
refinedweb
| 1,954
| 54.42
|
Why Cannabis, Why Post Seed & Why Now?
The cannabis industry is at an inflection point. Essential-business status created an unexpected leap in new consumer adoption. Sales were up 40% in 2020. The results appear in nearly every sector but are most visible in the quarterly reports of the publicly traded multi-state operators (“MSOs”). Green Thumb Industries’ sales were up 133% and CuraLeaf’s sales were up 200% in the LTM as of Q4 2020.
Though investments in MSOs can provide risk-averse institutional investors 2–3x returns, those seeking outsized returns and true cannabis alpha must target the early stages. Post seed, when a company is pre-Series A but experiencing early traction and product-market fit, provides a natural point for alpha-chasing investors to engage and mitigate risk. In this post, we will explain why now is the time to invest in post-seed cannabis to achieve internal rate of return (“IRR”) gains of 30% or more.
Why cannabis? The cannabis industry will double in size over the next 5–6 years. Legal U.S. cannabis sales exceeded $17.5 billion in 2020, up 46% over $12.1 billion in 2019 sales. Forecasts predict sales will double again in less than six years to $41.3 billion — and that’s just the U.S. Global cannabis sales, pegged at $21.3 billion in 2020, will grow to $56 billion by 2026. Simply, these conservative estimates position cannabis as one of the fastest growing markets and investor opportunities in the world.
So why post seed? Seed investors seek 100x returns. Series A investors target 10x returns. Late stage investors demand 2–5x returns. In cannabis, the probability of achieving these returns is aided by legalization expansion, distribution growth, and opportunities for product evolution/innovation. Institutional investors — newly awakened to the outsized returns inherent in cannabis — are flooding late stage down to Series A and driving up valuations. This leaves opportunity for post seed investing — the last round before Series A. Poseidon’s private investing data indicates a significant valuation delta between post seed and Series A. The combination of these factors creates the greatest opportunity for professionally managed, diversified funds to generate cannabis alpha.
So why now? The cannabis industry is just beginning to realize its acknowledged potential. Multiple MSOs are achieving $100 million revenue quarters, with line of sight to $1 billion in annual sales. Plus, a Blue House, Senate and White House increases the probability of any new legalization milestone. These trends are resonating with entrepreneurs who are growing in numbers and, more importantly, quality as the business climate improves. While institutional capital is funneling into later stage companies currently, natural investing cycles indicate this capital will trickle down to post seed companies today, likely via M&A, rewarding investors for their foresight. And finally, the consumers are growing more particular on the quality of their products, encouraging those with expertise in absorption, bioavailability, and efficacy to add another growth multiplier to the industry’s potential. Now is the time.
Conclusion
At Poseidon, we have seen great opportunities for investors in cannabis since our first entrée into the market in 2014. Collectively, our team has raised 7 funds and invested in nearly 200 businesses. Achieving a 10x multiple on invested capital is possible. We know this because we’ve hit this mark. We’ve helped this industry grow from the front lines, beside our founders. It’s clear to us that the combination of investing today…in cannabis…at the post seed stage, is where the greatest returns can be generated in the industry. For those investors who invest selectively, assist actively and profit aggressively, true cannabis alpha is possible.
About the Author
Patrick Rea is the Manager of Poseidon Garden Ventures, a $50 million venture capital group focused on post-seed cannabis investing. #postseed #IPO #MOIC #IRR
|
https://epaxhia-62645.medium.com/why-cannabis-why-late-seed-why-now-1dd32d950cb7?readmore=1&source=user_profile---------3----------------------------
|
CC-MAIN-2021-43
|
refinedweb
| 641
| 56.96
|
On 9/27/2017 8:11 PM, wm4 wrote:
> On Wed, 27 Sep 2017 19:52:13 -0300
> James Almer <jamrial at gmail.com> wrote:
>
>>> +#if !HAVE_KCMVIDEOCODECTYPE_HEVC
>>> +enum { kCMVideoCodecType_HEVC = 'hvc1' };
>>> +#endif
>>
>> The correct thing to do is adding kCMVideoCodecType_HEVC to
>> hevc_videotoolbox_hwaccel_deps in configure, and not forcing it on SDKs
>> that don't support it since, i assume, no computer with MacOS 10.8 will
>> be able to play hevc videos anyway.
>
> SDK version != OS you build on != target machine
>
> So this has some justification.

Neither is the case for dxva2 hevc and vp9, yet it's done like i mentioned
above. Of course, those two require an entire header and not a single enum
value, so it probably explains the decision to make them dependencies.

I'm not going to block the patch for this, so implement it however you prefer.
|
https://ffmpeg.org/pipermail/ffmpeg-devel/2017-September/216987.html
|
CC-MAIN-2020-05
|
refinedweb
| 141
| 73.58
|
It is time that I talk a little about what to do if you want your application to run on XP. There are three sets of APIs, each with subtle differences and caveats, and ultimately your choice requires deciding what platform your application must run on:
Windows Vista:
Call IShellItem2::GetPropertyStore to get an IPropertyStore interface. The output property store includes property handler properties (e.g. Photo dimensions) and innate properties (e.g. Size, Name). This is your one-stop-shop for properties.
Caveat: This technique only works on Vista and later OSes.
Windows XP with Windows Desktop Search 3.x:
You can pass an IShellItem interface to PSGetItemPropertyHandler to get an IPropertyStore interface. Much of the property system works with Windows Desktop Search 3.x, so you can ask for property descriptions and everything. One upshot is that this API works identically on Vista so if you are able to choose to use it, you don't have to detect versions or anything.
Unfortunately, innate properties are only available through methods on IShellFolder and IShellFolder2. So you're on your own for those.
Caveat: Properties written to this API may or may not readable on Vista. [Aside: The full story is more complicated, but I'd like to move on for now] Innate properties like PKEY_Size are not exposed.
Windows XP; Windows XP with Windows Desktop Search 2.x:
Properties on XP are exposed through IPropertySetStorage. The supported technique is to call IShellFolder::BindToStorage(IPropertySetStorage) for the item.
Innate properties such as size and name are exposed through various methods on IShellFolder and IShellFolder2.
Caveats: Properties written with this API may or may not be readable on Vista. Furthermore, on Vista, this technique only works for some namespaces and not others.
Gotcha: Many people mistakenly call StgOpenStorageEx. Don't do that! StgOpenStorageEx is only supported for specific formats like OLE Compound Documents or NTFS secondary stream storage. StgOpenStorageEx doesn't know how to read the EXIF header from a .JPG image. IShellFolder::BindToStorage knows how to do such things.
Best of all worlds:
If you really enjoy coding, you can detect if a given API is present by calling GetProcAddress or QueryInterface [Aside: please don't detect versions... just directly test if the API is present or not], select the best available API, and deliver an application that works better on Vista. Hey, "Works better on Vista" sounds like a marketing slogan -- you could have fun coding and sell more software too!
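The "test if the API is present, not the version" advice generalizes beyond Win32. On Windows the probes would be GetProcAddress or QueryInterface calls; the Python sketch below only illustrates the pattern, and every class and attribute name in it is a made-up stand-in, not a real shell interface.

```python
# Illustration of "detect the API, not the version": probe for the
# richest available interface and fall back gracefully. The names below
# are hypothetical stand-ins for GetProcAddress/QueryInterface checks.

def pick_property_api(shell):
    # Try the Vista-style one-stop property store first.
    if hasattr(shell, "get_property_store"):
        return "IPropertyStore"
    # Then the Windows Desktop Search 3.x path.
    if hasattr(shell, "get_item_property_handler"):
        return "PSGetItemPropertyHandler"
    # Finally the XP-era storage interface.
    return "IPropertySetStorage"

class VistaShell:
    def get_property_store(self): ...

class XpShell:
    pass

print(pick_property_api(VistaShell()))  # IPropertyStore
print(pick_property_api(XpShell()))     # IPropertySetStorage
```

The payoff is exactly what the post describes: one binary that automatically "works better on Vista" without any version tables to maintain.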
-Ben Karas
|
http://blogs.msdn.com/b/benkaras/archive/2007/01/05/choosing-your-property-api.aspx
|
CC-MAIN-2015-27
|
refinedweb
| 415
| 57.57
|
Now blogging at SteveSmithBlog.com
I have three things that have been on my wish list for ASP.NET and/or Visual Studio, and I'm curious to know what others think about them. I've mentioned some of these before on my blog or elsewhere - they're not exactly earth shattering and I'm not saying that I want them more than any other feature they might add. But each one would make my life at least a little bit easier, if they were included by default.
So, in no particular order, here they are:
Support For Generics in ASPX Markup
Eilon posted not too long ago about this topic. The idea here is that you should be able to write controls that take advantage of generics, and be able to declaratively specify them within your ASPX/ASCX markup. This would allow for things like strongly typed DropDownList controls or even TextBoxes, and would also allow for MVC views to specify their ViewDataType without having to resort to code. In the WPF world, I understand that this can be done by using the x:TypeArguments attribute. As Silverlight 2.0 takes off, it would be great to see support for generics in its XAML markup, as well. Limiting the discussion to ASP.NET for the moment, what should the markup look like? Mikhail Arkhipov discussed some of the options and challenges 4 years ago, and apparently the solution was not trivial or I have to believe we would already have it. However, I have confidence in the ASP.NET team's ability to figure this one out.
Save VS Preferences in the cloud
Since VS 2005 we've been able to save out our VS preferences to disk and then import them. This is a great feature that I've never used - I just usually don't have access to my primary dev machine when I sit down at another one, and if it's a coworker's machine I don't want to mess with their settings. With things like pair programming, it can be tough to use customized settings since there is no easy way to swap back and forth depending on who's at the controls. What I would like to see is a way to recover my settings from "the cloud" so that I can get them anywhere I go via my Live ID or OpenID or whatever. Having a quick way to switch between a couple of these would make the pair programming scenario even better. I suggested this four years ago, and I still want it. Another option that might help this situation is being able to run VS from a USB drive, so that it's completely portable. This would be cool for the "walk up to any machine" scenario but a bit less useful in the pair programming scenario. I'd go for both. The other thing I think would be invaluable for the service method is that Microsoft would be able to mine data about users' preferences (with opt-in for the privacy paranoiacs) so that their future versions of Visual Studio would ship with defaults that were informed by thousands of real world users' preferences.
Recursive FindControl
Do a quick search for this and you'll find a number of similar implementations. This generic recursive findcontrol looks like a pretty good one, based on some code from Palermo and myself. Basically, these let you get a reference to a control even if it is not in the current control's Controls collection. This happens quite often with templated controls like CreateUserWizard or LoginView or MultiView, and having a recursive findcontrol is quite a bit more flexible than hardcoding the name with $ etc (see tip 4 here). Since I found the need for this technique, I've been adding it to my Base Page class or common class in every ASP.NET project I work on, so it seems to me it should really be built into the framework.
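The recursive FindControl the post asks for boils down to a depth-first walk of the control tree. The real implementations are C# extension methods over System.Web.UI.Control; the Python sketch below just shows the shape of the algorithm, with a minimal stand-in Control class.

```python
# Language-neutral sketch of the recursive FindControl pattern:
# depth-first search of a control tree by ID. Control here is a
# stand-in for System.Web.UI.Control.

class Control:
    def __init__(self, id, children=()):
        self.id = id
        self.controls = list(children)

def find_control_recursive(root, id):
    """Return the first control with the given ID, searching all
    descendants, or None if no match exists."""
    if root.id == id:
        return root
    for child in root.controls:
        found = find_control_recursive(child, id)
        if found is not None:
            return found
    return None

# Templated controls like LoginView bury their children a level down,
# which is exactly why the plain one-level FindControl fails.
page = Control("page", [
    Control("loginView", [
        Control("userNameBox"),
    ]),
])
print(find_control_recursive(page, "userNameBox").id)  # userNameBox
```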
"Untitled Document" too !
+1 for recursive FindControl. The number of times we've asked for it, I still can't believe it's not in the framework.
This may be too specific, but I'd like to see a "virtualpathprovider" project that builds the project in a database directly. Something akin to using filesystem or web site, but adding a third option of virtual path.
Just an idea.
Yes! Recursive FindControl!!!!
And how about in the IDE adding runat=server automatically for all asp: controls.
Paul
Yes, definitely get rid of the default Title="Untitled Document" and add runat="server" to asp:controls automatically, too! I forgot about those two.
Castle Windsor Container takes a stab at generics in XML.
ClassType`1[[T]]
Type2`2[[T],[U]]
translates to
public class ClassType<T>
public class Type2<T,U>
Perhaps something of the same could be addressed for XAML.
Another request for the Visual Studio team - let me right-click to navigate to a user control from within an ASPX page, either on the <uc:MyControl ... /> tag or the <%@ Register Src="~/Controls/MyControl.ascx" ... %> tag.
I like all 3 of your suggestions as well as Plip's suggestion in the comments and your last suggestion. My biggest suggestion would be to improve the feature set that's currently available. For instance, the calendar control has an AJAX extender. I'd like to see the calendar control revamped to detect if the user had JS enabled and return the appropriate control based on that bool. I'd also like to see improvements in the web.config intellisense, the ability to right-click on an ASP.NET element and "cut & paste" the style into your skin file, and give skins intellisense.
Nice ones, Jason! Related to your skin intellisense, I think the controls that let you specify a quick look should use (or have the option to use) skins for this. So if you go with the "Elegant" DataGridViewListThing, instead of settings a bunch of style properties, or adding html directly, it should just set a SkinID and create a Skin to use. It could detect if themes were being used or not to make this decision, or just add a checkbox to the customize dialog that asks if the user would like to use skins or not.
I've been posting about some feature requests for ASP.NET/ Visual Studio , so here's one more in that
I've been posting about some feature requests for ASP.NET/ Visual Studio , so here's one more
I would like to see the extensions expanded such that I can create them as shared methods within classes (VB.NET). The fact they have to be created within Modules is a drag.
Do you have an opinion on how we can save the world before the year 2012?
Can you develope a computer program that would predict the earth's condition, given the current conditions, extrapolated into the future?
Plse contact, Regards,
Gisele Champagne,
SavetheEarth2012@gmail.com
I would like two things: first, what you mentioned about FindControl, and second, I would like to be able to view the code from different sights, well, like zooming.
+ another on the find control. I've used the export settings and keep them on a thumb drive... it would be handy in the cloud but I'm also of the opinion that cloud computing isn't all it's cracked up to be (with privacy concerns and bandwidth restrictions... e.g. Comcast). But, a decent idea none the less regardless of my opinions. ;)
Looks like we will be able to specify the clientID without resorting to inheriting from controls. Great stuff.
|
http://aspadvice.com/blogs/ssmith/archive/2008/03/16/Three-Requests-for-ASP.NET-4-and-VS-2010.aspx
|
crawl-002
|
refinedweb
| 1,303
| 70.23
|
--
Richard Jones, Virtualization Group, Red Hat
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
>From 80aad709954cc4a3a294200e242876599047cef8 Mon Sep 17 00:00:00 2001
From: Richard W.M. Jones <rjones redhat com>
Date: Wed, 2 Mar 2011 05:10:31 +0000
Subject: [PATCH 2/6] java: Remove old test file if one was left around.

If a test.img file was left over from a previous run, then it would
cause the subsequent test to fail. Therefore remove any old test.img
file.
---
 java/t/GuestFS010Basic.java |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/java/t/GuestFS010Basic.java b/java/t/GuestFS010Basic.java
index f4778dc..137fad3 100644
--- a/java/t/GuestFS010Basic.java
+++ b/java/t/GuestFS010Basic.java
@@ -24,6 +24,10 @@ public class GuestFS010Basic
   public static void main (String[] argv)
   {
     try {
+      // Delete any previous test file if one was left around.
+      File old = new File ("test.img");
+      old.delete ();
+
       RandomAccessFile f = new RandomAccessFile ("test.img", "rw");
       f.setLength (500 * 1024 * 1024);
       f.close ();
--
1.7.4
|
https://www.redhat.com/archives/libguestfs/2011-March/msg00007.html
|
CC-MAIN-2017-30
|
refinedweb
| 181
| 61.83
|