text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
dissable a group for lines
On 04/07/2014 at 03:03, xxxxxxxx wrote:
hi
i want to dissable a group of lines.
in the Python documentation i found
""" this ist a group of comments
I want to dissable"""
But when I place me code in the """ code """"
everything after this doesn't work
Question:
How can dissable a group of lines.
On 04/07/2014 at 03:17, xxxxxxxx wrote:
For me it works (I tested it in a script).
Also for comments the color changes, so you can check right away.
import c4d from c4d import gui, utils def main() : print "start" """ 1 comment line """ # another way to have one comment line print "end" """ multiple comments lines multiple comments lines """ if __name__=='__main__': main()
On 04/07/2014 at 04:05, xxxxxxxx wrote:
thanks a lot
I dont know what is the differenc to my post, wut now it works
????
On 04/07/2014 at 04:46, xxxxxxxx wrote:
""" ... """ is an Expression, unlike beginning a line with #.
It needs to be indented correctly.
-Niklas | https://plugincafe.maxon.net/topic/8011/10408_dissable-a-group-for-lines | CC-MAIN-2019-22 | refinedweb | 176 | 76.25 |
This is a discussion on Re: link() not increasing link count on NFS server - FreeBSD ; On Thu, 15 Nov 2007, Mohan Srinivasan wrote: > The code you cite, which launches a lookup on the receipt of an EEXIST in > nfs_link() is a horrible hack that needs to be removed. I always wanted to > ...
On Thu, 15 Nov 2007, Mohan Srinivasan wrote:
>.
OK, I've attached an initial patch that does this -- we still need to keep the
lookup code for NFSv2, where the file handle of the new node isn't returned
with the reply, but I drop the EEXIST handling cases. Does this look
reasonable to you? I'm not set up to easily test this scenario, however.
Robert N M Watson
Computer Laboratory
University of Cambridge
Index: nfs_vnops.c
================================================== =================
RCS file: /zoo/cvsup/FreeBSD-CVS/src/sys/nfsclient/nfs_vnops.c,v
retrieving revision 1.276
diff -u -r1.276 nfs_vnops.c
--- nfs_vnops.c 1 Jun 2007 01:12:44 -0000 1.276
+++ nfs_vnops.c 16 Nov 2007 14:35:59 -0000
@@ -1769,11 +1769,6 @@
VTONFS(vp)->n_attrstamp = 0;
if (!wccflag)
VTONFS(tdvp)->n_attrstamp = 0;
- /*
- * Kludge: Map EEXIST => 0 assuming that it is a reply to a retry.
- */
- if (error == EEXIST)
- error = 0;
return (error);
}
@@ -1837,17 +1832,9 @@
nfsmout:
/*
- * If we get an EEXIST error, silently convert it to no-error
- * in case of an NFS retry.
- */
- if (error == EEXIST)
- error = 0;
-
- /*
- * If we do not have (or no longer have) an error, and we could
- * not extract the newvp from the response due to the request being
- * NFSv2 or the error being EEXIST. We have to do a lookup in order
- * to obtain a newvp to return.
+ * If we do not have an error and we could not extract the newvp from
+ * the response due to the request being NFSv2, we have to do a
+ * lookup in order to obtain a newvp to return.
*/
if (error == 0 && newvp == NULL) {
struct nfsnode *np = NULL;
@@ -1925,15 +1912,7 @@
mtx_unlock(&(VTONFS(dvp))->n_mtx);
if (!wccflag)
VTONFS(dvp)->n_attrstamp = 0;
- /*
- * Kludge: Map EEXIST => 0 assuming that you have a reply to a retry
- * if we can succeed in looking up the directory.
- */
- if (error == EEXIST || (!error && !gotvp)) {
- if (newvp) {
- vput(newvp);
- newvp = NULL;
- }
+ if (error == 0 && newvp == NULL) {
error = nfs_lookitup(dvp, cnp->cn_nameptr, len, cnp->cn_cred,
cnp->cn_thread, &np);
if (!error) {
_______________________________________________
freebsd-current@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org" | http://fixunix.com/freebsd/288909-re-link-not-increasing-link-count-nfs-server.html | CC-MAIN-2016-07 | refinedweb | 409 | 64.1 |
1.
Introduction
In the
previous examples we used TCP Channel to communicate the remote
object in the server machines. Also we have the reference to the server project
in the client development projects. Giving the code to client is not advisable
as the client can go ahead and strip the given to know some of the implementation
details.
In this
article we will look at how we use the HTTP channel for
communication and how do we create the Meta-data proxy from the server deployed
remote objects using the SoapSuds command line utility. OK. Let us
start. The explanation is reduced here. Read my first article on Dot.Net
remoting link for basic details on remoting.
2. The
HTTP remote Server
The server is a C# console application. It to make it remote object. Below is the code for it:
//Server 01: Required Declarations
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;
//Server 02: Http based Server
classRServer : MarshalByRefObject
{
publicvoid TestObject()
{
Console.WriteLine("Object Test. Test Object Function called" + Environment.NewLine);
}
}
In the constructor, we are just printing some message so that we can ensure server object is created or not by looking at the console window of the server. Also note that this time we are using the http channel and hence included the channels.http (using System.Runtime.Remoting.Channels.Http;)
In the server application main, we are creating a http channel and then registering the remote object as a singlecall object. As the basic example (First article on DotNet remoting) has all the required explanation, I am skipping those repeated details here. Below is the code:
staticvoid Main(string[] args)
//Server 03: Create a http channel for soap and register it.
HttpChannel httpchnl = newHttpChannel(7212);
ChannelServices.RegisterChannel(httpchnl,false);
// Server 04: Register the server object
RemotingConfiguration.RegisterWellKnownServiceType(typeof(RServer),"RServer",
WellKnownObjectMode.SingleCall);
// Server 05: Make the server running
Console.WriteLine("Server Started" + Environment.NewLine);
Console.ReadLine();
}
3.
Generating a MetaData dll to share with client
Once the server is ready we are all set to go ahead and
create the Metadata assembly that can be shared with the client piece code.
Remember in the previous example when I used the TCP communication channel, I
usually ass whole server project as the reference. The alternate technique to
that is having the declaration in a separate and assembly (dll) and shipping
that to the client. Here we are going to generate separate metadata to generate
a:
Here
4. Consuming the meta data dll in the client
The
client is also Visual C# console application. Once the client
project is created, after giving the reference to dotnet remoting, the reference
to the meta data dll is given using the browse tab of the add reference dialog
box. Once the reference is given, we have access to the remote object and we can
create it using the new operator.
Getting
the reference to the meta data dll in the client project is shown in the below
video.
Video:
Here
5. Accessing the Remote object through Http
Once you created the reference to the Meta dll formed in
the server machine using soapsuds, you can simply access the remote object using
the new operator just like you create normal objects. Below is the explanation
for the code:
RServer is the name space in the server. Note that in the server implementation the Remote class name as well as the namespace name both is same.
//Client 01: Use the Meta data dll.
using RServer;
The remote object is created using the new operator. But, in the background the Meta data dll it makes a call to the wrapped assembly to get the actual proxy to the real object in the server end. And it know the communication protocol, server address and port number of the communication.
//Client 02: As we have proxy to the remote as wrapped meta data, you can use new oprator
// to create the object
Console.WriteLine("Creating the Instance of Remote object\n");
RServer.RServer RemoteObj = new RServer.RServer();
The remaining lines of code is simple as it just makes a call to the function exposed by the above created object. In user perspective it is just an object. But, the function call is executed on the server. You can observe that by looking at the sever machine's console window. Below is the piece of code that does not require much explanation:
//Client 03: Calling the Remote method
Console.WriteLine("Press any key to Make a call to Remote function\n");
Console.ReadLine();
RemoteObj.TestObject();
Console.WriteLine("Press any key to Close");
Video:Here
Note: The sample is created using the VS2005 IDE.
Accessing Remote Objects Through HTTP Channel
Client Activated Remote Objects
It is well formated before the publishing. Looks decent. My thanks to the editor who involved in it. | http://www.c-sharpcorner.com/uploadfile/6897bc/accessing-remote-objects-through-http-channel/ | crawl-003 | refinedweb | 811 | 56.86 |
NetBeans 6.8: In a .xhtml file (facelets), the autocomplete assist feature (Ctrl-Space) is not listing the names of CDI (Weld) beans defined in the application.
If I create a bean with @Named annotation, it doesn't show up in the list when I press Ctrl-Space inside an EL expression: "#{}" in xhtml file.
If I use @ManagedBean annotation instead, the bean name appears in the list and I can search its methods, but not with @Named annotation.
As an example, to reproduce the behavior, just create a Web project on NetBeans 6.8, using JSF2 and facelets, named testApp. Then, create a new java class, say 'TestClass' in 'my' package:
package my;
import java.io.Serializable;
import javax.enterprise.context.SessionScoped;
import javax.inject.Named;
@Named
@SessionScoped
public class TestClass implements Serializable {
private static final long serialVersionUID = 1L;
private String str;
public String getStr() {
return str;
}
public void setStr(String str) {
this.str = str;
}
public String clicked() {
System.out.println("Button clicked");
return null;
}
}
Decorate the class with @Named and @SessionScoped annotations, create a private String field named 'str', getter and setter, and a public method named 'clicked', for the button click action. The CDI bean is ready to go.
I'm not defining a name for the bean so a standard one is used, that is the name of the class with the first letter in lower case, in this case, 'testClass'.
Now put some code in 'index.xhtml' to interact with the bean.
<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns=""
xmlns:
<h:head>
<title>Facelet Title</title>
</h:head>
<h:body>
<h:form>
field:
<h:inputText
<h:commandButton
</h:form>
</h:body>
</html>
Finally, create an empty file in the WEB-INF folder named 'beans.xml' to trigger the CDI service.
The app works as expected. The problem is, when I type <h:inputText in index.xhtml, NetBeans shows a list of components that can be used for this EL expression, but no CDI bean is included on the list.
The JSR-299 Context and Dependency Injection (CDI) is part of Java EE 6 Specification so its features should be integrated into netbeans autocomplete and code assist system.
This is actually pretty important because there is a strong sentiment by some in the JSF expert group that @Named should be preferred to @ManagedBean in most cases. It would be a shame if people got lured into @ManagedBean just because of the IDE support.
This is serious deficiency which needs to be amended ASAP.
For now I'm assigning it to you Denis.
I have added API methods into Web Beans model which allow to get Java Elements
with @Named annotation.
Also there is a method which returns "named" value .
So I reassign this issue into JSF area.
It would also need to resolve elements (methods or fields) which are annotated with @Produces as these can also be used within EL expressions.
(In reply to comment #6)
> It would also need to resolve elements (methods or fields) which are annotated
> with @Produces as these can also be used within EL expressions.
Ignore this, the @Named annotation would be required in addition to @Produces in this case.
should be fixed in web-main#826629b831a0
As for the CDI support module, there used to be no public packages exposed, so supposing this wasn't intentional, I exposed the api package as fried to the web.jsf.editor module. I know wery little of what are the plans for this module regarding the api level, so whoever is involved, please review and possibly take appropriate actions.
By the way, I must do emphasize that the EL support code is a mess and needs to be redone from scratch. The fix is made with respect to this fact.
Integrated into 'main-golden', will be available in build *201001150201* on (upload may still be in progress)
Changeset:
User: Marek Fukala <mfukala@netbeans.org>
Log: #178687 - CDI beans names not listed in autocomplete window
Additional important fix which needs to be put into the patch as well!
web-main#56083e99ed95 - proper handling of web beans names.
verified.
100118-a10216e72f6c (web-main)
Integrated into 'main-golden', will be available in build *201001190201* on (upload may still be in progress)
Changeset:
User: Marek Fukala <mfukala@netbeans.org>
Log: #178687 - additional fix: proper handling of web beans names
The changeset uses the method WebBeansModel.getNamedElements(). The mentioned method has been introduced by following the changeset however a "simple" port doesn't work because a new implementaion of WebBeansModelProvider is needed.
Could you please provide a changeset for the method WebBeansModel.getNamedElements() and its implementation applicable in the release68_fixes branch?
The fix has been ported into the release68_fixes repository.
The fix has been ported together with the bugfix of issue #179629. If a rollback is needed, both issues have to be skipped together. Since the port is non-trivial (please see the changeset), please consider wider and deeper testing.
verified in 68_fixes build
NetBeans IDE 6.8 (Build 201001261800)
Thanks for this enhancement.
Netbeans is, without any dounbt, the best IDE around, and it gets even better with this theese quick responses to the community requests.
Geraldo.
Thanks for the words of support Geraldo! It is always good to get a feedback and even better if it is positive one. :-)
FYI: lists new CDI features. | https://netbeans.org/bugzilla/show_bug.cgi?id=178687 | CC-MAIN-2017-34 | refinedweb | 900 | 64.51 |
Brian Quinlan wrote: > 4. forcing everything into a class is stupid e.g. does anyone really > like writing Math.sin(x) over sin(x)? Although tacky at first glance, it's not entirely stupid. I've written many, many variations on sin(x) cos(x) because one often wants an efficient approximation for some algorithmically specific purpose. They all end up with slightly funny different names, so why not namespace 'em? And IIUC, you could simply write using Math and be done with the issue, I think. Can't respond to any of your other points, I haven't a clue, which is why I posted. :-) Thanks for the input. -- Cheers, Brandon Van Every Seattle, WA 20% of the world is real. 80% is gobbledygook we make up inside our own heads. | https://mail.python.org/pipermail/python-list/2003-August/231399.html | CC-MAIN-2017-17 | refinedweb | 132 | 78.04 |
In the previous tutorial, we have learned the basic of Inheritance in Java. The term inheritance refers that one class can inherit all of the properties and behavior from another class. It is used for the code reusability. That means code once is written can be used again and again in the new classes. Let's proceed chapter. In this chapter, we are going to learn types of inheritance such as single, multilevel, multiple, hierarchical, and hybrid inheritance with practical example programs.
Types of Inheritance in Java
On the basis of class, there are three types of inheritance in Java.
1. Simple/Single level Inheritance
2. Multiple Inheritance
3. Hybrid Inheritance
The classification of inheritance in Java is shown in the below fig.
In Java programming. multiple inheritance and hybrid inheritance are supported through the interface only. We will learn interface in the further tutorial.
Single level Inheritance in Java
When one class is extended by only one class, it is called single level inheritance. In single-level inheritance, we have just one base class and one derived class. The derived class inherits all the properties and behavior only from a single class. It is represented as:
Single Inheritance Example Program in Java
Let's see a simple example program related to single inheritance in Java.
Program source code 1:
package com.scientecheasy.Inheritance;
// Declare a base class or superclass.
public class A {
// Declare an instance method.
public void methodA(){
System.out.println("Base class method");
}
}
package com.scientecheasy.Inheritance;
// Declare a derived class or subclass and extends class A.
public class B extends A{
public void methodB(){
System.out.println("Child class method");
}
}
package com.scientecheasy.Inheritance;
public class Myclass {
public static void main(String[] args) {
// Create an object of class B.
B obj=new B();
obj.methodA(); // calling superclass method.
obj.methodB(); // calling local method.
}
}
Output:
Base class method
Child class method
In this example program, we have a class A in which a single function methodA(). The line class B extends A tells the compiler that B is a new class and we are inheriting class A in class B. This makes class A as the base class of class B and class B is known as a derived class. In class B, we have defined one function i.e one method methodB(). So it contains inherited members (i.e methodA() ) of class A and methodB(). In the main, we have created an object of class B and call the function methodA() which are inherited from class A and methodB().
Program source code 1:
Single inheritance is further classified as:
1. Multilevel Inheritance
2. Hierarchical Inheritance
Multilevel Inheritance in Java
A class which is extended by a class and that class is extended by another class forming chain inheritance is called multilevel inheritance. In multilevel inheritance, there is one base class and one derived class at one level. At the next level, the derived class becomes the base class for the next derived class and so on. This is as shown below in the diagram.
As you can see the above diagram, class A is base class of class B (derived class), class B is the base class of class C (derived class).
As you can see the above diagram, class A is base class of class B (derived class), class B is the base class of class C (derived class).
Multilevel Inheritance Example Program
Program source code 2:
Multiple Inheritance in Java
One class has many superclasses is known as multiple inheritance. In other words, When a class extends multiple classes, it is known as multiple inheritance. In multiple inheritance, a subclass inherits from more than one immediate superclass. Java does not support multiple inheritance through class. Multiple inheritance is as shown in the below diagram.
Practically, It is very rare and difficult to use in the software project because it creates ambiguity, complexity, and confusion when a class inherits methods from two superclasses with the same method signature. The ambiguity created by the multiple inheritance is called the diamond problem. But, the functionality of multiple inheritance in Java can be achieved by using interfaces. When two or more interfaces are implemented by a single class, only one method can share the method signature.
Now, you need to understand why java does not support multiple inheritance. May get asked in the interview, especially for freshers.
Why multiple inheritance is not supported in Java?
As shown multiple inheritance in the above diagram, one class extends two superclasses or base classes but in Java, one class cannot extend more than one class simultaneously. At most, one class can extend only one class. Therefore, To reduce the ambiguity, complexity, and confusion, Java does not support multiple inheritance through class. Let's see a simple scenario.
Consider a scenario from the above diagram where there are three classes class A, class B, and class C. Class C extends two parent classes such as class A and class B. If class A and class B have the same method msg() (say) having different implementations. As per inheritance concepts, both methods will be inherited to class C. If you create an object of class C and call the method msg() from the child class object, which msg() will get called?
So, there will create ambiguity and confusion to call the method msg from A or B class.
Since the compile-time error is better than a runtime error, Java will give compile-time errors if you extend two classes. So, whether you have the same or different method signature still you will get compile-time errors.
Let's understand how Java does not support multiple inheritance programmatically.
Program source code 3:
How do you implement multiple inheritance in Java?
Multiple inheritance can be implemented in Java by using interfaces. A class cannot extend more than one class but a class can implement more than one interfaces. Let's see how?
Program source code 4:
Hierarchical Inheritance in Java
A class which is inherited by many subclasses is known as hierarchical inheritance. In other words, when one class is extended by many subclasses, it is known as hierarchical inheritance. In this kind of inheritance, one class can be a parent of many other classes. Hierarchical inheritance is as shown in the below diagram.
In the above diagram, class A is the parent (or base class) of all three classes B, C, and D. That is class B. C, and D inherits the same class A and can share all fields, methods of class A except private members.
Hierarchical Inheritance Example Program
Let's take one simple example program related hierarchical inheritance.
Program source code 5:
Hybrid Inheritance in Java
A hybrid inheritance in Java is a combination of single and multiple inheritance. A typical flow diagram is shown in the below image. A hybrid inheritance can be achieved in the same way as multiple inheritance using interfaces in Java. By using interfaces, we can achieve multiple as well as a hybrid inheritance in Java. It is not allowed in Java.
Note: Generally, we use three types of inheritance such as single inheritance, multilevel inheritance, and hierarchical inheritance in the project level. We do not work with multiple and hybrid inheritance.
Final words:
Hope that you have enjoyed the article types of inheritance in java. All the practical example programs are very important for freshers. The interviewer may ask one question from this chapter. So practice it.
Thanks for reading! | https://www.scientecheasy.com/2019/01/types-of-inheritance-in-java.html | CC-MAIN-2019-18 | refinedweb | 1,246 | 58.38 |
A LCM interface for logging LCM messages to a file or playing back from a existing log. More...
#include <drake/lcm/drake_lcm_log.h>
A LCM interface for logging LCM messages to a file or playing back from a existing log.
Note the user is responsible for offsetting the clock used to generate the log and the clock used for playback. For example, if the log is generated by some external logger (the lcm-logger binary), which uses the unix epoch time clock to record message arrival time, the user needs to offset those timestamps properly to match and the clock used for playback.
Constructs a DrakeLcmLog.
Let
MSG be the next message event in the log, if
current_time matches
MSG's timestamp, for every DrakeLcmMessageHandlerInterface
sub that's subscribed to
MSG's channel, invoke
sub's HandleMessage method.
Then, this function advances the log by exactly one message. This function does nothing if
MSG is null (end of log) or
current_time does not match
MSG's timestamp.
Returns the time in seconds for the next logged message's occurrence time or infinity if there are no more messages in the current log.
Returns true if this instance is constructed in write-only mode.
Writes an entry occurred at
timestamp with content
data to the log file.
The current implementation blocks until writing is done.
Implements DrakeLcmInterface.
Converts time (in seconds) relative to the starting time passed to the constructor to a timestamp in microseconds.
Subscribes
handler to
channel.
Multiple handlers can subscribe to the same channel.
Implements DrakeLcmInterface.
A deprecated overload of Subscribe.
Implements DrakeLcmInterface.
Converts
timestamp (in microseconds) to time (in seconds) relative to the starting time passed to the constructor. | http://drake.mit.edu/doxygen_cxx/classdrake_1_1lcm_1_1_drake_lcm_log.html | CC-MAIN-2018-39 | refinedweb | 282 | 55.34 |
Remote Growl Notifications
For reasons that may be obvious, sometimes I want my servers to tell me something, and get my attention in a manner a bit more direct than via email or rss. (If I’m busy, I’m not checking my email. If I’m checking my rss, I’m not busy) And for reasons of geography, sms messages don’t work. Well, cells don’t work, so I don’t actually carry my cell, so. Well, It’s just not effective.
So, I need a couple levels of notification above trawling email or logs. One to interrupt me with bad but not fatal problems, and one to get my attention if I need to know now.
The lower level, interrupt me, for bugs that really shouldn’t exist, problems with important customers, or things that I need to be bothered about until I fix them. The more severe stuff is akin to the raid array losing a drive or deciding to take an impromptu vacation.
So. The best thing I’ve found for notification for things that I want to know is Growl. It pops up a little notification on screen, politely, and it can receive messages over the net. Unfortunately, it’s UDP, and there are a couple of NAT boxes between me and the servers. If it was TCP, it would be a simple matter of remote port forwarding with ssh. I always have a connection open, so it’s just a matter of adding yet another port through there. But it’s UDP, and ssh doesn’t forward UDP.
There are solutions out there using netcat (here and here and so on) all of which tie in a few netcat processes into the tunnel to listen on UDP, send TCP through the tunnel, Listen on TCP on the other end, and forward UDP to the end destination.
That should work, but what I was finding was that the netcat listener on the far end would listen for exactly one UDP connection, and then stop. Once the initial socket closed, nothing further would get through. A few hours of mucking around with this while dealing with other interrupting issues and I had enough. 10 minutes later, I had a pair of python servers that would do the UDP and TCP listening. So far they’ve been far more reliable than the netcat solution.
This runs on the far end:
#!/usr/bin/env python
import SocketServer
from socket import AF_INET, SOCK_DGRAM, SOCK_STREAM, socket
import sys
GROWL_PORT=9887
TCP_PORT=19887
DEST=’192.168.10.98′
class growl_udp_handler(SocketServer.DatagramRequestHandler):
def handle(self):
addr = (“localhost”, TCP_PORT)
s = socket(AF_INET,SOCK_STREAM)
s.connect(addr)
s.sendall(self.rfile.read())
s.close()
u = SocketServer.ThreadingUDPServer((‘0.0.0.0’,GROWL_PORT), growl_udp_handler)
u.serve_forever()
and this on the local side.
class growl_tcp_handler(SocketServer.StreamRequestHandler):
def handle(self):
addr = (DEST, GROWL_PORT)
s = socket(AF_INET,SOCK_DGRAM)
s.sendto(self.rfile.read(),addr)
s.close()
t = SocketServer.ThreadingTCPServer((‘0.0.0.0’,TCP_PORT), growl_tcp_handler)
t.serve_forever()
I’m hoping that this is reliable enough that I spend more time debugging the subject of the notifications rather than the notifications themselves. | http://www.wiredfool.com/2009/02/01/remote-growl-notifications/ | CC-MAIN-2018-51 | refinedweb | 526 | 65.22 |
Today's Little Program uses UI Automation to cancel the Run dialog whenever it appears. Why? Well, it's not really useful in and of itself, but it at least provides an example of using UI Automation to wait for an event to occur and then respond to it.
using System.Windows.Automation; class Program { [System.STAThread] public static void Main(string[] args) { Automation.AddAutomationEventHandler( WindowPattern.WindowOpenedEvent, AutomationElement.RootElement, TreeScope.Children, (sender, e) => { var element = sender as AutomationElement; if (element.Current.Name != "Run") return; var cancelButton = element.FindFirst(TreeScope.Children, new PropertyCondition(AutomationElement.AutomationIdProperty, "2")); if (cancelButton != null) { var invokePattern = cancelButton.GetCurrentPattern(InvokePattern.Pattern) as InvokePattern; invokePattern.Invoke(); System.Console.WriteLine("Run dialog canceled!"); } }); System.Console.ReadLine(); Automation.RemoveAllEventHandlers(); } }
Okay, let's see what's going on here.
The program registers a delegate with UI automation
which is called for any
WindowOpened
event
that is an immediate child (
TreeScope.)
of the root (
AutomationElement.).
This will catch changes to top-level unowned windows,
but not bother firing for changes that occur inside
top-level windows or for owned windows.
Inside our handler, we check if the window's title is Run. If not, then we ignore the event. (This will get faked out by any other window that calls itself Run.)
Once we think we have a Run dialog,
we look for the Cancel button,
which we have determined by using UI Spy to have the automation ID
"2".
(That this is the numeric value of
IDCANCEL
is hardly a coincidence.)
If we find the Cancel button, we obtain its Invoke pattern so we can Invoke it, which for buttons means pressing it.
Take this program out for a spin. Run the program and then hit Win+R to open the Run dialog. Oops, the program cancels it!
Ha-ha!
Well, how does one compile it?
I get this error:
—8<—
Microsoft (R) Visual C# Compiler version 4.0.30319.18408
for Microsoft (R) .NET Framework 4.5
closerun.cs(1,22): error CS0234: The type or namespace name 'Automation' does
not exist in the namespace 'System.Windows' (are you missing an assembly
reference?)
—8<—
Yes, but I'm not a kodhjon so I don't know what to do with that information.
Do you have a suggestion as to what to add to your Little Program?
stackoverflow.com/…/referencing-system-windows-automation
"referencing System.Windows.Automation"
The UIAutomationClient.dll is located in this folder:
C:Program FilesReference AssembliesMicrosoftFrameworkv3.0
I suspect Raymond has been replaced by an imposter; this blog post both:
> gives instructions for how to do something evil;
> is written in C# where an aspect of the CLR is not the subject of the blog.
Now, how to get the localized string for "Run"?
@JDT: This was my first thought as well. But the inline reply in the first BOFH comment is pure Raymond. Now I'm confused…
[UI automation is not intended for driving UI programmatically in production. -Raymond]
So it's not a good idea to use to write accessibility software then?
Wait for somebody using this to disable Run box as "security" policy.
I always enjoy these Automation Posts, its part of my daily job function. Its a pretty slick library, but in some senses not documented as well as it could be (although it has gotten much much better in recent years).
alegr1: Run is actually disabled on the machine on my desktop at work (as was logging in, until two weeks ago, and writing to my home directory until last week). There's some paperwork somewhere I can do to get admin rights, or just walk next door where there's a whole lab of machines on which I already have. Ironic, as a security researcher, but we all start off with standard staff builds (and the occasional glitch!) until we social-engineer ourselves more access…
@jas88: In univeristy, we used to have (boring) kiosk everywhere that have everything (mostly) locked up too. We found that the "Save as" dialog in IE is handy for us to run "telnet" to connect to the Unix clusters, in order to do things like emailing homework or so.
To be fair finding the reference for a particular namespace is much harder than it should be for some namespaces (here the reference has a different name than the namespace and the namespace documentation on MSDN doesn't mention the dll/reference)
@JK Much harder than it should be? Because the only thing I did was google for "msdn Automation.AddAutomationEventHandler", click the first link that goes to msdn and read the third line which states Assembly: UIAutomationClient (in UIAutomationClient.dll)
How exactly you make this process any easier really isn't clear to me and considering Raymond's target audience, anyone who can't figure that one out is probably reading the wrong blog – as harsh as that sounds.
@voo Well, I think there's a reasonable expectation the MSDN documentation will give the specific assembly for a namespace. Many (most?) namespaces only exist in a single assembly. Some namespaces could result in a long list, though. As you point out, using the class (or in your example, class + method name — namespace + class will also work) results in a clear reference.
[The problem is that anybody can inject a class into a namespace. I could write a "foo.cs" …]
Non-issue. foo.cs doesn't build into an assembly that is part of .NET.
@RonO Namespaces do not exist in assemblies. In fact, namespaces do not exist at all. Some programming languages have support for namespaces to ease constructing and accessing long (and thus more likely to be unique) type names. As far as the CLR is concerned, there are no namespaces. Your expectation that assemblies (which do exist) and namespaces (which don't) are somehow related is misguided.
@Raymond Different pieces of the BLC may update at different times, but they are still released as a part of an official and specific version of the .NET Framework (even if just a service pack). And the online MSDN documentation is pretty good now (complete?) at separating the documentation of the different versions. It's just as important to look at the specific version of the documentation for a class/method in the version of the .NET Framework used in a project (and yes, I've burned myself a time or two).
And tools like ILSpy seem to do a good job finding sub-classes, though I imagine it is dependent upon which assemblies are loaded.
@IInspectable We'll have to agree to disagree here. Namespaces exist to serve the same purpose in the overall Framework as they do in any project referencing those assemblies. If they did not, it would be impossible to use System.Globalization.Calendar and System.Web.UI.WebControls.Calendar in the same project and there would be no purpose in specifying using (Imports) directives.
[Different pieces of the BCL update at different times. (See .NET 3.5.) Maybe it would be nice if all the different pieces of the BCL were interrelated so that whenever, say, a new class is added to UIA, it adds itself to a list of classes maintained by WPF. But in practice, this is hard to manage. Tightly-coupled is harder to maintain than loosely-coupled. -Raymond]
Enter auto-gen, stage right. It appears those docs are mostly auto-gened anyway. | https://blogs.msdn.microsoft.com/oldnewthing/20140217-00/?p=1743/ | CC-MAIN-2016-30 | refinedweb | 1,233 | 57.47 |
in this tutorial we will learn how to connect I2C with LCD. It will reduce 4 input/output ports on Arduino board. And wiring is much simpler and easier to connect. So let’s get started.
For this you will need
- Arduino,
- I2C,
- Breadboard (optional if you solder I2C with LCD),
- Jumper wires.
This is I2C serial interface adapter. We can adjust contrast of LCD by this potentiometer. We can solder this directly on LCD. But we have already solder the LCD so in this tutorial we are going to connect I2C and LCD by breadboard. Do connection as shown in diagram.
Circuit diagram for I2C with LCD
Download library for Liquid_Crystal_I2C. After downloading it, unzip it and change the name of folder to LiquidCrystal_I2C. And copy that folder and paste it to Arduino libraries. Before uploading any sketch first we need to find out its I2C address. I2C scanner for finding address. Copy that code and paste it. Compile & Upload the sketch. Go to serial monitor. Now you can see its address. Copy that address.
And goto examples –> LiquidCrystal_I2C –> hello world.
... }
Change lcd address to previously copied address.
// Set the LCD address to 0x27 for a 16 chars and 2 line display LiquidCrystal_I2C lcd(0x3F, 16, 2);
Compile and upload the sketch, now you can see hello world on the screen. If it is not showing anything on screen you can adjust contrast of lcd by adjusting potentiometer.
Now goto examples –> LiquidCrystal –> display
#include <Wire.h> #include <LiquidCrystal_I2C.h> // Set the LCD address to 0x27 for a 16 chars and 2 line display LiquidCrystal_I2C lcd(0x27, 16, 2);
Copy this code and instead of
#include <LiquidCrystal.h> // initialize the library with the numbers of the interface pins LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
this code paste this I2C library to this sketch and delete 16X2 in lcd.begin. Compile and upload the sketch now you can see it is turn on & off display in each 500 millisecond.
LIST OF COMPONENT BUY ONLINE: (Arduino) (I2C) (LCD display) (Breadboard) (Jumper wire)
TILL THEN KEEP LEARNING KEEP MAKING 🙂
2 thoughts on “Arduino Tutorial #24 How to connect I2C with LCD.” | http://roboticadiy.com/arduino-tutorial-how-to-connect-i2c-with-lcd/ | CC-MAIN-2018-22 | refinedweb | 360 | 76.52 |
<Below this line, add a link to the exact exercise that you are stuck at.>
<In what way does your code behave incorrectly? Include ALL error messages.>
<What do you expect to happen instead?>
Nothing wrong with my code (that i know of), but the instructions sort of offers you the challenge of doing this operation outside of the built in bitwise function OR. I wanted to test what I had learned, so I spent the better part of an hour figuring out how to make this work. I would love to hear feedback on how I could have done this better.
Also, Hi All! This website is awesome!
EDIT:: so i apparently misunderstood the instructions when it said “try to do this on your own”. Lol, i had fun doing it anyways.```python
#print bin(14 | 5) # commented out for the following code to be able to fail if i get it wrong
fourteen = “1110”
five = “101”
combined = [fourteen, five]
def bitwise_OR(fourteen, five):
one = “1”
together =
different_length = abs(len(five) - len(fourteen))
final = “”
if len(fourteen) > len(five): for i in fourteen: together.append("0") five = ("0" * different_length) + five elif len(five) > len(fourteen): for i in five: together.append("0") fourteen = ("0" * different_length) + fourteen for i, c in enumerate(five): if c == one: together[i] = c for i, c in enumerate(fourteen): if c == one: together[i] = c final = "0b" + final.join(together) return final
print bitwise_OR(fourteen, five)
<do not remove the three backticks above> | https://discuss.codecademy.com/t/8-a-bit-of-this-or-that-alternative-solution/32279 | CC-MAIN-2022-21 | refinedweb | 249 | 64.51 |
First time here? Check out the FAQ!
Alternatively, you can use collect():
sage: de.collect(ep)
ep^2*diff(f1(x), x, x) + ep*(diff(f0(x), x, x) + diff(f1(x), x)) + diff(f0(x), x)
To get the coefficient of ep^2 you can do:
sage: de.coefficients(ep)[2][0]
diff(f1(x), x, x)
To get all the coefficients at once:
sage: de.coefficients()
[[diff(f0(x), x), 0],
[diff(f0(x), x, x) + diff(f1(x), x), 1],
[diff(f1(x), x, x), 2]]
If you define
sage: epcoeff = [de.coefficients(ep)[i][0] for i in range(len(de.coefficients()))]
you can access the coefficient of ep^n using epcoeff[n], so that you can feed it to desolve etc.
ep^n
epcoeff[n]
desolve
@mforets: Thank you for pointing that out! Now it works in general, see updated question. I can now also define an antisymmetric function to keep track of the parity of the arguments in order to mimic the Levi-Civita tensor. I.e, typing eps(2,1,3) returns - eps(1,2,3). Good!
antisymmetric
eps(2,1,3)
- eps(1,2,3)
.
@tmonteil: I believe that he is doing something like QQ = RealField(1000) to have numbers with 1000 bits of precision and he does not want to keep on writing everything inside QQ(). It would be nice to have a 'default' precision command that one sets up in the beginning and forget about it
Is there a way to choose a default precision such that one does not need to keep typing RealField(1000)?
?
99999^450.powermod(9999^5000,10^6000)
.
Right, s(1,1) makes sense but it is zero. So in general one can restrict the labels to be distinct. Furthermore, s(i,j) is commutative. In Mathematica terms I am looking for the equivalent of using something like SetAttributes[s, Orderless] and nothing more.
So ideally one would have the choice of defining a symmetric function via an argument to the function() method, for example function('f', nargs=2, sym=symmetric) would define a function 'f' that takes two arguments and is symmetric in the sense that f(3,1) = f(1,3)
function()
function('f', nargs=2, sym=symmetric)
f(3,1) = f(1,3)
Thank you for your answer, but this is not really what I am looking for. The name of the function in the input must be the same as the output; ie I want to type A = s(3,2)*s(1,2) + 2*s(4,2)*s(1,3) and obtain A = s(2,3)*s(1,2) + 2*s(2,4)*s(1,3). If I am allowed to use a different name in the input I already know a solution (just replace def s(*m) by def ss(*m) in my original question and type A = ss(3,2)*ss(1,2) + ... instead).
A = s(3,2)*s(1,2) + 2*s(4,2)*s(1,3)
A = s(2,3)*s(1,2) + 2*s(2,4)*s(1,3)
def s(*m)
def ss(*m)
A = ss(3,2)*ss(1,2) + ...
@Wojowu I found out that you can use the gmp implementation within Sage. Did you try it? It should give us roughly the same performance as Julia. [...]
You can also solve it a bit more directly without explicitly invoking the Groebner basis (this happens under the hood).
sage: R.<a,b,c> = PolynomialRing(QQ,order='lex')
sage: f1 = a+b+c - 3
sage: f2 = a^2+b^2+c^2 - 5
sage: f3 = a^3+b^3+c^3 - 7
sage: Rel = ideal(f1,f2,f3)
sage: Rel.reduce(a^5+b^5+c^5)
29/3
(I came up with this solution after reading the solution to Exercise 37 from the book "Calcul Mathematique avec Sage", given on page 423)
For!
There is a "profiling" section in the tutorial... which could be helpful.
I actually have two (related) questions.
First question, am I misunderstanding the following code?
forget()
assume(x>0)
solve((x^3 - 4*x) > 0,x)
The solution given by sagemath-7.3 is
[[x > -2, x < 0], [x > 2]]
but I would expect only the solution [x > 2] under the condition x > 0.
Is this a bug or am I doing something wrong?
Second question. How can I enforce the condition x > 0 myself and "solve" the resulting conditions of:
[[[D < -2],
[D > 0, D < -1/9*sqrt(1081) + 116/9],
[D > 1/9*sqrt(1081) + 116/9]],
[[D > -4, D < -2],
[D > 2, D < -1/3*sqrt(7009) + 116/3],
[D > 1/3*sqrt(7009) + 116/3]],
[[D > -4, D < 0], [D > 2, D < 11]],
[[D < -6], [D > -2, D < 0], [D > 2, D < 19]],
[[D < -4], [D > -2, D < 0], [D > 2]],
[[D > -6, D < -4], [D > -2, D < 0], [D > 2]]]
How can I solve this type of question efficiently? I believe that discarding the negative solutions in the above system
will imply that 2 < D < 1/9(116 - sqrt(1081)), for example. However, I'd like to do this for many other cases as well.
Any help will be appreciated!
s = function("s", eval_func=symmetric)
everything works. For example
sage: 2*s(2,1)*s(6,5,4,3)^2
2*s(3, 4, 5, 6)^2*s(1, 2)
So I consider this question closed! | https://ask.sagemath.org/users/23195/mafra/?sort=recent | CC-MAIN-2020-50 | refinedweb | 901 | 63.29 |
@WebServlet("/foo")
public class MyServlet extends HttpServlet {
public void doGet(...) {
....
}
}
Just one simple question :) What will be servlets good for since JAX-RS makes easier lot of servlet use cases? It looks like it will become lowend technology for framework creators only just like JDBC.
Posted by: musketyr85 on December 02, 2008 at 11:46 PM
musketyr85 - A LOT of frameworks build on the web container. JAX-RS also builds on the servlet container as one of the containers in the Java EE platform. So adding functionality to the underlying container will enable frameworks which will make developers lives easier.
- Rajiv
Posted by: mode on December 03, 2008 at 12:56 AM
Sure. That are frameworks creators who will use servlets, but my queston was more on ordinal developers who don't write any frameworks.
I was just curious whether you or your colleagues know some use cases where end user programmer would say "Hey, this cannot be done so easy with JAX-RS let's do it with old good servlet!"
Posted by: musketyr85 on December 03, 2008 at 02:51 AM
is startAsync to be used for comet a/ the asyncContext?
Not resume ?
Is there a draft document to look over?
Sample code?
Posted by: netsql on December 03, 2008 at 08:56 AM
Are you guys planing to (finally) add a unified api/framework for uploads? Today... every damn web framework does it on its own... Please fix that
Posted by: mwessendorf on December 04, 2008 at 04:48 AM
Regarding the start async proposal, I think this is a much better concept than start resume. I have written HTTP server which uses a similar concept, however asynchronous processing is transparent from the outset.
This form of processing also opens up the servlet container to parallelize requests on the same pipeline. As it allows HTTP request completion to drive dispatch of the next request, rather than service completion, and without any greater buffering demands. Performance is also much better than the conventional Servlet service model.
Posted by: gallagher_niall on December 04, 2008 at 05:56 AM
musketyr85: I recently was at a JUG and queriied the attendees to see how many people are still writing servlets and was quite surprised to see the number. However to answer your question - for JAX-RS to be able to support async for example one way it could do it is via using the underlying functionality in the container. If JAX-RS does not have any support for async (as is the case today) then what does a developer do?
Posted by: mode on December 04, 2008 at 10:48 AM
netsql: I will soon be writing an entry describing the async API. A lot of people have asked me the same question as you.
Posted by: mode on December 04, 2008 at 10:49 AM
mwessendorf: It is still in the plan. We hope to have it in the proposed final draft of the spec.
Posted by: mode on December 04, 2008 at 10:50 AM
Actually, the real question is what incremental features could be added to the base Servlet API so that a developer could choose "raw" servlets instead of any framework whatsoever?
For example, the HTTP Method and URL mapping feature of JAX-RS, but, say, without the content-type matching and conversion features.
Posted by: whartung on December 04, 2008 at 02:03 PM
So, it looks like we're still left without a usable security feature?
Posted by: kito75 on December 08, 2008 at 03:59 PM
kito75: Security enhancements are planned for the next release. However can you send me email to be more specific about "a usable security feature"?
Posted by: mode on December 08, 2008 at 04:45 PM
whartung: Can you be more specific about the HTTP method and URL mapping? Do you mean the uri templating support?
Posted by: mode on December 08, 2008 at 04:46 PM | http://weblogs.java.net/blog/mode/archive/2008/12/servlet_30_from.html | crawl-002 | refinedweb | 658 | 68.91 |
Adobe - Developer Center : Exploring full-screen mode in Flash Player 9...
1 of 7 3/5/2007 7:20 PM
Flash Player Article
Exploring full-screen mode in Flash Player 9
Tracy Stampfli
Adobe
The rise in broadband penetration has enabled tremendous growth in the use of video on the web. According to a recent report from
comScore Media Metrix, a digital media measurement firm, three in five Internet users downloaded or streamed video in July 2006. ABC's Lost and other. Full-screen mode is not supported
for windowless or transparent Flash movies. If the user has multiple monitors, Flash Player uses a metric to determine the monitor that
contains the greatest portion of the Flash movie and then goes full-screen in that main monitor.
You must have version 9,0,28,0 or greater of Flash Player installed to use full-screen mode.
Note: The keyboard shortcuts which terminate full-screen mode are Escape (Windows and Mac OS), Control+W (Windows), Command+W
(Mac OS), and Alt+F4 (Windows).
SecurityScreen to
true
to the mms.cfg file to disable full-screen mode.
These restrictions apply to the Flash plug-in and ActiveX control but not to the Flash stand-alone player or Flash projectors.
Adobe - Developer Center : Exploring full-screen mode in Flash Player 9...
2 of 7 3/5/2007 7:20 PM
Requirements
To complete this tutorial you will need to install the following software and files:
Flash Player 9
Sample files:
ActionScript API.
ActionScript 2.0:
This method is called when the movie enters or leaves full-screen mode. The Boolean argument to this function indicates whether the movie
has entered (
true) or exited (
false) full-screen mode.
ActionScript 3.0:
The AS3 event received is
FullScreenEvent, which extends
ActivityEvent.
FullScreenEvent has a Boolean
fullScreen property,
which indicates whether the movie has entered (
true) or exited (
false) full-screen mode.
ActionScript 3.0 will throw a security error in the plug-in or ActiveX control if the display state is set to
StageDisplayState.FULL_SCREEN
when it is not permitted by one of the security restrictions listed above.
Using the new version of ActionScript
EventListener.onFullScreen = function( bFull:Boolean ){}
Stage.addListener( EventListener );
fullScreenHandler = function( event:FullScreenEvent ) {};
stage.addEventListener( FullScreenEvent.FULL_SCREEN, fullScreenHandler );
Download
fullscreen_article_assets.zip (ZIP, 200 KB)
Adobe - Developer Center : Exploring full-screen mode in Flash Player 9...
3 of 7 3/5/2007 7:20 PM
To ensure that you do not get errors when publishing a movie using the new full-screen ActionScript, you may need to update your installed
version of Flash Professional or Flex so that the compiler will recognize the new properties.
Files mentioned in the instructions below can be found in the sample ZIP archive accompanying this article. Download and unzip the archive,
and follow the directions below to copy the necessary files to their new locations.
If you are using ActionScript 3.0 with the preview release of Flash Professional 9, you will need to replace the ActionScript 3.0 compiled class
files in your current installation with a version that contains the new full-screen properties. Copy playerglobal.abc from the ZIP file and replace
the older version in your Flash installation here:
Flash 9 Public Alpha/en/Configuration/ActionScript 3.0/playerglobal.abc
Note: The sample paths in this article use "en," the folder name for the English-language Flash installation. If you are using a non-Engligh
version of Flash, this folder will be something else, like "jp" or "it." Also, while the sample paths use the folder name "Flash 8," if you are
working in a different version of Flash, this folder name may be "Flash 9 Public Alpha," "Flash MX 2004," etc.
To use this new ActionScript with Flex or Flex Builder, you will need to update the Flex SDK. Copy playerglobal.swc from the ZIP file and
replace the older version in your Flex SDK:
Flex Builder 2/Flex SDK 2/frameworks/libs/playerglobal.swc
If you are using ActionScript 2.0, the AS2 class files are just text files installed with Flash, so it is possible to hand-edit the Stage class file to
add the new property. Or you can use some non-standard syntax to prevent the compiler from complaining. To add the new property to your
class file, open the Stage.as file found here:
Flash 8/First Run/Classes/FP8/Stage.as
At the top there is a list of properties of the Stage class. Add the property:
Alternatively, in your ActionScript, use the somewhat ugly syntax:
rather than the nicer:
Scaling
The scaling behavior in full-screen mode is determined by the movie's scaleMode setting, set through ActionScript or the
<object> and
<embed> tags. The default
scaleMode setting is
showAll, which means that in full-screen mode the movie will be stretched to the size of
the screen but its aspect ratio will be maintained. If you want to control the scaling behavior of the movie programmatically, the
scaleMode
should be set to
noScale. In this case, the movie will not be scaled but the Stage width and height properties will be updated in full-screen
mode to indicate the new size of the Stage, and the Stage resize event handlers will be called.
In particular, you may want to control the scaling behavior of your movies if there is a concern about performance in full-screen mode. On
slower connections or older machines, the performance of a high-quality video scaled to fill a large monitor may be somewhat slow. In this
case, you may want to offer a low-bandwidth option that keeps the size of the video sprite smaller and does not attempt to scale video to the
full size of the screen.
Sample application:
static var displayState:String;
Stage["displayState"] = "fullScreen";
Stage.displayState = "fullScreen";
Adobe - Developer Center : Exploring full-screen mode in Flash Player 9...
4 of 7 3/5/2007 7:20 PM
ActionScript 2.0 example
ActionScript 3.0 example
// functions to enter and leave full screen mode
function goFullScreen()
{;
Adobe - Developer Center : Exploring full-screen mode in Flash Player 9...
5 of 7 3/5/2007 7:20 PM
HTML for full-screen mode
To use full-screen mode, you need to add a new parameter to the
<object> and
<embed> tags in your HTML. Here is an example of the
<object> and
<embed> tags with full screen enabled:
import flash.display.Stage;
import flash.display.StageDisplayState;
import flash.display.InteractiveObject.*;
import flash.events.*;
// functions to enter and leave full screen mode
function goFullScreen(event:ContextMenuEvent):void
{;
Adobe - Developer Center : Exploring full-screen mode in Flash Player 9...
6 of 7 3/5/2007 7:20 PM
Publish template for Flash
To support the new
<object> and
<embed> tag parameters for full-screen mode, we have created a new publish template that you can use
with the Flash authoring environment. It is a version of the basic "Flash only" template that will also add the proper tags to your HTML to allow
full-screen mode. Copy the file Flash_with_FullScreen.html from the sample ZIP archive into the Flash HTML templates folder:
Flash 8/en/First Run/HTML/
Relaunch Flash. In the Publish Settings dialog box, on the HTML tab, the Template pop-up menu should now contain the item "Flash with Full
Screen Support." Select that item and publish your movie. Full-screen ActionScript will be enabled.
Publish template for Flex Builder.1.
Copy the files in the "full-screen-support" or "full-screen-support-with-history" folders into the "html-template" directory of your Flex
Builder project.
2..
Where to go from here
Visit the Full-Screen Demos page on Adobe Labs for demos contributed by the community. If you have created a demo using the new
full-screen feature, and would like to share it with the community, please add a link to your demo on the Full-Screen Demos page.
About the author>
Adobe - Developer Center : Exploring full-screen mode in Flash Player 9...
7 of 7 3/5/2007 7:20 | https://www.techylib.com/el/view/laborermaize/flash_player_article | CC-MAIN-2017-34 | refinedweb | 1,347 | 62.78 |
From: williamkempf_at_[hidden]
Date: 2001-08-22 08:07:48
--- In boost_at_y..., John Max Skaller <skaller_at_m...> wrote:
> williamkempf_at_h... wrote:
> >
> > --- In boost_at_y..., John Max Skaller <skaller_at_m...> wrote:
> > > williamkempf_at_h... wrote:
> > > Try again. If this were a new programming language,
> > > and you were designing a brand new library for it, you would not
> > > need TLS or thread adoption. So it would be nice to have two
> > > interfaces: one for brand new threading applications,
> > > the 'pure' boost interface, and an extension which also
> > > supports integration with exitsing code.
> >
> > TLS can simply be ignored in this case. As for "adoption"... if
your
> > language interfaces with libraries from other languages then
you're
> > going to need to "adopt" native threads created by other people.
>
> I'm going to try again. I have no dispute with your
> assertions. I have no dispute with the need for TLS and adoption.
> However, the need does not exist when creating a brand new
> boost supported program using the threads library. You can do
> everything without TLS and without adoption in many such cases.
> So I'm asking if there is a way to factor the threads library
> so I can be surer in such cases that I'm not using the TLS
> or adoption features, for example by using inheritance,
> or namespaces, or even header files, so that I can just
> include the 'pure' part of the library that is self
> consistent and doesn't interface with legacy code, the
> native OS, or other threads.
Again, I don't think it's possible for you to do without adoption,
period. If you interface with *ANY* library code there's a
possibility that your own code will inadvertantly call thread() in a
thread not created by Boost.Threads. Further, adoption is a hidden
implementation detail, so what purpose would there be to disallow it
even if you can somehow prove you don't need it?
And, again, as far as TLS is concerned, you simply don't use it. I
see no need to somehow shield you from "accidently" using it.
> At worst, can you separate, clearly, the 'pure'
> boost threads library from the compatibility extensions
> in the documentation, even if a language based separation
> isn't convenient?
>
> It is possible, I think, to separate discussion
> of the 'pure' semantics from the more difficult pragmatics
> of legacy code and implementation support. It may even
> be worth splitting any proposal to the committee.
> One reason is: it is likely the abstract machine MUST be
> extended to support the 'pure' semantics and interface,
> on the other hand it may be _impossible_ and pointless
> to support thread adoption in the abstract machine.
> Therefore the thread adoption semantics, when Standardised,
> are likely to be 'optional and implementation defined',
> whereas we'd like the 'pure' threading library semantics
> to be deterministic, normative, and well specified
> in terms of the abstract machine model.
Adoption *IS* an implementation detail. You won't find a single
reference to it in the documentation or in the libraries interface.
But it's a very high profile QoL issue that every implementation must
address.
> I hope this is clearer. I'm merely asking
> if the library (or at least the documentation)
> can be factored in some way, and it's only a request
> for information -- not any kind of demand for action. :-)
Well, adoption is simply irrelevant. If you feel very strongly about
TLS then propose some method to "factor" it. Simply putting it in a
different namespace isn't going to prevent you from using it (or even
discourage you from using it), and even if your own code doesn't use
it other code you link to might, so I see little benefit in this sort
of factoring.
Bill Kempf
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/08/16319.php | CC-MAIN-2021-49 | refinedweb | 642 | 63.59 |
Getting Started with Database Kit¶
Database Kit (vapor/database-kit) is a framework for configuring and working with database connections. It includes core services like caching, logging, and connection pooling.
Tip
If you use Fluent, you will usually not need to use Database Kit manually. But learning the APIs may come in handy.
Package¶
The Database Kit package is lightweight, pure Swift, and has few dependencies. This means it can be used as a core database framework for any Swift project—even one not using Vapor.
To include it in your package, add the following to your
Package.swift file.
// swift-tools-version:4.0 import PackageDescription let package = Package( name: "Project", dependencies: [ ... .package(url: "", from: "1.0.0"), ], targets: [ .target(name: "Project", dependencies: ["DatabaseKit", ... ]) ] )
Use
import DatabaseKit to access the APIs.
API Docs¶
The rest of this guide will give you an overview of what is available in the DatabaseKit package. As always, feel free to visit the API docs for more in-depth information. | https://docs.vapor.codes/3.0/database-kit/getting-started/ | CC-MAIN-2018-22 | refinedweb | 166 | 67.96 |
12 June 2008 17:42 [Source: ICIS news]
HOUSTON (ICIS news)--Shell’s announcement of a quarterly price hike and a new monthly price caught many ?xml:namespace>
Shell said in a letter to buyers that it was instituting a monthly large buyer contract price of 67 cents/lb ($1,477/tonne, €945/tonne) effective on 1 July.
Shell would also raise its quarterly contract to 72 cents/lb, a 24.5 cent increase.
Acetone sellers typically issue a nomination for the upcoming quarter and a settlement is negotiated over three months.
A buyer said the increase was “major, arbitrary” and was severe and unbelievable.
“With hurricane season here and costs higher than ever, producers are undoubtedly concerned about margins, another acetone seller said. “A higher quarterly price over a monthly price is necessary in this environment.”
Other major acetone sellers including INEOS Phenol,
Many sellers have urged buyers to move to monthly prices in the midst of volatile feedstock costs.
“It is amazing that this contract does not settle until the very end of the quarter,” one producer said. “By the time contracts are negotiated, upstream propylene has already changed three or four times and it is not a logical picture.”
Meanwhile, second-quarter acetone prices remained unsettled. Sellers have pushed for 5-10 cents/lb in hikes, citing record high feedstock prices, which have more than doubled in the year.
In many cases, sellers have had to reduce rates for economic reasons, creating a tight supply scenario.
A trader said no material could be found in June as sellers said they were “sold out” due to reducing rates on high propylene costs.
Refinery grade propylene (RGP) was trading for 73 cents/lb on Wednesday, and was up 40% from its price one year ago, according to global chemical market intelligence service ICIS pricing.
Buyers have been resistant to hikes due an oversupplied global market.
The first quarter large-buyer contract price for acetone settled flat at 47.5 cents/lb.
“Spot prices have been changing hands in the low to mid 40s cents/lb but material is suddenly not available for under 50 cents/lb,” a trader said
($1 = €0.65) | http://www.icis.com/Articles/2008/06/12/9132111/shell-us-acetone-hikes-surprise-dismay-buyers.html | CC-MAIN-2014-52 | refinedweb | 363 | 61.56 |
The objective of this assignment is to implement a Set as a linked list. Note that a Set does not have duplicates.
You are given a class called LinkedList that contains an inner class called Node. You have already seen these classes in your lectures. Download the LinkedList class containing Node here. Note that some of the attribute visibilities were changed to protected and methods were added or changed, so make sure you use the provided code here, and not the code from the lecture.
Your task is to implement a new class called Set that extends the given LinkedList class and implements the given interface called ISet. Download ISet here. You will declare your Set class as follows:
public class Set extends LinkedList implements ISetYou may NOT use any existing Java Collections class including but not limited to ArrayList and HashSet.
The methods in ISet are as follows:
This method adds an item to the set. A duplicate item is not added. Hint: Override the add method defined in the LinkedList class but use its functionality using super.
This method returns true if the item is in the set, false otherwise.
This method returns a new array that contains all the items in the set. If the size of the set is 0, return an empty array. The objects in the set are unique (i.e., no duplicates), so the returned array cannot contain duplicates.
This method creates and returns a new set from the items contained in the elements array. The array may contain duplicates, but the Set implementation should not contain duplicates. If the array is empty or null, return a set of size 0 with head referring to null. If duplicates of an item are present in the array, then ignore all the occurrences of this item after the first one.
This method returns a new set that contains only those items that are present in both "this" set and the other set. The order of items in the returned set does not matter. The other set and "this" set remain unchanged.
This method returns a new set that contains the union of the items that are present in "this" set and the other set. Obviously common elements should appear only once in the result. The order of items in the returned set does not matter. The other set and "this" set remain unchanged.
You cannot change ISet, i.e., no method declarations may be added, deleted, or changed. You cannot change the implementation of LinkedList. Do not copy the methods from LinkedList to Set. You should only implement the new methods declared in ISet in Set. It is okay to add private helper methods to Set.
Test your program with test data in addition to what is already provided in the assignment below. Keep in mind that we will not test your main method -- the methods you implement will be tested directly. However, you should use your main to test the methods that you write. A barebones main for Set can include something like:
public static void main(String[] args) { ISet tester = new Set(); String[] names = {"Alex", "Hajar", "Asa", "Sudipto", "Koen", "Asa"}; ISet s1 = tester.fromArray(names); System.out.println("After creating set s1 from array:\n" + s1); System.out.println("Printing array from s1:\n"+ Arrays.toString(s1.toArray())); String[] otherNames = {"Gareth", "Alex", "Swapnil", "Chris", "Asa"}; ISet s2 = tester.fromArray(otherNames); System.out.println("After creating set s2 from array:\n" + s2); ISet s3 = s1.intersection(s2); System.out.println("Intersection of s2 and s3:\n" + s3); ISet s4 = s1.union(s2); System.out.println("Union of s2 and s3:\n" + s4); }
We encourage you to write your own tests for each method to test various scenarios. Preliminary testing will be performed on your methods to partially test their functionality. More comprehensive tests will be used during final testing. During preliminary testing, your score equals the number of test cases passed. However, during final testing, certain test cases may be weighted more than others. More difficult methods will be worth more points.
Submit the Set.java file via the online checkin system. This system performs preliminary testing of your program using the examples above. Final grading will be performed using a different set of test inputs. Passing preliminary testing guarantees a 20% on the final grade. | http://www.cs.colostate.edu/~cs161/Fall16/more_assignments/P8/doc/P8.html | CC-MAIN-2019-09 | refinedweb | 722 | 67.04 |
My first blog post will be a small but useful little tip. One of the things that's not built into deep zoom is the ability to specify at which level to stop zooming out, and at which level to stop zooming in.
Check out the following example (which I will use in future blog posts for more examples). Please note that there is a bug in the mousehandler here where the browser still captures the wheel (not just the app), something I intend to fix in a future post.
Launch in a separate window
This example is a very deep sparse image - it is a collection of vacation pictures I took while I was in Europe last summer. I laid them out nicely for a pleasant browsing experience.
One of the issues with simply checking the zoom level and resetting it when the zoom level passes a certain point is that Deep Zoom uses its own animations for zooming to a certain level. So simply checking the current zoom level when a user flicks the mouse wheel and ensuring that a certain level of zoom isn't passed doesn't do the trick - at the time you set the new zoom level, Deep Zoom only initiates the animation, but the current zoom level will not have changed. You need to instead anticipate what the zoom level will be after you zoom, and make sure to adjust the target zoom rate appropriately.
The way this is done here is to keep track of my own zoom level, check how deep I've zoomed in or how far I've zoomed out at any given time.
Another thing that's been done here is to open the XML file for the image, and check for the extents of the image and not let the user zoom any further than 2x the maximum resolution of the image. This was done using the LINQ library in Silverlight. Note that you also have to take into account the namespace in the Deep Zoom image XML file, which is something for which I haven't seen any examples on the web, but it's part of the project below:
Zooming with Constraints Sample
Note: I didn't include the image in the sample because it's too big (167 megs). When you build and run the app, you won't see the content, though the concepts are all there.
PingBack from
This is good stuff.
How would you modify it to work with a canvas instead?
Nice Stuff..I have similar on my blog..I really appreciate "Buttons" which you created for zooming, I done this using DeepZoom Composer..Nice one!
Cool Stuff indeed. I noticed when scrolling that the screen "jiggs" with each roll of the mouse, is there a way to stop that?
Oops nevermind! Somehow I missed that section about it grabbing the browser. :)
Bill Reiss on Embedding fonts, .net Curry on hosting SL Content, Corrina Barber updated another SL2 skin
If you would like to receive an email when updates are made to this post, please register here
RSS
Trademarks |
Privacy Statement | http://blogs.msdn.com/lutzg/archive/2008/08/05/zooming-with-constraints.aspx | crawl-002 | refinedweb | 521 | 66.07 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to create dynamic selection field (fields.selection)?
i face a problem with filling the (fields.selection) with dynamic data comes from database
so for example if i change the customer value i want to fill the selection field with the coming data from database dynamically
i just need to do this with the selection field
from openerp.osv import osv,fields
PACKAGE_TYPE_SELECTION = []
class test(osv.osv) :
_name = "Test"_columns ={ 'selection': fields.selection(PACKAGE_TYPE_SELECTION, string='Selection'), }
Hello Ali,
You can call a function for your selection field.
Ex:
class ...
def get_journals(cr, uid, context=None):
journal_obj = self.pool.get('account.journal')
journal_ids = journal_obj.search(cr, uid, [], context=context)
lst = []
for journal in journal_obj.browse(cr, uid, journal_ids, context=context):
lst.append((journal.id, journal.name))
return lst_columns ={ 'selection': fields.selection(_get_journals, string='Selection'), }
Hope this will help you.
Thanks A lot for your reply but if i want to do it via onchange method ?
No, you cannot do it with onchange.".
Hope you will get custom dropdownlist.
Best Regards,
Ankit H Gandhi.
Make your field a many2one field
'my_field': fields.many2one('my.table', ... ),
put the 'selection' widget
<field name="my_field" widget="selection" ...>
in an 'on_change' method set the appropriate domain for your selection field and return this domain , this will change the values appearing in your dropdown selection list
def onchange_for_my_field(self, cr, uid, ids, param1, param2, ... , context=None):
...
return {'domain' :
{'my_field':
[('field1','=',val1),('field2','=',val2),... ] }
}
where field1 and field2 , ... are fields in the table 'my.table' that could be used to filter the result.
Hope this solution is useful
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-create-dynamic-selection-field-fields-selection-99421 | CC-MAIN-2017-13 | refinedweb | 315 | 52.87 |
Redesign
In progress. Discuss.
It’s just data
In progress. Discuss.
Brad Wilson: I wonder when my program is going to get TiVo-style smarts and say "If you like Don, Sam, Joshua, and Ingo, then you'll love Sam Gentile and Craig Andera!".
Note that Tivo doesn't say if you love CNN, MSNBC, and the History Channel, you'll love the Discovery Channel. The recommendations are at the individual show level. Now that's what I'd like to see.
A spambayes in reverse. Read a few blog entries and push on little green thumbs up or red thumbs down icons. Then any all all posts on topics that interest you in your greater neighboorhood will come to your aggregator.
In processing a comment API request, I may end up grabbing excerpts, formatting, etc. I can also spell check. Would there be interest in a variant of the comment API which simply returns back what would have been posted instead of actually doing it?
It seems to me that there are the following options for implementing this API:
One thing I think is important is that such an option not be ignored. In other words, I presume that a client would prefer a failure (HTTP status and/or SOAP fault) than to have the request actually processed. Unless somebody sees something I don't, that reduces the options to two: SOAP headers with mustUnderstand attributes have this semantic, and clearly routing to a different URL can be presumed to do this effectively.
Suggestions?
Dare Obasanjo: I'll definitely add support to Joe's CommentAPI to RSS Bandit but I doubt I'll be doing the same for Sam's alternative SOAP version since I can't see any motivation for supporting both besides buzzword compliance.
At the cost of optionally supporting an SOAP envelope and responding in kind, I get better and easier integration with tools and languages like Radio and C#. If two elements and future extensibility is too heavyweight for you, then no problem. That's what optional means.
As to your uncertainty about which version of XHTML is meant by <xhtml:body>, wouldn't the same concern (and more) apply to <content:encoded>?
Chris Anderson: if you favorite aggregator isn't showing the full text of my blog, send them a mail and get them to support <xhtml:body>!
My weblog now validates as XHTML strict, and my implementation of the Comment API will now produce and consume XHTML bodies.
Users of back-level browsers may find that the comment form doesn't remember you any more. If this function is important to you, you will need to find a browser that passes this test.
My WSDLs for the comment API (example) simply declare xhtml bodies as mixed sequences of any for now. Let me know if it would help if this were restricted in some manner..
Don Box's RSS 2.0 feed now supports <content:encoded>. I was kinda hoping that it would support <xhtml:body> instead (literally instead of encoded). I'm sure that NewsGator would have adapted. Needless to say, I would have too.
ECMA:.
John Schneider documented some of the earlier work that influenced this spec. I haven't kept close tabs on how it has evolved since then, but I'm confident that Tim Bray would approve..
Seairth Jacobs: Just when I thought that content negotiation was finally going to make it, I discover...
This weblog now supports the comment API, with a few additions.
Upon success, not only will a HTTP 200 status code be returned, but the body will contain an updated RSS item with the link, sanitized description, etc. Should there be any failures, I will report back with a HTTP 500 status code and a SOAP fault.
If the request comes in with a SOAP envelope and/or rdf:RDF element, I will respond in kind. That's just the kinda guy I am. Note that neither of these are required in order to do basic functions, and if not present, the response will be a clean simple item with no wrappers or namespace declaration.
However, be forewarned that in the future there may be additional functionality which may require SOAP and/or RDF. The commitment is that basic functionality will not require such wrappers.
Obituary
Rael Dornfest: The O'Reilly Emerging Technology Conference has TrackBacks (and their associated auto-discovery RDF) baked into every single keynote, tutorial, session, and BoF page. This means you can target your bloggings of the event, providing both us, the organizers, and your peers with live feedback on the goings on. <good on you, terrie!>
Top 100
Technorati now sports the familiar white on orange
icons for a number of feeds. Doing a quick scan the pattern
seems to be that only those weblogs that have
autodiscovery tags are so listed.
Before reading this, please read this.
Gary Burd: I checked in the basic code at Source Forge.
Cool. Please add rubys to the project.
Due to persistant prodding by Dare, I've created an addendum dictionary. Leave comments here if there are particularly troublesome words that you would like included..
Joe Gregorio: Attacking or trying to tear down other peoples work is non-productive and plainly not my style.
All I can say is that statements like this are not the way to get my cooperation.
While a number of intelligent people have contributed to the thread mentioned above, I still believe that creative solutions exist to all the "problems" outlined there.
Ones that are script/template friendly. Which are regexp compatible. Which are REST and SOAP compatible. Compatible with all the aggregators that I know of. And, in an area that I had not previously considered exploring, RDF compatible.
Tomorrow I plan to name that tune. After I locate my asbestos underwear.
An essay I wrote over seven months ago got 300 hits in the past 24 hours. This was due to a mention at the bottom of a lengthy and compelling rant by Joe Gregorio. Which was mentioned third out of a list of three items in a blog entry by Tim Bray. Which was mentioned by Slashdot.
While this doesn't guarantee that anybody actually read my essay yesterday, it does provide an indication that a large number of people actually read every word of Joe's and Tim's.
James Snell: If InfoPath does for XML and Web services what Excel did for Spreadsheets, bravo to Microsoft, good job.
James Snell: I'd like to propose a minor update to RSS 2.0. The update would add nothing more than a namespace declaration for RSS..
Gary Burd: I have been looking at how to merge my code with Sam Ruby's Mombo.
I'll comment when I see code. Don't worry about it being rough or incomplete. Publish early and often. Enjoy your trip.
Timothy Appnel: There are zero occurrences of the word asynchronous in the current JSR 172 (J2ME Web services) draft specification.
Joe Gregorio: I don't think I'll do it, but I am always open to persuasive arguments...
There's nothing more persuasive than seeing the problem with your own eyes. Since you seem to like C# on the client...
wsdl CommentAPI.wsdl csc capiC.
I may not be reading this correctly, but it appears to me that Dave instead of federating we each run more servers that don't talk to one another. I must be missing something.
In any case, it looks like Dave and Lawrence are going to move the backend app that services weblogs.com to a faster server... something that many of us will greatly appreciate. Thanks!
Simon Carstensen: BlogPing is a Web application that tracks changes to weblogs and other news-driven websites through an XML-RPC interface and forwards the ping to a number of web services. A non-propietary weblog-ping service
Cool. This is an RFC, and I do have comments. None of which will be any surprise to regular readers here.
If you look at the original simple weblog ping interface, there are two parameters. blo.gs adds two more. Your proposed interface now has 7 parameters.
My suggestion to all: if you are going to design an interface to be used by others, start by making it exensible. Start with namespaces and named parameter associations.
Now, take a close look at the first 6 out of 7 parameters in the RFC. These are straight out of the RSS spec. Why not simply POST the RSS, possibly with an envelope?
Suppose you have a parameter that you want to pass that RSS doesn't currently support? That's what namespaces are for.
Joe Gregorio: would it be more palatable if the CommentAPI supported an optional SOAP envelope?
Just be aware that anybody who sends you a SOAP envelope is likely to expect you to respond in kind. To help further discussion, here is a Cheetah template for SOAP faults. Invoke it from within any except: block in Python.
James Snell: Almost done with a Liberal RSS Parser for Java... First step towards Jagguar, a three-paned RSS aggregator for Java being built using the Eclipse SWT library. It will run as a standalone application or within Eclipse as a plugin.
Cool!.
Jason DeFillippo: If you are running any other blog tool that lets you specify sites to ping feel free to add it in and try it.
OK, I'm in. I now ping blo.gs, and blogrolling.com and weblogs.com. It is nice to see that there are now regional services.
What would be really nice is if these services could federate. If this were done, then I could ping one local or topical or fast server which would then send pings on my behalf.
Dare Obasanjo: Your blog is the only one I've seen which supports comments that I tend to regularly post to so whatever you support is what's going to end up in RSS Bandit.
What Joe Gregorio has defined will undoubtedly get supported in Aggie and RESTLog. Apparently, if I were to support it, RSS Bandit would too. I suspect others would follow.
The only thing holding me back at the moment is seeing if ChrisAn and Don Box, et. al. want to be involved.
Steven Noels: Welcome Gump
LOL! What Steven doesn't mention is that gump is nice.
Sebastien Paquet: I believe a critical element to get a sustainable system is for people to get reasonably quick feedback in return for the extra effort expended in creating metadata.
John doesn't get it. Sterling does. It took Sean ten minutes to figure it out. Doug sees the future. Giulio is asking the right questions.
Perhaps it is time for another essay. A short one, this time.
Rebecca Dias is a currently employed as a marketing droid for Microsoft, but I knew her before she was assimilated. In any case, she is starting off the new year right (no, I won't share her age ;-)) with a new weblog.
Welcome Becky!
Don's non-proposal garnered considerable criticism for not providing the appropriate amount of respect for prior art. The following modest change to the header should address this:
<soap:Header <title>My first item</title> <date>2003-03-12T11:02:14Z</date> <creator>Don Box</creator> <creator>Joe Beda</reator> <creator>Tim Ewald</creator> <creator>Chris Anderson</creator> <subject>Rocks</subject> <subject>XML</subject> <description> this is some <xhtml:em>important</xhtml:em> text. </description> </soap:Header>
Aaron Swartz: RSS 0.90, RSS 0.91, RSS 0.92, RSS 0.93, RSS 0.94, RSS 1.0, RSS 2.0, RSS 3.0, ESF 1.0, XFML 1.0, RSD 1.0, FOAF, GeoURL, XHTML 1.1, CSS, Bobby, RSS Valid, CC-licensed, MT-powered. This is what a real weblog needs in our modern world. Raging Platypus, I salute you.
I feel horribly outclassed.
Tim Bray: Hyatt responds to trackbacks...this style of product development makes me much more likely to become a customer.
For those following such things, I've added the following lines to post.sanitize() in order to more gracefully handle paragraph and break tags in trackbacks and comments:
body=re.sub('<br\s*/?>\n?','\n',body) body=re.sub('</?p>','\n\n',body).strip().
Ben Hammersley: The trouble with small pieces loosely joined is that if one goes, the others look really silly..
Chris Anderson How many people should I expect to host on a 512MB machine?
Mark Pilgrim's and my blogs are hosted on the same machine, and combined get a fair amount of traffic. This currently is a 1Ghz AMD Duron processor w/512MB.
CPU and memory usage was not a problem even when my blog was dynamically generated via CGI for every request.
I'm very curious about your XML formats both for interchange and persistence. Is this documented somewhere?.
Gary Burd: I recommend extending Mombo's comment file format to include more fields than title and body. The attribution information should be moved to one of the new fields and be formatted by the template.
I love it! Gary, how hard would it be for you to come up with a weblogging package that has roughly the same functionallity as mine that you could live with? I'd love to collaborate with you. If you feel likewise, feel free to use as little or as much of my implementation in this endeavor as you see fit.
Could you host a cvs repository on your site? Or would you prefer it elsewhere?
Perhaps we can get Joe to join us. ;-)
Gary Burd: Now that's overkill. For that matter, this entire post is overkill.
Chuckle! Don't be shy. Truth be told, I want to go a step further... make the original file in tempfile.mktemp() and move the try:link/unlink/chmod code to another script w/setuid.
Thoughts?
Timothy Appnel: I have a new appreciation for the elegeance and simplicity of XML markup. Not that I didn't have one before its just grown the size of the Empire state building and illuminated in neon.
Obviously, I'm currently embarking on a similar mission, and share Tim's appreciation for XML. My goals, however, are much lower than Tim's: I'm not trying to create a full markup language. I'm applying 80/20 whenever I can: e.g., unordered lists are enough. The times when full functionality is required, I'll personally use full XHTML.
I'm currently looking into textile for inspiration...
The following is now supported in comments:
Clemens Vasters:.
Obviously, Clemens hasn't heard about the expires header. All you have to do is predict with 100% accuracy when the next POST or DELETE request is going to come in, and all caches will remain perfectly in synch.
It is just a SMOP.
1111111111111111111 looks promising.
I can't make unique titles, but I can make unique links. Try again, and see if you like the results any better. As for your comments on "standards", simply let me know what format you would like me to support and I will see what I can do.
I are a good speler
I laughed so hard, tears came out of my eyes. Thanks Ingo!
Given this, I can only interpret my name being mentioned in the article as a well intentioned compliment, however much I disagree with Mark's assertions of SOAP's implied use and incompatility of this use with REST.
There also is one important aspect that Clemens seems to have overlooked, which I will generically refer to as bindings. When doing SOAP using XML over HTTP, a URI and a SOAPAction are required. Non HTTP and non pointy-bracket serializations of the XML Infoset should also be supportable. A concrete example: a binary message sent over MQSeries may be a vital part of a flow of business process.
Sam Ruby: What I never will understand is how I got so lucky to find someone who would put up with me for so long..
It is amusing to see that Network World views me as a REST backer. Thanks go out to Dave Chappell for forwarding me this link.
Less Code
Thanks Mark, Joe, Simon.
I agree with Matt. I simply wanted to see what wxPython could do.
The RSS validator has always flagged script, meta, embed, and object tags. But the real fixes need to be in the aggregators. Kudos to Aggie.
I'm not sure what I am going to do with it now, but I've uploaded the source for others to marvel at and/or ridicule..
Suffice it to say that I thoroughly enjoyed the presentation. In the process he of the 45 minute presentation, Don visually and graphically demonstrated not only the points made above, but also in the process he reinforced a number of entries he has recently made to his weblog: I have a new blog front end, Skin is in the game, and Clemens takes the red pill.
Gordon Weakliem: The most important RSS element: <title>
Looks like I am going to miss Andy, but I am looking forward to meeting with Don and Costin. Others in the greater bay area interested in meeting, simply let me know.
It's time to see how many people I've pissed off lately. I tried harder this year.
Since I use Apache, I can get support for conneg virtually for free through the use of an 'Option +MultiViews'. Unfortunately, as I am currently set up, this only works on the level of what is currently cached statically. This could produce inconsistent results based on the order in which requests are received. Not good.
Another alternative would be to process all requests involving conneg dynamically.
I'm also concerned about the implications of this to other caches. Perhaps Aggie should have an option to turn this feature off in case someone is faced with a less than intelligent proxy which caches?
Finally, I'm skeptical about the existence of client software that can handle multiple formats (not just RSS 1.0 vs RSS 2.0, but RSS vs HTML or PDF). To do this right, the recipient should verify the content type received, and one simply can't rely on this HTTP header for RSS feeds at this point.
Question: from a fiduciary responsibility to one's shareholders and independent of the availablity of source, what is the proper price point for the .NET Framework?
This is a loaded question, and perhaps one that none of us should speculate on. However, one thing you will find is that it is impossible to answer that question without an understanding of the business goals of your organization. | http://www.intertwingly.net/blog/2003/03/ | CC-MAIN-2014-15 | refinedweb | 3,137 | 66.64 |
Hi,
We’re using Aspose.Cells 21.9.0 before and in the process of trying to upgrade to 22.3.0. Unfortunately, it looks like Name.RefersTo and Name.R1C1RefersTo behaviours have been changed. I was wondering if I can confirm whether this is expected.
Here’s the code that was used for testing
using Aspose.Cells; using System; using Range = Aspose.Cells.Range; namespace DummyApp { class Program { static void Main(string[] args) { Workbook workbook = new Workbook(); Worksheet worksheet = workbook.Worksheets.Add("sheet2"); Range range = worksheet.Cells.CreateRange("A1"); range.Name = "identifer"; Name name = workbook.Worksheets.Names[0]; Console.WriteLine(name.RefersTo); Console.WriteLine(name.R1C1RefersTo); } } }
In 21.9.0,
name.RefersTo was “=sheet2!$A$1” and name.R1C1RefersTo was “=sheet2!R1C1”.
However, in 22.3.0,
name.RefersTo was “=sheet2!$A$1:$A$1” and name.R1C1RefersTo was “=sheet2!R1C1:R1C1”.
I was wondering if you guys can confirm if this changed behaviour is correct. If you need any more info regarding this, please let me know.
Thanks a lot! | https://forum.aspose.com/t/name-refersto-and-name-r1c1refersto-breaking-changes/243677 | CC-MAIN-2022-40 | refinedweb | 172 | 65.69 |
Compiling Error - Java Beginners
Compiling Error cant able to compile java servlet file in the command prompt? WHY?
but i can compile normal java file
give me answer to fix my problem
CoreJava Project
CoreJava Project Hi Sir,
I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account
Compiling Error - Java Beginners
Compiling Error cant able to compile java servlet file in the command prompt? WHY?
but i can compile normal java file
give me answer to fix my problem. Hi Friend,
Do you have servlet-api.jar in the lib folder
error while compiling - Java Beginners
error while compiling i am facing problem with compiling and running a simple java program, can any one help me, the error i am getting is javac is not recognised as internal or external command Check if you JAVA_HOME
error in compiling j2me apllication - Applet
error in compiling j2me apllication hi,
in my j2me application for video,m getting only the audio.i ll send the code.
package src.video_streaming;
import javax.microedition.media.Manager;
import
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
a method to an interface then
we need to implement this method in all... to construct the objects. However in
few conditions we need to get a class...(String)
returns a Class object reference. Some times we need to get a
class
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
to an interface then
we need to implement this method in all... to construct the objects. However in
few conditions we need to get a class reference... Class.forName(String)
returns a Class object reference. Some times we need to get
remove given error
remove given error class Amol
{
public static void main(String args[])
{
System.out.println("Hello");
}
}
D:\ammy>javac Ammy.java
Ammy.java:5...
System.out.println("Hello");
^
1 error
compiling programme IJVM
compiling programme IJVM I TRY AND COMPILE THE PROGRAMME BELOW BUT HAS ERROR APPEAR ALL THE TIME,,CAN ANYONE CORRECT THE CODE AND WRITE UP THE NEXT METHOD FOR POWER IN THE PROGRAME AS WELL...URGENT
// Program to read two single
compiling
CoreJava - Java Beginners
core java an integrated approach I need helpful reference in Core Java an integrated approach
CoreJava
corejava
Need to Remove Duplicate Records from Excel Sheet
Need to Remove Duplicate Records from Excel Sheet Need to Remove Duplicate Records from Excel Sheet. I have one excel sheet having two fields... empnum rating (without using sql query have to remove records from excel using java
error
error while iam compiling iam getting expected error
Compiling and Interpreting Applications in Java
following topics:
Compiling Java Program
Interpreting Java Program...
and then use the command prompt to run the example code.
Compiling Java Program
For compiling Java program, first your code must write the code using text
editor
Need to remove duplicated rows from excel using apache POI API
Need to remove duplicated rows from excel using apache POI API Hi,
Need help from you.
I am able to generate excel sheet with data also... to remove duplicated rows from excel. same as below i need to do by using apache element from stack
error message and if not then remove the names from stack until the stack is empty...remove element from stack Hi there
I'm looking around on the internet how to remove names from stack by asking user how many names they want
Remove duplicate values
Remove duplicate values i am trying to insert values into database from excel spreadsheet .am doing it using JDBC.connected both excel and sql....
q1. how can i eliminate duplicate values during insertion? please i need code
error
is"+sum);
}
}
this is my program i need to print the series (1+2/12)+(1+2
program not compiling
program not compiling Hello can you help me with this program,I am trying to add a loop at the end of the program but it is not compiling,thank you.
import java.util.*;
import java.text.*;
class HardwareItems
{
String code
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
NEED A PROG
NEED A PROG whats the program to add,delete, display elements of an object using collecions. without using linked list
Hi Friend,
Try...));
}
System.out.println();
System.out.println("Enter the index of element to remove
compiling .class files
compiling .class files how do i compile .class files using eclipse
Photoshop Tutorial : How to remove an object from a picture
How to remove an object from a picture
We are going to learn the method to remove an object
from a picture. If you want to remove an object, which you don't want to like
Compile error - Java Beginners
Compile error I get this error when compiling my program:
java:167... to be thrown
tc.countLines(inFile);
^
1 error... be the newline
* character '\n', space character '\s', etc. Trim should remove
Need help with this!
Need help with this! Can anyone please help me... the entrys in array score = -1 for later error checking.
for(arrayCounter = 0... to effectivly end the loop with out the need to break it.
for(i=(0); i <
Compiling package in command line
Compiling package in command line Hi friends,
i am totally new to java programming, i am basic learner in java.
My query is, How to compile java... how to compile it.
Thanks & Regards,
Sham
Compiling Java
corejava - Java Beginners
corejava pass by value semantics Example of pass by value semantics in Core Java. Hi friend,Java passes parameters to methods using pass by value semantics. That is, a copy of the value of each of the specified
process of compiling and running java program
process of compiling and running java program Explain the process of compiling and running java program. Also Explain type casting
PHP Remove Duplicate Values
PHP Array Remove Duplicate Values
In PHP if we need to remove duplicate values then we can use array_unique()
function. The functionality of it is to accept an array and returns another
array without duplicate values.
General
Friends need a help on ruby..
Friends need a help on ruby.. Friends i need a ruby script... if the script got failed, the html report will have the error message with the line number on which error is there.
If the script is executed successfully than
corejava - Java Beginners
corejava - Java Interview Questions
java program...need help as soon as possible!!! - Java Beginners
java program...need help as soon as possible!!! Modify the Lexer (15.... Include line number information within tokens for subsequent error reporting, etc...)
5. }
If you encounter an error, e.g. you find a "%" on line 7 of the source
Remove history
Remove history how to remove history in struts2 after clicking on logout link
Need some help urgently
Need some help urgently Can someone please help me with this below question. I need to write a class for this.
If you roll Y standard six-sided dice... number of possible combinations.
The actual question got some error. Its
compile error
program with Test.java and try to compile with javac test.java an error like...
public class A
^
but if i remove public before class then the test.java
still error
second button which i,ve added in ma frame.... plzz smone help me to remove error
compilation error
compilation error Hi my program below is not compiling,when I try to compile it i'm getting the error message "tool completed with exit code 1".The loop has to be at the end of the program can you help me.
Blockquoteimport
need program - Java Beginners
need program I need a java program to get the multiple student name and mark.display the student information by sorting the marks in ascending... ClassCastException("Error");
int studentmarks = ((ShowData) Student).getMarks
JAVA Annotation Error
JAVA Annotation Error while compiling simple java program i get annotation error i.e
class will only be accepted if annotation processing is explicitly requested..how to solve
Need help with my project
Need help with my project Uses a while loop to perform the following steps:
-Prompt the user to input two integers: firstNum and secondNum where... numbers, and if not, provide error feedback and prompt them again.
-Output all
need
need i need a jsp coding based project plz help me
Error - Struts
/loginClientSideValidation.jsp
/pages/loginsuccess.jsp
/pages/test.jsp
Remove
How to add dynamically rows into database ?Need help pls
How to add dynamically rows into database ?Need help pls Hi everyone...
<label> <...', '', 'sjas') or die(mysqli_connect_error());
$query = "INSERT INTO employee
Code error
)
read.close();
}
}
}
While using this it shows error as:
run... is not found. So you need to add the class or .jar file which contains this class
need
compiling a uploaed java file - JSP-Servlet
compiling a uploaed java file How a uploaded java source file is automatically compiled(i.e converted to .class file) in server using jsp/servlet
Java Programming Need Help ASAP
Java Programming Need Help ASAP You are required to develop..., maths mark, communications mark. You will then need accessor and mutator methods... class that creates an array of Students and handles it. You will
java - need code - Java Beginners
java - need code i am java developer. i want to remove file from one folder and same file is paste another folder. i need this code. (java) hi,
I am sending you the code which moves the file from one folder
This is what i need - Java Beginners
.
for this question i need just :
one function can read string like (I like to play football)from a file (e.g,file.txt)then remove the spaces and the white
Remove Extra Space At End
Remove Extra Space At End Remove Extra Space At End
remove parent element JavaScript
remove parent element JavaScript How to remove parent element in JavaScript
Error reporting in PHP
is the easiest way to enable error reporting in PHP based applications? I should be able to easily remove the error reporting code before moving my code...Error reporting in PHP Hi,
I am developing a complex application
run time error
saved your class with HelloWorld.
Check it out. Even then if error occurs, post the error.
Here it works and showing output. Have you save the file... to compile the code?:
javac HelloWorld.java
Actually if you are compiling the class
java error - Java Beginners
Remove it,
after compile then above lines will shown on command prompt. ... been obsoleted. To get the details of your error, compile your class with Xlint
Need *Help fixing this - Java Beginners
Need *Help fixing this I need to make this program prompt user... and maybe add retrieve function //need help with this one badly. Thanks guys for all...) {
}
}
public static void main(String args[]){
//need to add: prompt
Error with JCombo Box - Java Beginners
Error with JCombo Box when i set JComboBox on Tab then Display Error near addItem method,that is
Identifer Expected,
How I Can remove that Error. Hi Friend,
Please post your code.
Thanks
java error - Java Beginners
java error Hello sir,
on compiling any program from command prompt....
But if I want to run that program the fetches me the following error.
For example... uninstalled jdk and NetBeans and reinstalled them, Then too its the same error
what is the error - Java Beginners
what is the error What is the compile time error that i am getting while compiling the application. Thanks! SOLUTION :- Your program is creating the compile time error such as : - ? CANNOT FIND SYMBOL ? because you
Error In starting Glassfish in Eclipse.
Error In starting Glassfish in Eclipse. Sir,I am new... glassfish in eclise,an error is raised and message is as follws :
"error Launching...\oracle.eclipse.runtime.glassfish3.1.1.0\glassfish3\glassfish"): CreateProcess error=193
Error is: Generated keys not requested.
Error is: Generated keys not requested. Hi,
Following error is coming in my program.
Error is: Generated keys not requested.
You need to specify Statement.RETURNGENERATEDKEYS to Statement.executeUpdate
an error - Java Beginners
detect an error after trying to compile and run them,howevre i can't find out...( x, root );
}
//Remove from the tree. Nothing is done if x is not found.
public void remove( AnyType x ) {
root = remove( x, root
403 Forbidden error - What is 403 Forbidden error, please explain...
403 Forbidden error - What is 403 Forbidden error, please explain... Can anyone please explain, What is 403 Forbidden error. And how can i remove this error, which is restricting my page from loading it.
Thanks!!
Hi
compilation error - Java Beginners
;
System.out.println(j);
}
}
when we are compiling this program it is giving an error as integer number is too large.why? and what
error detection - Java Beginners
error detection
Hi Amardeep and all friends out... me an error,of "(" or "[" expected. i tried to change everything i can to repair the error but got stuck. please help me to find out what must i do to repair
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/93436 | CC-MAIN-2015-35 | refinedweb | 2,188 | 57.16 |
Tagsurf: Tagged Hyperforum
Posted on Wednesday, February 9th, 2005 2:35 PM
So a few weeks ago Anthony Eden pings me via IM and says something like "I never seem to have any good ideas" to which I respond something like, "All I have is ideas, but I never seem to follow through on any of them." And Anthony came back by challenging me, "Like what?" to which I responded with some thoughts I was having since I had recently really grokked tags. And Anthony said, "I can do that." And he did. Six hours later he had written and launched a preliminary version of Tagsurf, a tagging enabled hyper-forum. (He's a coding machine).
It's very cool.
The core of the idea which Anthony implemented and expanded upon was this: Del.icio.us introduced (or at least popularized) the concept of tagging as a way of organizing links. Tags in my mind are flat-namespace meta data that can help identify a piece of content in an almost democratic fashion. Flickr took the concept and applied it to photos, so that people could organize photos based on this simple meta data. I didn't grok this at first, until I had created a new flickr user called MobileMonday and blogged that people should send me their photos of the event and I would post them under that name. That's when Mike Rowehl responded and said, more or less "hey moron, everyone can post under their *own* name and just tag them 'mobilemonday' instead." And that's when a little light went on over my head about tags.
So I thought, what happens if you apply tags to messages as well? "Messages" being things like comments and blog posts, but in a forum-like structure, where you could follow threads in x dimensions. Add in Alerts (IM, email, etc.) and RSS and it becomes really useful as a discussion tool. Once we started working on that idea we expanded on it, and added URLs and user names to the tag namespace, now I can include URLs, but not just as way of sharing that link but of *commenting* on it. Then as a publisher, I can watch different tags, URLs and if someone responds my posts or puts my name in the tag, I get notified of those messages. | http://m.mowser.com/web/www.russellbeattie.com/notebook/1008301.html | crawl-002 | refinedweb | 392 | 70.02 |
Someone recently asked about a strikethrough in a TextBlock. The need was for a short piece of text that was dynamically generated. I messed with some solutions and came up with the following:
I used a TextBox rather than the TextBlock. If you don’t want it to be editable, I suggest isReadOnly to true. You can change the read only part of the template if you don’t want it to look different in Read only view.
Put a TextBox on the form. Right click on the TextBox and choose EditTemplate -> Edit a copy.
Inside of the Template, traverse the tree RootElement –> Border –> Grid –> MouseOverBoarder –> ContentElement (see below).
Right click on ContentElement choose Edit Template –> Edit a Copy.
this brings you to the ScrollViewer Template. Here I added a straight red line as a path and named it StrikeThroughPath
Now the trick is to bind the width element to the Template Binding –> Extent Width by clicking on the Advanced square to the right of width and choose Template Binding then Extent Width.
You should now have a text box that looks like the one at the start of this post with variable length and all the attributes of a textbox.
My husband, Robert, found this solution in the Silverlight forums but I did not try it
Please post any better solutions or questions that you have.
At Microsoft Office in Irving, TX
This meeting changed topic after poling the audiece and seeing what they wanted to hear about. It was still all about blend but it was more about getting started with blend then hard core templating.
Here is a link to the video that Shawn Weisfeld took and posted of the event. Enjoy!!!
Join us as I go through how to use Expression Blend to take a standard Silverlight video player and modify the template and styles to give it a custom look and feel. By reusing the standard player, we get all of the built in functionality without having to re-write all of that code. In this discussion, we will also cover some style ideas and best practices to allow for greatest flexibility in look and feel.
The idea is to define colors and style templates in resource dictionaries that can be easily changed out to change the look and feel without touching the code. Blend is a good tool for managing and modifying resource dictionaries. It is the same ideas behind CSS
LOCATION:
There are very few times when I make a color resource local page and even fewer when I will statically define a color. Instead, I define my Resource Dictionary(s) when I first start a project and in the Dictionary(s), I immediately define all the colors that I plan to use (see image below).
NAMING:
I use generic names when naming my colors like color_1_dark, color_1_warm, color_1_light.
I group my colors by like colors or functional areas and use a naming convention that describes the idea behind the color. In this case, I have used light, warm, dark so the person who is changing the resource dictionary has some idea of other schemes that would fit. If I had areas that were segmented by color, I would prefix these names with that area, for example sales_color1_light or contact_header_color1_light.
If I have specific company colors, those will be named appropriately so anyone who looks at the resource dictionary knows that these are company specific colors and should only be changed if that definition changes.
I will use a functional area if appropriate… for instance, below you see backgroundGradient.
USES:
If I need a shade of a color, I will use the base color I defined as a resource and apply an opacity mask to change the color appropriately. In other words, if I think it is appropriate to use just a lighter shade of color_4_Dark, I will bind color_4_Dark and apply an opacity mask to just tone it down a bit rather than define a whole new color. Keep in mind that the goal is to be able to switch out the color underneath so if your opacity mask won’t work with any dark color, then make it a resource also.
For gradients, you can define the whole gradient as a resource as you see above for backgroundGradient or bind the Gradient points to Colors that you have already defined.
Concept:
I have specifically shown colors in this example but this concept can apply to fonts or other elements if they are setup as a resource that can be changed to change look and feel.
It is really shaping up to be everything I had hoped. Prizes are stacked up behind me. Food is in place. I have a set of wonderful volunteers beside me. The event has been full for weeks. I will not be doing any official blogging for this event on this blog. You will have to watch the official blog for that – June 18th and 19th.
I plan to post pictures and descriptions of everyone’s projects during the event. It is going to be wonderful fun. Shawn will be filming part of the time so stay tuned for that also. We have some great plans in place!!! I wish everyone could join us and am very excited for those who signed up in time.
While modifying the standard media player with a new look and feel for Ineta Live I saw a unique opportunity to use their logo with a dotted I with and attached arc as the scrub control.
So I created a PathListBox that I wanted an object to follow when a user did a click and drag action. Below is how I solved the problem. Please let me know if you have improvements or know of a completely different way. I am always eager to learn.
First, I created a path using the pen tool in Expression Blend (see the yellow line in image below). Then I right clicked that path and chose [Path] --> [Make Layout Path]. That created a new PathListBox. Then I chose the object I want to move down the new PathListBox and Placed it as a child in the Objects and Timeline window (see image below). If the child object (the thing the user will click and drag) is XAML, it will move much smoother than images.
Just as another side note, I wanted there to be no highlight when the user selects the “ball” to drag and drop. This is done by editing the ItemContainerStyle under Additional Templates on the PathListBox. Post a question if you need help on this and I will expand my explanation.
Here is a pic of the object and the path I wanted it to follow. I gave the path a yellow solid brush here so you could see it but when I lay this over another object, I will make the path transparent.
To animate this object down the path, the trick is to animate the Start number for the LayoutPath. Not the StartItemIndex, the Start above Span.
In order to enable animation when a user clicks and drags, I put in the following code snippets in the code behind. the DependencyProperties are not necessary for the Drag control.
namespace InetaPlayer
{
public partial class PositionControl : UserControl
{
private bool _mouseDown;
private double _maxPlayTime;
public PositionControl()
{
// Required to initialize variables
InitializeComponent();
//mouse events for scrub control
positionThumb.MouseLeftButtonDown += new MouseButtonEventHandler(ValueThumb_MouseLeftButtonDown);
positionThumb.MouseLeftButtonUp += new MouseButtonEventHandler(ValueThumb_MouseLeftButtonUp);
positionThumb.MouseMove += new MouseEventHandler(ValueThumb_MouseMove);
positionThumb.LostMouseCapture += new MouseEventHandler(ValueThumb_LostMouseCapture);
}
// exposed for binding to real slider using a DependencyProperty enables animation, styling, binding, etc....
public double MaxPlayTime
get { return (double)GetValue(MaxPlayTimeProperty); }
set { SetValue(MaxPlayTimeProperty, value); }
public static readonly DependencyProperty MaxPlayTimeProperty =
DependencyProperty.Register("MaxPlayTime", typeof(double), typeof(PositionControl), null);
// exposed for binding to real slider using a DependencyProperty enables animation, styling, binding, etc....
public double CurrSliderValue
get { return (double)GetValue(CurrSliderValueProperty); }
set { SetValue(CurrSliderValueProperty, value); }
}
public static readonly DependencyProperty CurrSliderValueProperty =
DependencyProperty.Register("CurrSliderValue", typeof(double), typeof(PositionControl), new PropertyMetadata(0.0, OnCurrSliderValuePropertyChanged));
private static void OnCurrSliderValuePropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
PositionControl control = d as PositionControl;
control.OnCurrSliderValueChanged((double)e.OldValue, (double)e.NewValue);
private void OnCurrSliderValueChanged(double oldValue, double newValue)
_maxPlayTime = (double) GetValue(MaxPlayTimeProperty);
if (!_mouseDown)
if (_maxPlayTime!=0)
sliderPathListBox.LayoutPaths[0].Start = newValue / _maxPlayTime;
else
sliderPathListBox.LayoutPaths[0].Start = 0;
//mouse control
void ValueThumb_MouseMove(object sender, MouseEventArgs e)
if (!_mouseDown) return;
//get the offset of how far the drag has been
//direction is handled automatically (offset will be negative for left move and positive for right move)
Point mouseOff = e.GetPosition(positionThumb);
//Divide the offset by 1000 for a smooth transition
sliderPathListBox.LayoutPaths[0].Start +=mouseOff.X/1000;
_maxPlayTime = (double)GetValue(MaxPlayTimeProperty);
SetValue(CurrSliderValueProperty ,sliderPathListBox.LayoutPaths[0].Start*_maxPlayTime);
void ValueThumb_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
_mouseDown = false;
void ValueThumb_LostMouseCapture(object sender, MouseEventArgs e)
void ValueThumb_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
_mouseDown = true;
((UIElement)positionThumb).CaptureMouse();
}
}
I made this into a user control and exposed a couple of DependencyProperties in order to bind it to a standard Slider in the overall project. This control is embedded into the standard Expression media player template and is used to replace the standard scrub bar. When the player goes live, I will put a link here.
This is the Ineta Live player without the O’Data Feed. It is a good example of taking the plain Media Player provided with the Encoder install and re-templating it to make it your own. It also has a custom scrub control that is added in. I generally put my tempates in a separate resource file. On this project, I discovered that I had to include the template at the document level because I needed the ability to attach some code behind to fire change state behaviors. I could not use the blend xaml behaviors for change state inside the template because the template can’ determine the TargetObject. | http://geekswithblogs.net/tburger/archive/2010/06.aspx | CC-MAIN-2021-04 | refinedweb | 1,643 | 54.12 |
Advanced Namespace Tools blog
02 March 2018
Grid radio with Hubfs
Hubfiles are like pipes - you write bytes into them, clients receive the same bytes when they read from them. Even though hubfs has generally been used for textual shells or more recently an irc-like service, binary data can be buffered and multiplexed to clients as well. Recently on the grid, some users have been making use of this to provide a "grid radio" streaming audio service. Making this work well led to another round of changes to the base hubfs code to support this type of usage.
First experiments with paranoid mode
If you start hubfs with no options, mount it and make a hubfile, start a music decoding program reading from it, and then pump some audio data into it, you get a few seconds of playback and then nothing. Something like:
hubfs -s audiotest mount -c /srv/audiotest /n/audhub touch /n/audhub/music audio/mp3dec </n/audhub/music >/dev/audio cat foo.mp3 >>/n/audhub/music
A few seconds of sound, then nothing. What went wrong? The default size of a hubfs buffer is just a few hundred kilobytes, chosen to match the amount of text data stored in the backscroll of a rio window. When we dumped the mp3 data into the hub, it "raced around the buffer" several times much faster than the playback read the data. As a result we overwrote the hubfs buffer several times and the reading process only was able to playback for a few seconds. We can improve on this somewhat if we make use of hubfs "paranoid mode" in which the speed of the writing process is limited to match that of reading processes, by checking to see if the write pointer has gotten ahead of the read pointers by a certain amount, and then serving reads preferentially if so. If we alter the above commands by adding:
echo fear >/n/audhub/ctl
Prior to dumping the mp3 data in, then the playback begins working. Whenever the writing process gets ahead of the reading process, hubfs forks internally and sleeps the handling of writes while still answering read requests. This isn't a fully adequate solution for a real radio-type service however, because only the "closest" reader ends up gating the speed of the writer. What we really want to do is rate-limit the writing process to a bitrate which matches that of the audio data, regardless of what readers might be doing.
Rate-limiting micro library
The concept of rate-limiting is a component which can be generally useful to other applications than just hubfs, so I decided to design it as a very small "library" which could be used in other contexts. The limiter has the following parameters: the rate at which it will accept data, the minimum time in between bursts of data, and the interval after which the timers and counts reset.
struct Limiter{ vlong nspb; /* Minimum nanoseconds per byte */ vlong sept; /* Minimum nanoseconds separating messages */ vlong startt; /* Start time to calculate intervals from */ vlong curt; /* Current time (ns since epoch) */ vlong lastt; /* Timestamp of previous message */ vlong resett; /* Time after which to reset limit statistics */ vlong totalbytes; /* Total bytes written since start time */ vlong difft; /* Checks required minimum vs. actual data timing */ ulong sleept; /* Milliseconds of sleep time needed to throttle */ };
We initialize a Limiter by providing those parameters, and receive a pointer to the structure:
Limiter* startlimit(vlong nsperbyte, vlong nsmingap, vlong nstoreset) { Limiter *limiter; limiter=(Limiter*)malloc(sizeof(Limiter)); if(!limiter) sysfatal("out of memory"); limiter->nspb = nsperbyte; limiter->sept = nsmingap; limiter->resett = nstoreset; limiter->startt = 0; limiter->lastt = 0; limiter->curt = 0; return limiter; }
To implement limiting, we need to make a call to the limit function whenever a write happens. We call it by providing a pointer to the previously initialized Limiter structure and telling it how many bytes have been written:
void limit(Limiter *lp, vlong bytes) { lp->curt = nsec(); lp->totalbytes += bytes; /* initialize timers if this is the first message written to a hub */ if(lp->startt == 0){ lp->startt = lp->curt; lp->lastt = lp->curt; return; } /* check if the message has arrived before the minimum interval */ if(lp->curt - lp->lastt < lp->sept){ lp->sleept = (lp->sept - (lp->curt - lp->lastt)) / 1000000; sleep(lp->sleept); lp->lastt = nsec(); return; } /* reset timer if the interval between messages is sufficient */ if(lp->curt - lp->lastt > lp->resett){ lp->startt = lp->curt; lp->lastt = lp->curt; lp->totalbytes = bytes; return; } /* check the required elapsed time vs actual elapsed time */ lp->difft = (lp->nspb * lp->totalbytes) - (lp->curt - lp->startt); if(lp->difft > 1000000){ lp->sleept = lp->difft / 1000000; sleep(lp->sleept); } lp->lastt = nsec(); }
The crucial timekeeping function is nsec() which tells us how many nanoseconds have elapsed since the unix epoch in 1970. At one billion nanoseconds per second, that is a lot of nanoseconds. Because nanoseconds are a very awkward quantity for humans to work with, some of the new flags to hubfs are expressed in more human friendly terms. Rather than "nanoseconds per byte", hubfs accepts a flag that specifies "bytes per second" and then converts to the specification used by the limiter api, as well as accepting the reset time as defined in seconds. Only the minimum separation interval is specified by the user in nanoseconds.
#define SECOND 1000000000 if(applylimits){ h->bp = bytespersecond; h->st = separationinterval; h->rt = resettime; h->lp = startlimit(SECOND/h->bp, h->st, h->rt * SECOND); }
Other changes and radio practicalities
In addition to the ratelimiting, two other changes were made. One is providing an optional parameter for maximum message length in bytes. The primary purpose of this parameter is not audio streaming, but flood protection for the use of hubfs to provide irc-like chat services. The other parameter is changing from a compiled-in static buffer aray per hubfile, to a runtime malloced buffer with a flag to define the size in number of bytes. For audio streaming, it is generally desirable to have several megabytes of buffer rather than just a few hundred kilobytes.
With the ratelimiting code in place, it is necessary to set the desired parameters for a smooth stream. Testing involves doing something like
dd -if foo.mp3 |tput -p |audio/mp3dec >/dev/audio
The correct parameter for the hubfs ratelimiting is just a little bit higher than the observed throughput. For the user-provided "UFO radio" service, the pipeline is something like:
hubfs -q 7000000 -b 20000 -s UFORADIO mount -c /srv/UFORADIO /n/UFO touch /n/UFO/radio audio/flacdec < song.flac|dd -conv swab|audio/mp3enc -V 7>> /n/UFO/radio
To export this service to the grid, from within the grid namespace, a series of commands is used similar to:
bind -c /n/pubregistry /mnt/registry myip=this.domain chmod 666 /srv/UFO gridlisten1 -t -d UFORADIO -m /n/UFO tcp!*!4458 /bin/exportfs -S /srv/UFORADIO
Clients find the address by catting the /mnt/registry/index file, then srv and mount it, and then just "play /n/UFO/radio". The only issue is for clients located at a significant geographic distance. Because the 9p protocol is rather latency sensitive, clients who are across an ocean from the user providing the stream often need to establish another layer of buffering using either a local hubfs which buffers data from the remote radio, or using another program such as "pump". I like using hubfs in this fashion:
hubfs -q 7000000 -s radiobuffer mount -c /srv/radiobuffer /n/buffer cat /n/UFO/radio >/n/buffer/radio & [wait until about 2 mb are buffered] play /n/buffer/radio
Once the buffer is consumed and the audio begins to click/pause, I can just interrupt the playback and restart it.
Thanks very much to kiwi for UFO radio! | http://doc.9gridchan.org/blog/180302.hubfs.radio | CC-MAIN-2021-21 | refinedweb | 1,307 | 51.31 |
How to separate Drag and Swipe from Click and Touch events
Gestures like swipe and drag events are a great way to introduce new functionalities into your app.
Everyone has a favourite library, but there are chances that has no version for the framework you are using or it’s an old port with issues.
PhotoSwipe is an image gallery, the library wrapper for react has unnecessary dependencies, deprecated life cycles and issues that needed the creation of react-photoswipe-2.
With these problems it's unwise to depend of something that is no longer supported.
This is how to create React wrapper for PhotoSwipe, but the basic principles are the same for all other libraries. You need to import the package and expose its APIs.
First, we need to import the library and bind it with React. We will create a component in a file called photoSwipe.js. It will be a functional component, but you can change it in a class component with an ease. The imports will be:';
The library want us to include two CSS and two JavaScript files. We know that the CSS rules will apply globally when the page is loading, but how to access the JavaScript file? We look the JS file as a module and we need a name to access it later this JS scope.
In our case we bind the two files with the React component with the names
PhotoSwipe and
PhotoSwipeUI_Default.
Second step is to add all needed DOM elements. We do that in the
return. We need to convert the
html to
jsx so change everywhere
class to
className,
tabindex to
tabIndex and
aria-hidden to
ariaHidden.
Next step is to initialize PhotoSwipe constructor:
const photoSwipe = new PhotoSwipe(pswpElement, PhotoSwipeUI_Default, props.items, options);
The constructor has 4 params:
pswpElementis the first
<div>in the return, the one with the
className="pswp". To have reference we will use useRef Hook. We initialize the variable
let pswpElement = useRef(null);and then add the ref in the
<div>
ref={node => { pswpElement = node; }}.
PhotoSwipeUI_Defaultis the import that we already have;
props.items this is the data array from which we will build all the slides:
// build items array const items = [ { src: '', w: 600, h: 400 }, { src: '', w: 1200, h: 900 } ];
options is the object where we define options:
// define options (if needed) const options = { // optionName: 'option value' // for example: index: 0 // start at first slide };
The PhotoSwipe constructor should init only once when the component is loading and we must bind events that listen when we open and close the gallery with React. All of this should be in the useEffect Hook.
In the final code below we will see the code we need in the useEffect Hook. We notice that useEffect Hook has an array as a second argument. This means that the Hook will execute every time when props or gallery options are changing.
Now we can change props in the parent component. For example, we can notify parent component that the gallery is closed when listening for PhotoSwipe's
destroy and
close events. Also, with props we can know when to open the gallery, because changing props from the parent will fire useEffect Hook. This is the general way to wire events and life-cycles.
Our main component should have thumbnails, galleries, images' data and methods to open and close PhotoSwipe.
Image data for PhotoSwipe should be an array of objects.
The methods for open and close as you can see in the final code below are just simple functions that change state and pass it through props to the wrapper. The only difference is that when we interact with the grid of thumbnails we need to open different image. To do that, we add another property to the state which is the index.
Creating a wrapper in React is simple enough when we understand the way wrappers and our project communicated and how to wire events and life-cycles. In this guide we did all of this and is the foundation to create and extend React wrappers.'; const PhotoSwipeWrapper = props => { let pswpElement = useRef(null); const options = { index: props.index || 0, closeOnScroll: false, history: false }; useEffect(() => { const photoSwipe = new PhotoSwipe(pswpElement, PhotoSwipeUI_Default, props.items, options); if (photoSwipe) { if (props.isOpen) { photoSwipe.init(); photoSwipe.listen('destroy', () => { props.onClose(); }); photoSwipe.listen('close', () => { props.onClose(); }); } if (!props.isOpen) { props.onClose(); } } }, [props, options]); return ( <div className="pswp" tabIndex="-1" role="dialog" aria- <div className="pswp__scroll-wrap"> <div className="pswp__container"> <div className="pswp__item" /> <div className="pswp__item" /> <div className="pswp__item" /> </div> <div className="pswp__ui pswp__ui--hidden"> <div className="pswp__top-bar"> <div className="pswp__counter" /> <button className="pswp__button pswp__button--close" title="Close (Esc)" /> <button className="pswp__button pswp__button--share" title="Share" /> <button className="pswp__button pswp__button--fs" title="Toggle fullscreen" /> <button className="pswp__button pswp__button--zoom" title="Zoom in/out" /> <div className="pswp__preloader"> <div className="pswp__preloader__icn"> <div className="pswp__preloader__cut"> <div className="pswp__preloader__donut" /> </div> </div> </div> </div> <div className="pswp__share-modal pswp__share-modal--hidden pswp__single-tap"> <div className="pswp__share-tooltip" /> </div> <button className="pswp__button pswp__button--arrow--left" title="Previous (arrow left)" /> <button className="pswp__button pswp__button--arrow--right" title="Next (arrow right)" /> <div className="pswp__caption"> <div className="pswp__caption__center" /> </div> </div> </div> </div> ); }; export default PhotoSwipeWrapper;
import React, { useState, Fragment } from 'react'; import PhotoSwipeWrapper from './photoSwipe'; export default () => { const [isOpen, setIsOpen] = useState(false); const [index, setIndex] = useState(0); const items = [ { src: '', w: 600, h: 400 }, { src: '', w: 1200, h: 900 } ]; const handleOpen = index => { setIsOpen(true); setIndex(index); }; const handleClose = () => { setIsOpen(false); }; return ( <Fragment> <div> {items.map((item, i) => ( <div key={i} onClick={() => { handleOpen(i); }} > Image {i} </div> ))} </div> <PhotoSwipeWrapper isOpen={isOpen} index={index} items={items.large} onClose={handleClose} /> </Fragment> ); };
Gestures like swipe and drag events are a great way to introduce new functionalities into your app.
Create a simple sticky header only with functional components and React Hooks with no npm packages or other complicated functionality.
Easy to fallow steps to bootstrap Aurelia CLI with Pug (Jade) and Webpack. With working metadata passed from webpack config file.
Here is a simple React app example using Material UI. The problem I stumbled is how to add JSS withStyles into Higher-Order Components (HOC). | https://pantaley.com/blog/Create-React-wrapper-for-PhotoSwipe/ | CC-MAIN-2019-47 | refinedweb | 1,019 | 52.29 |
Have you had that type of people in your life? did anything you did or said make a difference at the end?
please share your thoughts .
1. Download the tar zipped file from dellfand's site (), unzip it, cd into the folder and run 'make'. This produces the executable. You have done this already.
2. As root, copy the executable to /usr/local/bin.3. Put this line in /etc/rc.local to have it run on boot./usr/local/bin/dellfand 1 0.5 40 50 55
The above will run dellfand as a daemon( the parameter 1), with a sleep time of 0.5 seconds(parameter 2) with an off,low and high temperatues of 40, 50 and 55. Change the temperatures to the ones that suit you.P.S: The BIOS in some laptops, with some BIOS versions, is more active than in others. You may get interference. It could be that reducing the polling delay (e.g. to 0.5 seconds) will reduce the annoyance caused by this. Currently I know of no other solution.
Congratulations on the new linux.com! I know it was tons of hard work.
And that's why I'm blogging this thought. How many upgrades and migrations have you done in your career? Too many, if you're like me. Seldom is there any automatic migration script that can just handle everything, esp. when big monopoly-like companies are involved. (You know who.)
The only time I've seen a good migration is when the company that wrote the new program gets some money if you switch. Then, a good migration path is a selling point.
So, back to the site. It's written in Joomla, I understand. Cool. I don't honestly care, except that I'm learning django right now. And what happens if we need to move our projects from django to Joomla, or to any other framework? Lots of rewriting. Lots of reinventing the wheel.
Sure, it's not as big a wheel as it was, back in the GUI days. HTML, CSS, Javascript are all pretty standard. Still, we write these little wheels, and have to reinvent them whenever we change language, database, OS (oops! that server wasn't running linux?!), etc. When are we going to get smarter?
I propose that we do one very simple thing:
Be explicit.
If you're programming in a language, then please embed a comment to what language it is, the version, and--most importantly--where I can find the language specification and a reference implementation of the compiler/interpreter. Better yet, provide a BNF notation and an explanation of the Abstract Syntax Tree. (What? You're using a language that's not open, or isn't well-documented? Don't make me come over there!)
If you're encoding data in XML, PLEASE, PLEASE provide a reference to the DTD or Schema definition. IHMO, that silly URL in XML that tells what namespace it's in should actually reference a valid document. Most of the time, if you try to open that URL, you get nothing. (This was just bad design on the side of the XML designers.)
What I want is this: Perfect parsers. The only way that's going to happen is for the code and data to be explicitly defined. (And, yes, you can do this with dynamic languages.) But once you have perfect parsers, voila!--you have much easier time migrating data and code. In fact, maybe it wouldn't be that hard to write migration programs. But that's another blog entry...
cat /dev/sda | aplay -fdat | http://www.linux.com/community/blogs?start=1740 | CC-MAIN-2013-20 | refinedweb | 605 | 78.25 |
This is a guest post from Chris Griffith. Chris is UI/UX engineer, an educator in the Ionic community and author of the Ionic Native Mocks library.
When you create an Ionic application, often you have need to include a Cordova plugin or two. In fact, several Cordova plugins are included when you generate an Ionic application using any of the Ionic starter templates. These plugins give your Ionic application access to features on our mobile devices that are traditionally unavailable to developers not using native programming languages.
Currently, there are over 3,100 plugins registered for Cordova. Of these, the Ionic team has selected around 160 for which to create TypeScript interfaces, Ionic Native, to ease development. Here is the definition:
Ionic Native is a curated set of wrappers for Apache Cordova plugins that make adding any native functionality you need to your Ionic mobile application easier.
Ionic Native provides an interface for all of these plugins that match the Cordova plugin, wrapping plugin callbacks in a Promise or Observable to make it easy to use plugins with Angular change detection.
From One Platform to Another
Once you’re using Ionic Native, your workflow must now shift from using your computer’s browser to either running in a device simulator or on an actual device. While the Ionic CLI makes some of that process easier, it can still become time consuming.
Here is a real-world example of this issue: I was prototyping a new app where one of the first actions the user would do was scan in a barcode. Adding this functionality is not difficult. I was able to install the plugin and the Ionic Native wrapper, update the
app.module.ts file and was good to go. But from that point further I was required to run my application on the device. Since there were several more screens to create, I needed to find a way to bypass this ‘requirement’.
One option would have been to simply comment out that code block and continue developing, but that would require a lot of maintenance; comment the code to develop, remember to uncomment the code before testing or deploying to a device, uncommenting the code again to do more development. Instead of going through that several times a day, I decided to create a mock provider for the barcode scanner plugin that would serve as in interface to the methods and properties of the actual Ionic Native wrapper’s calls to the Cordova plugin. With this mock injected into my application, I can simulate interaction with the barcode scan plugin and continue my local development unimpeded..
Adding mocks to an App
From this initial effort, I decided to generate mocks for the entire Ionic Native Library in a collection of mocks called Ionic Native Mocks. To use an Ionic Native Mock run the following command in your terminal to install the appropriate mock for your project:
npm install @ionic-native-mocks/<plug-in> --save
For instance, to install the camera mock:
npm install @ionic-native-mocks/camera --save
You also need to install the Ionic Native package for each plugin you want to add. Please see the Ionic Native documentation for complete instructions on how to add and use the plugins. { } // elsewhere in your app }); }); } }
By design, the Ionic Native Mocks I wrote are very generic. They return the bare minimum amount of data for them to function. For them to be more useful in your project, you may want to customize them. For example, for the bar code scanner, I wanted the mock to always return a valid product code.
To begin modifying an Ionic Native Mock file, you will first need to get the code directly from GitHub and the source Typescript code and add it to your project manually. While each plugin is available via npm, those files are installed in your project’s “node_modules” folder and can easily get overwritten or deleted.
In your project, create a new directory named mocks, and create another directory named barcodescanner. Within that directory, download the index.ts file from Github into this directory.
Now let’s adjust out app.module.ts file. Like all Ionic Native modules, we need to import it.
import { BarcodeScanner } from '@ionic-native/barcode-scanner';
Also import our plugin mock as well.
import { BarcodeScannerMock } from '../mocks/barcodescanner';
Instead of including the Ionic Native plugin directly into the providers array, we instead tell Angular to provide a mapping to our mock. This allows us to keep the rest of application referencing the real Ionic Native module, yet use the code from the mock instead.
{ provide: BarcodeScanner, useClass: BarcodeScannerMock }
At this point we could build our app, making calls to the barcode scanner plugin without needing to test on an actual device. But, out of the box, the barcode scanner mock is going to return an empty string.
But, let’s have it return something that we might want our user to scan. For my original app, it was a QR code on our packaging. Open the index.ts file within our project and change the scan function to
scan(options?: BarcodeScannerOptions): Promise { let code='NCC-1701'; let theResult:BarcodeScanResult= {format:'QR_CODE', cancelled:false, text:code }; return new Promise((resolve, reject) => { resolve(theResult); }); }
Save the file, and run the application. Now when you call the barcode scanner, it will return your custom data. When you are ready to use the real plugin, change the provider and remove the import of the mock.
Parting words
Hopefully, this post has shown you how to leverage Ionic Native Mocks into your Ionic application, and how it can reduce the friction during the build & test cycle of development. The mocks are are open source, and if you find an issue, please let me know. Happy Coding! | https://blog.ionicframework.com/ionic-native-mocks/ | CC-MAIN-2019-18 | refinedweb | 967 | 52.19 |
Frequently Asked Questions - Apache XML Security for C++
1. Compiling and Using the Library
1.1. Is OpenSSL required?
The main development work for the library is done using OpenSSL, so this is the recommended option. However, Windows Crypto API and NSS interfaces are also now provided.
It is also possible to implement interfaces for other cryptographic libraries and pass them into the xml-security-c library during initialisation (via the XSECPlatformUtils::Initialise() call).
1.2. Does the library provide a full C++ wrapper for OpenSSL?
The C++ crypto interface layer provided for the library provides only the smallest subset of cryptographic functions necessary for the library to make calls to the provided library. Applications will need to work directly with OpenSSL (or other libraries) to read and manipulate encryption keys that should then be wrapped in XSECCrypto* objects and passed into the library.
1.3. What is WinCAPI?
WinCAPI is the developmental interface being built to give users of the library access to the Windows Cryptographic library.
It is not a C API wrapper for the overall library.
1.4. Is Xalan required?
The library can be compiled without linking to Xalan-c. However doing so will disable support for XPath and XSLT transformations.
To disable Xalan-c support either use --without-xalan when running configure on UNIX, or use the VC++ "without Xalan" settings.
1.5. Are versions of Xalan prior to 1.6 supported?
No. Whilst the functionality required is available in prior versions, the location of include files changed in 1.6. A decision was made in version 1.0.0 of Apache XML Security for C++ to update the source to support these new locations.
1.6. I sign a document and when I try to verify using the same key, it fails
After you have created the XMLSignature object, before you sign the document, you must embed the signature element in the owning document (which is returned by the call to DSIGSignature::createBlankSignature(...)) before calling the DSIGSign().
1.7. How does the library identify Id attributes?
During a signing operation, finding the correct Id attribute is vital. Should the wrong Id Attribute be used, the wrong part of the document will be identified, and what the user signs will not be what they expect to sign.
The preferred method (and the method the library uses first) of finding an Id is via the DOM Level 2 call DOMDocument::getElementById(). This indicates to the library that the Id has been explicitly identified via a schema, DTD or during document building. However, if this call fails, the library will then search the document for attributes named "Id" or "id" with the appropriate value. The first one found will be used as document fragment identifier.
As of version 1.2, the library also provides methods to allow callers to set additional Id attribute names. This can be done in one of two ways. DSIGSignature::registerIdAttributeName() will register a new name that will not be matched to a namespace. DSIGDSignature::registerIdAttribiteNameNS() will register an attribute name together with the namespace in which the attribute resides.
As this is a potential security exposure, this behaviour can be disabled using a call to DISGSignatures::setIdByAttributeName(false). There are also methods provided to modify the list of attributes that will be searched. However it is recommended that these methods not be used, and DOM attributes of Type=ID be used.
Warning In version 1.1 and above, the library defaults to searching for Id attributes by name if a search by Id fails. As this is a potential security risk, this behaviour may be changed in a future version of the library.
1.8. What parts of the XKMS specification does the library support?
The library currently supports X-KISS (XML Key Information Service Specification) message generation and processing. Support for X-KRSS (XML Key Registration Service Specification) will be provided in version 1.3 of the library.
1.9. Does the library provide a programmatic XKMS client?
Not yet. A command line tool xklient is provided for generating and processing messages. This can be used as an example for processing XKMS messages.
A programmatic client will be provided in version 1.3 of the Apache XML Security for C++library. | https://cwiki.apache.org/confluence/display/SANTUARIO/c_faq | CC-MAIN-2018-17 | refinedweb | 710 | 57.47 |
I have been messing around with C++ for awhile but in a class room setting and apparently the things that they are doing there are VERY WRONG!!
The code I have is simple, but I get [Linker error]undefined reference to Selection(char) when I compile.
my header:my header:Code:
#import <iostream>
#import "guitar.h"
using namespace std;
//function prototypes
char Selection(char choice);
int ESection();
int main()
{
//declare variables
char choice;
cout<<"\n\n"; //print 2 blank lines
cout<<head << "\n\n" << head2;
choice = Selection(choice);
system("PAUSE");
}
A kick in the right direction would be great. I read posts here and googled it but I am still unsure of what the solution is.A kick in the right direction would be great. I read posts here and googled it but I am still unsure of what the solution is.Code:
using namespace std;
char Selection()
{
//declare variables
char select;
//This function display a list of choices to the user
cout<<"Please select a category: \n\n";
cout<<"E Section A#/Bb Section\n";
cout<<"F Section B Section\n";
cout<<"F#/Gb Section C Section\n";
cout<<"G Section C#/DB Section\n";
cout<<"G#/Ab Section D Section\n";
cout<<"A Section D#/Eb Section\n";
cout<<"Enter: ";
cin.get(select);
return select;
} | http://cboard.cprogramming.com/cplusplus-programming/69630-undefined-reference-new-standards-have-me-lost-printable-thread.html | CC-MAIN-2014-23 | refinedweb | 217 | 59.53 |
[Edit: Note that this post is based on a pre-release, and now out of date CTP of the MVC framework. I'd suggest you use it to understand concepts, but look elsewhere now if you're after up to date facts. I've recently added a post about Virtual Earth and the MVC framework that includes the use of some AJAX - read it here]
I’ve been curious about how AJAX support might be added to the ASP.NET MVC Framework. Specifically I could see huge potential in defining portions of pages (views) as User Controls inheriting from System.Web.Mvc.ViewUserControl<T>. This allows the region of the page to be rendered either inline in another view, or on its own, and therefore means that an initial page can be rendered, yet the User Control section can be updated using AJAX calls. What enables this to work is the fact that the Controller.RenderView method in the framework can handle both ViewPage and ViewUserControl derived UI elements.
So I decided to write some code to enable me to play with this. Bear in mind that this is purely flaky experimental code – I’m sure the product group will do a far better job than this sample, but the point of this post is to demonstrate the kind of things you can achieve with the MVC framework, not to produce a product! If you are interested in AJAX support in the framework, I recommend heading over to Nikhil’s blog for a quick read.
This post is actually incidental to the real reason I was writing some code, to demonstrate a different technique – which will be revealed in a future blog post in the not-too-distant future.
As usual the code included conveys no warranties etc, and has been written against the December 2007 CTP – and therefore will likely be superseded or simply won’t work on later releases J.
Anyway, to meet my needs I’ve done a number of things;
1. Created a simple Employee table, and used LINQ-to-SQL to get at the data easily.
2. Written a Controller with two actions – one to render a full initial page, and one to render just the portion of the page contained within the user control.
3. Created two views – one full page (that includes the user control), and one user control.
4. Created an extension method for the AjaxHelper type that renders a link to update a user control’s content.
The code for the controller is very simple. I’ve created a simple Model entity to hold a list of Employee entities, plus a selected one;
public class EmployeeSet
{
public List<Employee> Employees { get; set; }
public Employee SelectedEmployee { get; set; }
}
There is then an action that renders a whole view, passing in the EmployeeSet entity;
[ControllerAction]
public void ViewPeople()
DbDataContext db = new DbDataContext();
List<Employee> all = (from e in db.Employees
select e).ToList<Employee>();
EmployeeSet set = new EmployeeSet();
set.Employees = all;
// by default, show the first employee in the user control
set.SelectedEmployee = all[0];
RenderView("People", set);
The key to this working is of course the “People” view, for which the first version looks very simple indeed;
<asp:Content
<table>
<tr>
<th>Name</th>
</tr>
<% foreach (Employee emp in ViewData.Employees)
{
%>
<tr>
<td>
<%= emp.Name %>
</td>
</tr>
<%
}
</table>
<div id="Individual">
<prs:PersonInfo
</div>
</asp:Content>
Of course, this ASPX view inherits from ViewPage<EmployeeSet> to enable me to use strongly typed access to the model data. Perhaps the next most significant point is the use of the User Control – note that I have omitted the Register directive in this snippet, but it does need to be there. I specify the name of the property on the View Data that I wish to pass to the User Control in the ViewDataKey property. This ensures that the currently selected Employee is passed to the control.
To allow this to work, I have an ASCX User Control view named PersonInfo that derives from ViewUserControl<Employee>. The content of this view is incredibly simple in my example (UI design is not a focus of this post!);
<%@ Control Language="C#" AutoEventWireup="true"
CodeBehind="PersonInfo.ascx.cs"
Inherits="MvcAjax.Views.AjaxSample.PersonInfo" %>
Name: <%= ViewData.Name %><br />
Job Title: <%= ViewData.JobTitle %>
Browsing to /AjaxSample/ViewPeople will now render a list of all Employees, plus the details of a specific employee (the first one in the list) below the main list.
So my next task is to add AJAX that will allow the “Individual” DIV element content that contains the User Control to be replaced with the result of a call to a Controller Action of my choice. I love the lambda expression approach used by much of the MVC framework, so my ideal solution is to replace the “emp.Name” in the list of employees with code similar to that below that outputs a hyperlink to perform my update;
<%= Ajax.UpdateRegionLink<AjaxSampleController>(d => d.UpdatePerson(emp.Id),
"Individual", emp.Name) %>
Here I am specifying the Controller I want to use, the Action I wish to call (and its parameters), the name of the DIV element I wish to update the content of, and the link text (the Employee’s name).
Implementing this boils down to an extension method, which adds functionality to the AjaxHelper class. I came up with the following;
public static string UpdateRegionLink<T>(this AjaxHelper helper,
Expression<Action<T>> action,
string elementId,
string linkText) where T : Controller
string{2}</a>";
string link = helper.BuildUrlFromExpression<T>(action);
return string.Format(linkFormat, elementId, link, linkText);
I then directly copied the BuildUrlFromExpression method from the MVC Toolkit implementation of the HtmlHelper class; in a real framework I assume this functionality would be shared, but this is a hack, not a real deliverable J
Basically all this function does is output HTML that looks like this;
<a href="javascript:updateRegion(
'Individual',
'/AjaxSample/UpdatePerson/3')">{Name}</a>
I love the automatic generation of the URL!
The code that is found at this URL is a very simple Action;
public void UpdatePerson(int id)
DbDataContext db = new DbDataContext ();
Employee emp = (from e in db.Employees
where e.Id == id
select e).SingleOrDefault<Employee>();
RenderView("PersonInfo", emp);
It simply loads the Employee that matches the specified Identifier, and renders the “PersonInfo” User Control only – and this is the power of splitting up the UI into these parts.
So where do I put the javascript definition of the function “updateRegion” that is called by the link generated by my extension method? I need to register it as a client script block... so my first attempt was to pass Page.ClientScript into my extension method, and register the prerequisite javascript there. If that worked, I was going to think about how to get hold of it through the ViewContext... if it is possible.
But it didn’t work. I’ve not gotten to the bottom of why yet, so I may well dedicate more time investigating in the future, but basically it doesn’t look like any client scripts are emitted when using the MVC framework. I guess this is something different about the page lifecycle execution, or perhaps an override in ViewPage.
Either way, my interim solution is to put the script in a “.js” file under the “Content” folder, and import it in the master page;
<script type="text/javascript"
src="/Content/MvcAjaxExtensions.js">
</script>
The script is quick and dirty, designed to get the job done and that’s about all, as quickly and simply as I could think of! As with all the code in this example, it is nowhere near production strength (</excuse-making>).
var regionId;
var req;
function updateRegionReturn()
if(req.readyState == 4)
document.getElementById(regionId).innerHTML = req.responseText;
function updateRegion(elementId, url)
regionId = elementId;
req = new XMLHttpRequest();
req.onreadystatechange = updateRegionReturn;
req.open('GET', url);
req.send();
It simply uses the XMLHttpRequest object to make a request to the supplied URL – which of course was built by my extension method and passed into updateRegion() by the link we generate on the page.
So it is a little hack, but the important thing is that the Proof of Concept works – try clicking a link on the People page, and hey presto, a web request is made, the Action is invoked, the User Control is rendered, and the mark-up is updated. I think this shows that AJAX support could feel really natural in the MVC framework – and there’s no reason why many of the standard AJAX functionality couldn’t be rolled in.
This is a very simple example, but I hope it has illustrated how the framework could evolve itself, or be built on to suit your own project. | http://blogs.msdn.com/b/simonince/archive/2008/01/23/ajax-support-in-the-asp-net-mvc-framework.aspx | CC-MAIN-2015-48 | refinedweb | 1,451 | 51.99 |
Mocking Static Calls Revisited.
Somehow everything seems easier once you take a look at the code. I already knew that FacesContext was a ThreadLocal variable. But only today, once I saw the code, I realized what I was missing. Basically the abstract base class javax.faces.context.FacesContext looks like the following:
public abstract class FacesContext { private static ThreadLocal _currentInstance = new ThreadLocal(); // Lot of abstract methods. public static FacesContext getCurrentInstance() { return (FacesContext)_currentInstance.get(); } protected static void setCurrentInstance(FacesContext context) { _currentInstance.set(context); } }
This means that when we implement our own version of a FacesContext, we can have it set itself in the ThreadLocal! The solution lies in a MockFacesContextWrapper class implemented as follows:
public class MockFacesContextWrapper extends FacesContext { private FacesContext mockContext; public MockFacesContextWrapper(FacesContext context) { this.mockContext = context; FacesContext.setCurrentInstance(this); } // Delegate methods for mockContext for all declared abstract methods in FacesContext }
Now your tests can be implemented using a mocked FacesContext without having your source code being aware that it's being tested. An example of a testcase:
public class SomeFacesBeanTest extends TestCase { private FacesContext facesContext; public void setUp() { facesContext = EasyMock.createMock(FacesContext.class); new MockFacesContextWrapper(facesContext); } // Your test methods. }
I like that now the code under test does not need to be adapted. You can program as you're used to, without writing an indirection method to wrap the static call in, or introducing a new field in every class that uses the FacesContext. How do you rank it against the solutions presented yesterday in the blog and its comments?
Good thinking Jeroen! It's an elegant solution, you only have to write the MockFacesContextWrapper once, and like you said the program is unware of Helpers etc. so I think we have a winner.
The issue, how to mock static calls, is still valid though. Not all static calls are implemented like FacesContext which allow for this implementation. But thanks to your blogs we can now choose from 5 different solutions :-).
I really like this solution. I agree with you that it is better not do adapt the code under test. Besides that, this solution is easy to understand for developers that need to maintain it.
Nice solution. Based on your code I came up with the code below (now you are not required to implement the abstract methods):
public abstract class MockFacesContext extends FacesContext {
public static void mockFacesContext(FacesContext facesContext) {
FacesContext.setCurrentInstance(facesContext);
}
}
I just read an article about a aspectJ based mocking framework that can mock private/protected and static methods/constructors:
Maybe that is an option?
You guys should also take a look at the Apache Shale Framework. It has a Test module which includes mock objects for the FacesContext.
It works really well and all you have to do is implement the AbstractJsfTestCase from shale to get access to a (almost) fully mocked FacesContext. | http://blog.xebia.com/mocking-static-calls-revisited/ | CC-MAIN-2017-13 | refinedweb | 471 | 56.86 |
Raspberry Pi Cluster Node – 04 Configuration Files with ConfigParser
This post builds on the third step to create a Raspberry Pi Cluster Node to store our configuration settings in a config file. Here we move all the configuration settings in our script into a useful .cfg file using the python ConfigParser.
Why use Configuration Files?
When developing a system there will always be variables you will want to tweak based on the machine its running on. These are typically simple settings such as the machine name, connection settings or passwords.
Ideally these settings are not hardcoded into our code but are outside the codebase. One major reason for doing this is because you do not want to hardcode passwords into your codebase. So developers typically save these important settings in configuration files. The scripts then read these configuration files to find the details they need.
In our previous script we hardcoded the IP address and socket port number used. This means that if we wanted to change the IP or port we would need to modify the code. This doesn’t make the script very flexible.
In this tutorial I am going to remove the hard-coded IP address and port number and use a configuration file.
Using the ConfigParser Module
Python has the
ConfigParser module that reads a file and loads the settings. This module loads files in the ini format. These are basic text files with specific formatting that allows a program to read in settings.
An example file format, as taken from the python docs, is shown below:
[mysqld] user = mysql pid-file = /var/run/mysqld/mysqld.pid skip-external-locking old_passwords = 1 skip-bdb skip-innodb
The file is split up into sections, denoted by the square brackets. Each section then has a series of variables which may be set to a value with the equals sign.
We are going to use this format with the ConfigParser python module.
Changing out code to use ConfigParser
First I am going to create the config parser configuration file. For this I have chosen to have two sections, one for the master and one for the slaves. The example file is below.
[master] socket_port = 12345 socket_bind_ip = [slave] socket_port = 12345 master_ip = 127.0.0.1
Now I have the file, the first thing I need to do is load up the config file in my scripts. This is performed by importing ConfigParser and creating a new instance of the config as follows:
import ConfigParser config = ConfigParser.ConfigParser() config.read('rpicluster.cfg')
I place the import statement with all the other import statements at the top of the file. Then I create an instance of the ConfigParser and tell it to read my config file. This reads in the file and all configuration options.
Now it has loaded my config file up I can get out specific parts of the configuration
master_ip = config.get("slave", "master_ip") socket_port = config.getint("slave", "socket_port")
In the above code I have used two different methods of the config variable defined above. Both of these functions take two parameters, the section name of the config and the config name. The most basic one is
get which gets the config value as a string.
Similar to
get,
getint gets the value of the config but also converts it to an integer. I have used
getint for the port of the socket as this is required to be an integer. In addition to this there is also
getfloat and
getboolean which converts the value to a float and boolean respectively.
For the future I am going to store more values in the configuration settings. This means I can get away from hardcoding variables in my script and make it more flexible.
The full code is available on Github, any comments or questions can be raised there as issues or posted below. | https://chewett.co.uk/blog/1001/raspberry-pi-cluster-node-04-configuration-files-configparser/ | CC-MAIN-2020-05 | refinedweb | 643 | 64.61 |
So I hear that when they were planning Windows 8, Microsoft did all this research and determined that, if you give a programmer the choice between doing something synchronously and doing it asynchronously, then 99.9 times out of 100 they’ll do it synchronously. Hearing this, Microsoft decided to take away the option, and just force everybody to do things asynchronously, all the time. Because apparently we’re all expected to do all our computing on phones now, and phones are still kind of bad at computing at the moment.
In an attempt to assuage the 99.9% of programmers who resent having their code rendered unreadable when their nice pretty methods are split up into 17 callbacks apiece, Microsoft added the “await” keyword to .NET. Basically, it allows you to make those 17 asynchronous callbacks look like a single method again. Sort of, anyway. There are caveats.
The code shown below uses the await keyword to call an asynchronous method that calculates the sum of the first n integers. For large values of n, this calculation may take several seconds to complete. When the calculation method completes, the runtime will then start executing the code that appears after the await statement in the calling method.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace ConsoleApplicationAwait { class Program { static void Main(string[] args) { callLongTaskAsynchronously(); Console.ReadKey(); } static async void callLongTaskAsynchronously() { int numberOfIntegersToAdd = 1000000000; Console.WriteLine ( "About to calculate sum of first " + numberOfIntegersToAdd + " integers asynchronously..." ); Console.WriteLine("About to await..."); long result = await Task.Run(() => calculateSumOfFirstNIntegers(numberOfIntegersToAdd)); Console.WriteLine("Done with await..."); Console.WriteLine("Result of asynchronous procedure: " + result); Console.WriteLine("Press any key to quit."); } static long calculateSumOfFirstNIntegers(long numberOfIntegersToAdd) { long returnValue = 0; for (long i = 1; i <= numberOfIntegersToAdd; i++) { returnValue += i; } return returnValue; } } }
Notes
- Your code may look a little neater when you use the await keyword, but remember that it’s still not really synchronous. The method in which the await keyword appears will return immediately to the calling scope, which then continues on its merry way. So the Console.ReadKey() in the Main() method of the program above will be called before the “calculateSum” method completes. Which means if you hit a key before the calculation finishes, your program will finish without showing any results.
- That usage of Task.Run is kind of ugly, but it’s still cleaner than all the other examples I could find. Or at least it was the least ugly of all the ones that I could get to run at all. In previous betas of C# 4.5 (or whatever version it is), I think they used a slightly different set of methods and classes, which don’t seem to work now. Anyway, if you can figure out a better way, use it. | https://thiscouldbebetter.wordpress.com/2012/05/22/using-the-await-keyword-with-asynchronous-code-in-c/ | CC-MAIN-2018-13 | refinedweb | 471 | 58.99 |
Learn Drools: Part I
Learn Drools: Part I
Drools is a library and rules engine in Java that lets you add business rules (logic) separate from other code in the system. This is an overview and introduction.
Join the DZone community and get the full member experience.Join For Free
Drools Introduction
When we implement a complex software, often we require maintaining a set of rules which will be applied to a set of data for action to be taken on them. In a regular term, we call them rule engine. If there is a small set of rules, we can create our own rule engine that will maintain a certain order and be applied on incoming data to take the decision or categorize the data.
The advantage of maintaining one's own rule engine is that we have a pure control over the rule engine's algorithm. This means that we can easily change the algorithm logic to be simple; we don’t have to rely on the third party for the pattern matching logic.
On the other hand, if the rules are ever-changing or there is a huge set of rules, we will probably not want to maintain our own rule engine as it increases our development cost. Who wants to take this responsibility?
It would be nice if we could delegate that work to some third party that is tested and trusted so we could clearly separate the data and logic.
The third party maintains the rule engine application and we just define the rules as a strategy. It would be nice if it were declarative so that business analysts could understand the logic.
What Is Drools?
Drools is a Business Logic integration Platform (BLiP). Drools is an open-source project written in Java. Red Hat and JBoss maintain Drools.
Drools has two main parts:
1. Authoring
By authoring, we create a rules file for Drools (.drl). This file contains the rule definition in a declarative way. In this file, we can write a set of rules that will be fired at the run time. It is the developer's responsibility to write these rules as per business requirements.
2. Runtime
With the runtime, we create a working memory. It is the same as a session in Hibernate. As a rules file contains a set of rules, the runtime creates memory load. These rules and apply to the incoming data. In Drools, we called incoming data as facts.
Rule: A rule is nothing but the logic that will be applied to incoming data. It has two main parts: when and then.
When: Determines the condition on which the Rule will be fired.
Then: The action; if the rules met the condition, they defines what work this rule performs.
This is the syntax:
Rule <Rule Name>
when
<condition>
then
<Action>
End
In this example, we will greet a person based on current time. We will define the rules in Drools files. Drool will load these rules and fire on the incoming data.
Step 1: Create a.drl (droolRule.drl)file where we will define the rules.
package com.rules import com.example.droolsExample.Person rule "Good Morning" when person: Person(time >= 0, time < 12) then person.setGreet("Good Morning " + person.getName()); end rule "Good Afternoon" when person: Person(time >= 12, time < 16) then person.setGreet("Good Afternoon " + person.getName()); end rule "Good Night" when person: Person(time >= 16, time <= 24) then person.setGreet("Good Night " + person.getName()); end
Please note that here we create three rules: “Good Morning”, “Good Afternoon “ and “Good Night.” In the When section, we check the current time based on the Person POJO’s time property. In the Then section, we set the greeting messages accordingly.
Step 2: Create Person POJO class.
package com.example.droolsExample; public class Person { private String name; private int time; private String greet; public String getGreet() { return greet; } public void setGreet(String greet) { this.greet = greet; } public String getName() { return name; } public void setName(String name) { this.name = name; } public int getTime() { return time; } public void setTime(int time) { this.time = time; } }
Step 3: We create a class named DroolsTest.java.
3a. Load the rule file (i.e., droolsTest.drl) by using InputStream.
3b. Create a package using the above rule and add them into drools PackageBuilder.
3c. Create a RuleBase by using the above Package. Rulebase is the same as Sessionfactory in Hibernate; it is costly.
3d. Create a working memory from this RuleBase. It is same as Session class in Hibernate. This working memory manages the rules and incoming data. Apply the rules on the data.
3e. Add incoming data into working memory. Here, we create a Person Object and add it into Working Memory
3f. Fire all rules.
DroolsTest.java should look like the following:olsTest { public static void main(String[] args) throws DroolsParserException, IOException { DroolsTest droolsTest = new DroolsTest(); droolsTest.executeDrools(); } public void executeDrools() throws DroolsParserException, IOException { PackageBuilder packageBuilder = new PackageBuilder(); String ruleFile = "/com/rules/droolsRule(); Person person = new Person(); person.setName("Shamik Mitra"); person.setTime(7); workingMemory.insert(person); workingMemory.fireAllRules(); System.out.println(person.getGreet()); }
Output:
Good Morning Shamik Mitra.
We set the time for 7 a.m., so it satisfies the Good Morning Rule condition and fires this rule.
Published at DZone with permission of Shamik Mitra , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/learn-drools-part-1-1 | CC-MAIN-2019-26 | refinedweb | 912 | 67.96 |
Important: Please read the Qt Code of Conduct -
Save and Restore QToolbar position
I am building a GUI with a QMainWindow and QToolbars. There are three toolbars in my application. Two of them are created only if some internal conditions are fullfilled.
How can i save and restore the position of a QToolbar? Is it possible?
If i use the standard facilities QMainWindow::saveState and QMainWindow::restoreState, the layout is restored incorrectly, due to a non constant number of toolbars.
Just add some member variables to hold the states of your toolbars, when application is terminated serialize their values to a file, when application is ran again read that file and set your toolbars accordingly.
Suppose i have:
[code]
class MyClass
{
QMainWindow* m_wnd;
QToolBar* m_toolBar1;
QToolBar* m_toolBar2;
QToolBar* m_toolBar3;
xxx saveState(QToolBar* toolBar) const
{
?
}
void restoreState(const xxx&, QToolBar* toolBar)
{
?
}
};
[/code]
How do i save and restore position of m_toolBar? I tried with m_toolBar->saveState / restoreState and that did not work
First of all, use the code wrapper function when pasting code in the forum. Second - that code does not contain toolbars, only pointers to toolbars, in the constructor of your class you will call methods to create the toolbars, you can set the custom variables that hold your toolbar states - whether a toolbar is visible, whether it is floating, where is it docked and so forth, connect the signals emitted when the toolbars are arranged by the user to slots that again store the state in your custom variables. You can put a method that saves those user variables to a file in the destructor, and in the constructor check if the file exists, and if so, read in the data from it and restore the toolbars the same way the user left them when he quit the application.
QToolBar has NO save and restore state methods! Unless someone else knows a standard way to do it with the Qt API, you will have to do it yourself in the manner I keep on describing to you.
connect the signals emitted when the toolbars are arranged by the user to slots that again store the state in your custom variables
Which signals do you mean?
restore the toolbars the same way
Which way do you mean?
It sounds like i should reprogramm the whole QToolbar logic in my code...
What i need is the following:
[code]
ostream << m_toolBar1->position();
ostream << m_toolBar1->orientation();
ostream << m_toolBar1->size();
...
m_toolBar1->setPosition(getPosition(istream));
m_toolBar1->setOrientation(getOrientation(istream));
m_toolBar1->resize(getSize(istream));
[/code]
No, you don't have to touch the logic of QToolBar at all. You only need to serialize a few variables that hold information on whether your toolbars are visible, docked or floating, and respectively where are docked or their floating position. When a toolbar is modified it emits a range of signals such as:
@void visibilityChanged ( bool visible )
void orientationChanged ( Qt::Orientation orientation )@
you also have QToolBar member methods such as:
@bool isAreaAllowed ( Qt::ToolBarArea area ) const
bool isFloatable () const
bool isFloating () const
bool isMovable () const@
[quote author="ddriver" date="1331817648"]... You only need to serialize a few variables that hold information on whether your toolbars are visible, docked or floating, and respectively where are docked or their floating position...
[/quote]
I do not need info about visibility or orientation, i just want to know how i can save and restore the position of a QToolbar... Is it possible?
Have you actually read all the stuff I posted? I already told you how, both in short and in detail, and you still ask? It is basic programming so sorry if I am not rushing to write the code for you. QToolBar already has all the methods you need to copy its state into your custom variables and to restore the state from those variables.
Please do not extend your posts...
[quote author="ddriver" date="1331816587"]QToolBar has NO save and restore state methods! Unless someone else knows a standard way to do it with the Qt API, you will have to do it yourself in the manner I keep on describing to you.[/quote]
[quote author="ddriver" date="1331818311"]I already told you how, both in short and in detail, and you still ask?[/quote]
So, it is impossible, isn't it?
Otherwise, could you show me how to get and restore the position?
[code]
ostream << m_toolBar1->position();
...
m_toolBar1->setPosition(getPosition(istream));
[/code]
Can't you read English? I already told you HOW to do it, by what type of logic you assume it is impossible? Judging from your responses it seems like you need to work on your C++ and basic programming skills before you rush into building applications.
QToolBar inherits a QPoint returning member function called pos from QWidget that holds the position of the toolbar in its parent widget.
Start off by a bool isToolBar1Created, set it to false, when your user creates the toolbar, then set it to true.
Then have another one, bool isToolBar1Docked, when the user docks the toobar, set it to true, when the user detaches the toolbar to be floating, set it to false.
Then have a third variable that depending on whether the toolbar is floating or not, contains either its position or the location it is docked at.
When the user quits the application, write the three variables to a file. When the application is ran again, read in the file, if the first variable is false, then you don't create a toolbar, if it is true, then you create the toolbar, when you create it, check the second variable, if it is true then read the third for the dock location and dock the toolbar in that location. If it is false, set the toobar to floating and move it to the position stored in the third variable.
100% possible as you see.
Please stop editing your posts.
[quote author="ddriver" date="1331819488"]I already told you HOW to do it, by what type of logic you assume it is impossible?[/quote]
You said everything, but not a word about a position. I need a postion, a relative position, a floation position, not visibility, not if its dockable or not, not orientation. A position. Just a position.
[quote author="ddriver" date="1331819488"]
QToolBar inherits a QPoint returning member function called pos from QWidget that holds the position of the toolbar in its parent widget.[/quote]
Also, learn how to use the PLENTIFUL and DETAILED Qt documentation.
bq. pos : QPoint
This property holds the position of the widget within its parent widget.
You also have:
@void move ( const QPoint & )@
which, in case you cannot figure out, moves the widget to the QPoint location.
As you probably (don't) see, it is not only ENTIRELY possible, but QUITE EASY...
I also suggest you remove that foolish "[Impossible in Qt!]" qualifier from your thread and rethink your strategy - I really don't think you will go far into programming if you expect others to do the thinking and reading for you.
It is cool how you change your posts in the past from the future...
Certainly not as cool as being a lazy ungrateful jackass to people who waste their time to help you. Good luck, I am done with you!
[quote author="ddriver" date="1331829830"]Certainly not as cool as being a lazy ungrateful jackass to people who waste their time to help you. Good luck, I am done with you![/quote]
I hope you will not come back, dude.
- tobias.hunger Moderators last edited by
ddriver: There is no need to call people names here. Just move on to another thread if somebody annoys you.
[quote author="ddriver" date="1331819988"][quote author="ddriver" date="1331819488"]
QToolBar inherits a QPoint returning member function called pos from QWidget that holds the position of the toolbar in its parent widget.[/quote]
This property holds the position of the widget within its parent widget.
You also have:
@void move ( const QPoint & )@
which, in case you cannot figure out, moves the widget to the QPoint location.
[/quote]
It just does not work with QToolBar, dude.
Try it youself:
[code]
QPoint posa(m_toolBar1->pos());
m_toolBar1->move(pos.x()+10, pos.y());
[/code]
The toolbar stays where it was.
Both pos() and move() work for me perfectly well, when the toolbar is docked it returns and moves to coordinates, relative to the application window, when floating, it is relative to the active display, so the problem is all in your "TV", dude!
@Tobias Hunger - it wasn't a name, but a definition, it is offensive only when it is degrading, if it is justified and objective - I don't think there is something wrong with calling things what they are :)
O, really?
[code]
#include <QtGui>
int main(int argc, char argv[])
{
QApplication a(argc, argv);
QMainWindow wnd1;
QMainWindow wnd(&wnd1);
QToolBar* m_toolBar1 = new QToolBar(wnd);
QAction* action1 = new QAction("test 1", m_toolBar1);
action1->setText("1");
m_toolBar1->addAction(action1);
QToolBar* m_toolBar2 = new QToolBar(wnd); QAction* action2 = new QAction("test 2", m_toolBar2); m_toolBar2->addAction(action2); action2->setText("2"); wnd->addToolBar(m_toolBar1); wnd->addToolBar(m_toolBar2); wnd->setCentralWidget(new QLabel("text")); wnd1.show(); m_toolBar2->resize(200, 100); m_toolBar2->move(200, 200); return a.exec();
}
[/code]
Hi folks, please keep this thread civil;
@.ddriver: You need to change your screen name to something else.
Is that why you deleted your profile? I assumed you were just ashamed and went for a fresh and better start, but seeing how you mock my username, which further solidifies my theory that you are indeed a jackass, I see I overestimated you. I suggest you delete that profile as well, because your behavior gives me a good reason to report you on personal basis, then make yourself a normal profile and start behaving like a grown man, not a spoiled brat.
And just to assure you the methods work perfectly fine, here is a few screenshots in an animated gif and some code:
!!
@void MainWindow::setPos()
{
ui->mainToolBar->move(ui->spinX->value(), ui->spinY->value());
}
void MainWindow::getPos()
{
ui->spinX->setValue(ui->mainToolBar->pos().x());
ui->spinY->setValue(ui->mainToolBar->pos().y());
}@
.ddrivers screen name has been changed for stalking reasons.
And I'm closing this thread. | https://forum.qt.io/topic/14970/save-and-restore-qtoolbar-position/2 | CC-MAIN-2020-45 | refinedweb | 1,702 | 61.26 |
Hello! I need help. In my options scene, I want there to be a drop down, that goes from 1-10, (2 of them, one for x, one for y). I'm using the FPS Controller script using the MouseLook script. I have no idea how do do any of this :(. I just want to access:
public float XSensitivity = 2;
public float YSensitivity = 2;
(on the mouse look script) with the drop down making the defult 2, and making it so you can change them each from 1 through 10. (also making it save the players choice with player prefs and load it again). Thank you.
You seem to know PlayerPrefs and exactly what you want. What do you have trouble with?
player prefs kinda confuses me on how to set the values save them, im a noob. And how I would technically do this, take the Sensitivity from mouselook etc
I know I look like a total noob, but this is what I have ( I need to do it twice for the x and y but their on separate sliders): using System.Collections; using System.Collections.Generic; using UnityEngine;
public class sensitivitySliderScript : MonoBehaviour {
public float xFloat;
void Update () {
MouseLook.XSensitivity = xFloat;
PlayerPrefs.SetFloat("currentX", xFloat);
xFloat = PlayerPrefs.GetFloat("current
340 People are following this question.
How do I prevent my fps player from flying?
1
Answer
I Want to change ,y sensibility on my MouseLook script
1
Answer
How to use PlayerPrefs to save to floats (Sensitivity Help)
0
Answers
Multiple Cars not working
1
Answer
Distribute terrain in zones
3
Answers | https://answers.unity.com/questions/1364814/sensitivity-slider-drop-down-help.html | CC-MAIN-2019-51 | refinedweb | 263 | 64.81 |
Page Navigation and Client Interaction
In this section, we will explore the following concepts:
Page navigation techniques
Taking input from the user
Creating a managed bean
Creating and invoking a do<action> method
These concepts will involve the use of the following JSF centric components, tags and elements:
<h:commandButton>
navigaction-case entries in the faces-config file
managed-bean entries in the faces-config file
So far, we’ve played around a bit with JSF custom tags and internationalization and stuff, but all of that has been largely display generation. If all we wanted to do was display text to the user, we wouldn’t need a Servlet Engine and a JSF framework. No, the really kewl stuff happens when we start taking data from the user, and responding dynamically to the user depending on what type of data and information they deliver to us on the server side.
Two of the biggest challenges associated with building dynamic web applications is first, getting valid information and input from your clients, and secondly, providing an intelligent and informative response to the client based on the information they have provided. So, with these two tasks being the biggest challenges facing a modern day web application developer, nobody should be surprised by the fact that two of the most core features of JSF are the framework’s built-in navigation model, and the framework’s built-in facilities for simplifying the job of obtaining valid user input.
Of course, my challenge is to come up with some slick and friendly application that we could build together to effectively demonstrate the JSF framework’s page navigation and input management capabilities. To keep things fairly fun and light-hearted as we explore some fairly complicated concepts, I thought we’d prototype a little “Rock-Paper-Scissors” type of application.
I know, “Rock-Paper-Scissors” isn’t the most professional and technical of applications that we can develop, but the concept is pretty universal, and using this fun little game will help avoid having to spend too much time trying to explain all of the various use cases and non-JSF related details that might go along with an application that is more technical.
The basic idea of our implementation will be that the client will be asked to pick one of three gestures: Rock, Paper or Scissors. When the client has submitted their choice, the server will choose one of the three possible gestures, Rock Paper or Scissors. And then, the results of the game will be evaluated, and a response summarizing the results of the game will be sent back to the client. It’s all a pretty simple concept.
*Note, the term ‘gesture’ is used to describe the three choices of Rock, Paper and Scissors. The reason the term ‘gesture’ is used is because when the game is played interactively between two people, hand gestures are used to represent the choice of rock, paper or scissors.
Our online version of the Rock Paper Scissors game will involve five web pages, with the first page prompting the end user to type either Rock, Paper or Scissors into a textfield, after which, they will need to click on a submit button to initiate game play.
The game has three possible outcomes, either a win, lose, or tie. And actually, I’m going to include a fourth outcome, which is failure, which could happen if something either goes wrong, or perhaps the client hasn’t provided a valid option for game play. So, to accommodate our four possible outcomes, we will need to create an additional four JSP pages, named win.jsp, lose.jsp, tie.jsp and failure.jsp.
***Images go here.
Overall Application Flow
So, we have established that our little application needs five simple JSP pages: one for input, three to represent a potential result of win, lose or tie, and finally, a failure page to handle any errors or problems. But the question is: how do we get from the input page to the results page? I mean, there is a little bit of logic involved in order to decide whether there has been a win or a loss, and of course, we want to see the JSF framework’s neat page navigation features in action, right? Well, here’s how it is all going to work:
The first page the user will touch will have a textfield and a submit button on it. When the user types in a gesture and clicks the submit button, a request will go across the network and subsequently be captured by our application’s JSF framework. Of course, our application is going to need to first figure out what the user chose as their gesture, chose a gesture of its own, and then perform the multi-variable calculus required to figure out if the client won, lost, or tied. Of course, this is application logic, and we will need to code this application logic into a JavaBean. So, we’re going to need to create a JavaBean called the RPSBean (RPS=Rock, Paper, Scissors), and in that bean we will code a method named doRockPaperScissorsGame(). It is inside this method that we will code the required application logic.
The JSF framework will create an instance of the RPSBean for us, it will invoke the doRockPaperScissorsGame() method at the appropriate time for us, and it will take responsibility for figuring out when the Java Virtual Machine’s garbage collector will be able to get it’s greasy little fingers on the JSF created instance of the RPSBean and dispose of it. So, although we’re going to have to code a thing or two into this RPSBean class, the lifecycle of this JavaBean will be managed by the JSF framework. Given this fact, it’s not surprising that in JSF parlance, we call this type of JavaBean a ‘managed bean.’ All JSF managed beans must be described in the faces-config.xml file.
And speaking of the faces-config.xml file, there are a few other salacious entries that we are going to have to make in there if we want our application to work properly. You see, when our doRockPaperScissors() method executes, it must return a String. The String, which for us will simply be the words ‘win’, ‘lose’, ‘tie’ or ‘failure’, map to a navigation-rule from-outcome entry in the faces-config.xml file. The JSF framework then uses the returned String from the doRockPaperScissorsGame() method to figure out which JSP page will be invoked in order to produce a response. When the JSP is invoked, the appropriate markup gets created, and the response is then sent back to the user.
So, in summary, when the client clicks submit, a request is captured by the JSF framework. The JSF framework will look at the request, and not only create in instance of the appropriate JavaBean, but it will also invoke the method needed to implement the required application logic. When the logic is complete, the method will spit out a text String, and based on a corresponding text entry in the faces-config.xml file, the framework will figure out which JSP page will be called upon for generating a response that will get sent back to the client. And that’s it!
***Diagram showing the flow
The Landing Page
The landing page which is first viewed by the client is tasked with providing a textfield and a submit button, along with a bit of directions, to the end user.
<%@taglib uri="" prefix="f"%>
<%@taglib uri="" prefix="h"%>
<html>
<body>
Please chose between Rock Paper and Scissors:
<h:form
<h:inputText id="gesture" value=”#{rpsbean.clientGesture}”> </h:inputText>
<h:commandButton</h:commandButton>
</h:form>
</body>
</f:view>
</html>
There isn’t too much magic on this page. The required taglib directives decorate the top of the page, we have the standard form tag which then wraps both the h:inputText and the h:commandButton tags.
<h:form
<h:inputText id="gesture" value= ”#{rpsbean.clientGesture}” > </h:inputText>
<h:commandButton</h:commandButton>
</h:form>
The first thing to notice here is the value attribute of the inputText tag. We need to be able to take whatever the client types into the textfield and have that transferred to the server, where we can process it. The ”#{rpsbean.clientGesture}” entry for the value attribute tells the JSF framework that whatever is typed into this textfield should be transferred to a property named clientGesture in a JavaBean that is running on the server that is named rpsbean.
The second thing that needs to be pointed out here is the action attribute of the h:commandButton tag. Notice how the action attribute is set to "#{rpsbean.doRockPaperScissorsGame}"
The hash code sign, #, is a signal that tells the JSF framework that some JSF-specific processing is required. In this case, the JSF framework recognizes the fact that any time a submit button is clicked, processing logic is required, and in this case, the JSF framework will invoke the doRockPaperScissorsGame method of the rpsbean.
Another thing worth pointing out here are the id attributes that have been assigned to the form and inputText elements. JSF links elements and sub-elements according to their names or ids, in a ancestor:parent:child type of syntax. So, when we do processing and try to figure out what a user typed into the textbox, we can identify the inputText field with the id of gesture, inside the form named rpsgame, with the name rpsgame:gesture.
The syntax used here to specify the value of the action attribute is known as JSF expression language, and while the result of using this syntax will be to have the JSF framework invoke a method on a JavaBean, it is worth nothing that the syntax we see here is not a Java syntax. JSF expression langugage is intended to look and feel more like a scripting language, helping to keep JSP pages look cleaner, be free from actual Java code, and subsequently making JSPs easier to write and maintain.
So, our h:inputText tag is looking for a JavaBean named rpsbean so, it would probably be prudent to create such a JavaBean that will fill this need.
The basic outline of the JavaBean will look like this:
package com.mcnz.jsf.bean;
import javax.faces.context.*;
public class RPSBean {
}
}
I’ve placed the bean in a package named com.mcnz.jsf.bean, I have been generous to myself and imported javafaces.context.*, and I have created a simple class named RPSBean. When you create your own class, make sure the RPSB is all capitalized – case is important!
Now, after coding the class declaration, the next thing you’ll want to think about are any instance variables the class might need. Our little RPSBean will probably need to keep track of the client gesture, and the computer’s gesture, so making two String variables, one named computerGesture and the other named clientGesture, probably wouldn’t be such a bad idea.
Now, one thing to note about methods called from the action attribute of a custom tag is the fact that those methods must return a text String.
The text String is supposed to provide the JSF framework some type of guidance as to how to mitigate page navigation, so in our case, we will have four possibilities, the Strings: “failure”, “win”, “lose” and “tie”.
It’s always a good habit to start off your do<action> method by explicitly setting a String named result to a valid value, which in this case we set to failure, and then specifying what will be the last line of code in the method, return result;
package com.mcnz.jsf.bean;
import javax.faces.context.FacesContext;
public class RPSBean {
public String doRockPaperScissorsGame() {
String result = "failure";
/*Method implementation logic will go here. */
return result;
}
}
Of course, we need to keep track of what the client selected, and what the computer has selected, so we’ll add two instance variables of type String to our RPSBean. We will add the setters and getters as well.
Work is work. It is needed in all the stage of life. Please specify what kind of work you wanna need.
kollagen | http://javaserverfaces2-tutorials-jsf20.blogspot.com/2009/08/needs-work.html | CC-MAIN-2018-22 | refinedweb | 2,042 | 55.37 |
JPL Institutional Coding Standard for the C Programming Language
[ version edited for external distribution: does not include material copyrighted by MIRA Ltd (i.e., LOC-5&6) and material copyrighted by the ISO (i.e., Appendix A)] Cleared for external distribution on 03/04/09, CL#09-0763.
Version: 1.0 Date: March 3, 2009
Paper copies of this document may not be current and should not be relied on for official purposes. The most recent draft is in the LaRS JPL DocuShare Library at .
Jet Propulsion Laboratory California Institute of Technology
1
JPL DOCID D-60411
Table of Contents
Rule Summary Introduction Scope Conventions Levels of Compliance LOC-1 LOC-2 LOC-3 LOC-4 LOC-5 LOC-6 References Appendix A (omitted) Unspecified, Undefined, and Implementation-Dependent Behavior in C Index Language Compliance Predictable Execution Defensive Coding Code Clarity MISRA-C:2004 shall Compliance (omitted) MISRA-C:2004 full Compliance (omitted) 4 5 6 6 7 8 10 13 16
19 21
22
2
1 0. to avoid a known problem. 3 .JPL DOCID D-60411 Version History DATE 2008-04-04 2008-05-12 2009-03-04 SECTIONS CHANGED All Rule 13 Revision for external distribution REASON FOR CHANGE Document created Added guidance for the use of extern declarations. and Appendix A REVISION 0.0 Acknowledgement The research described in this document was carried out at the Jet Propulsion Laboratory. © 2009 California Institute of Technology. Government sponsorship is acknowledged. California Institute of Technology.2 1. Copyrighted material omitted: LOC-5 and LOC6. under a contract with the National Aeronautics and Space Administration.
2 Predictable Execution 3 4 5 *6 7 *8 9 10 11 12 13 14 15 16 *17 18 19 20 21 22 23 *24 *25 *26 *27 *28 *29 30 31 Use verifiable loop bounds for all loops meant to be terminating. Do not use direct or indirect recursion. rules *) All rules are shall rules. 4 . Do not hide dereference operations inside macros or typedefs. Do not use selective value assignments to elements of an enum list. Use no more than two levels of indirection per declaration. and #endif in the same file as the matching #if or #ifdef. Use short functions with a limited number of parameters. setjmp or longjmp. Check the return value of non-void functions. except those marked with an asterix. or explicitly cast to (void). Use IPC messages for task communication. Declare data objects at smallest possible level of scope. Use U32. Make the order of evaluation in compound expressions explicit.JPL DOCID D-60411 Rule Summary 1 Language Compliance 1 Do not stray outside the language definition. use static source code analyzers. Place no more than one statement or declaration per line of text. short. etc. Do not use non-constant function pointers. 2 Compile with all warnings enabled. Do not undefine or redefine macros. etc instead of predefined C data types such as int. Place #else. Do not place code or declarations before an #include directive. safety margins. Place restrictions on the use of semaphores and locks. #elif. Do not cast function pointers into other types. 3 Defensive Coding 4 Code Clarity 5 – MISRA shall compliance 73 All MISRA shall rules not already covered at Levels 1-4. Make only very limited use of the C pre-processor. barrier patterns. Do not use expressions with side effects. Do not use dynamic memory allocation after task initialization. rules 6 – MISRA should compliance *16 All MISRA should rules not already covered at Levels 1-4. Use static and dynamic assertions as sanity checks. Check the validity of values passed to functions. Use memory protection. Do not use goto. Do not use task delays for task synchronization. Use no more than two levels of dereferencing per object reference. I16. Explicitly transfer write-permission (ownership) for shared data objects. Do not define macros within a function or a block.
Ken Clark. addresses software risks that are related to the use of multi-threaded software. Al Niessner. rather than maintaining a multitude of project and mission specific standards. and the tools and processes that are used to verify code compliance. Guidelines for the use of the C language in critical systems. Marcel Schoppers. Dave Smyth. starting in 2004.com.” 2 Neither of these two sources. Len Day. Frank Kuykendall. Micah Clark. Bob Rasmussen. ISBN 0 9524156 4 X PDF. Steve Watson. Len Reder. California. Kim Gostelow. 1 which was originally defined for the development of embedded C code in automobiles. 93-95. though. www. California Institute of Technology. MISRA-C 2004. Doug McIlroy (Dartmouth). Todd Litwin. The intent of this standard is not to duplicate the earlier work but to collect the best available insights in a form that can help us improve the safety and reliability of our code. Dave Hecox. should be reviewed for possible revision no more than once per year and no less than once per five years. Chris Grasso. IEEE Computer. By conforming to a single institutional standard. The second source is the set of coding rules known as the “Power of Ten. Tom Lockhart. Glenn Reeves. Mary Lam. Gerard Holzmann. This standard aims to fill that void. Rajeev Joshi. all those above are employees of the Jet Propulsion Laboratory. Will Duquette. Mike Roche. Kirk Reinholtz. Igor Uchenik. Alex Murray. Their contributions (which do not necessarily imply the endorsement of this document) are gratefully acknowledged here. Many software experts both inside and outside JPL have contributed to the creation of this document with proposals for good coding rules. we can achieve greater consistency of code quality at JPL. Ken Starr.misra-c. in Pasadena. Roger Klemm. Eddie Benowitz. Alex Groce. Benjamin Cichy. Dave Wagner. pp. Two earlier efforts have most influenced the contents of this standard. MIRA Ltd. Steve Scandore.JPL DOCID D-60411 Introduction Considerable efforts have been invested by many different organizations in the past on the development of coding standards for the C programming language. Ed Gamble. June 2006. Kenny Meyer. Peter Gluck. This rules included in this standard. Garth Watney. Steve Larson. Lloyd Manglapus. but is today used broadly for safety critical applications. Dan Dvorak. include Brian Kernighan (Princeton University). Joe Hutcherson. 2004. Dan Eldred. Robert Denise. The first is the MISRA-C coding guideline from 2004. Scott Burleigh. People that have contributed in the preparations for this standard. Dennis Ritchie (Bell Labs). Tim Canham. Matt Wette. 1 5 . and critiques of those contained in earlier standards. Nicolas Rouquette. 2 The Power of Ten: Rules for Developing Safety-Critical Code. Jesse Wright. Unless otherwise noted.
With few exceptions.. A good example of architectural and structuring principles for software systems can be found in the ARINC 653-1 standard for safety critical avionics software. Should indicates a preference that must be addressed.. Airlines Electronic Engineering Committee. Such 3 E.g. to document copyright. Maryland.g. which generally operate under stricter resource constraints than. software test requirements. but with deviations allowed. and change history). and Release 653P1-2 from 7 March 2006. D57653 4 ARINC Specification.. version control systems. the development environment (choice of computers. Release 653-1 from 16 October 2003. with compliance verified. USA. Such additional requirements should be defined and documented separately in accordance with applicable controlling documents from JPL Rules. Rev 1. If a deviation from a shall rule is sought. ownership. General project and mission specific requirements that concern the context in which software is developed (e. project or mission specific requirements that fall outside the scope of this standard: File and directory organization.g. commenting and annotation.. the scope of this standard is further restricted as much as possible to the definition of coding rules that can reduce the risk of software failures. formatting. with tool-based checks). naming conventions.g.. substantial supporting evidence must be provided in a written waiver request. Aeronautical Radio Inc. This means that the rules are focused on embedded software applications. D-74352 and Software Development. An effort is made to limit shall rules to cases for which compliance can effectively be verified (e. general principles of software architecture also fall outside the current scope. compilers. the format of file headers (e. For conciseness. • • Shall indicates a requirement that must be followed. fall outside the current scope. operating systems. static analyzers.g. process related requirements) but not the code itself. e. Software Development Standard Processes. Avionics Application Software Standard Interface.JPL DOCID D-60411 Scope The coding rules defined here primarily target the development of mission critical flight software written in the C programming language. ground software. provided that an adequate justification is given for each deviation. conventions for the use of telemetry channels or event reporting. 6 . 4 Conventions The use of the verbs shall and should have the following meaning in this document. Rev 6. etc). build scripts or makefiles. 3 The following are some specific examples of process..
21.6. achieving full compliance with this standard – at least through level 4. It is also possible to certify compliance at different LOC levels for different parts of a large code base. it will not be sufficient for the cognizant engineer or the project or mission lead to approve a waiver from a shall rule. ranging from the most general to the most specific.5. The number of rules defined at each LOC is summarized in the following Table. should determine which level is appropriate. 19. 19.10. 7 .17 Power of Ten Rules 1. developed before this standard went into effect. not associated with the project seeking the waiver. Level of Compliance LOC-1 Language Compliance LOC-2 Predictable Execution LOC-3 Defensive Coding LOC-4 Code Clarity MISRA-C:2004 Rules 1. the most closely related rule in the MISRA-C:2004 standard or the Power of Ten rule-set is quoted. 8. the amount of effort needed to achieve compliance will increase with each new level.5. 19.5 For each rule given. This trade-off can be different for heritage code. For existing code.4 6.4 19.JPL DOCID D-60411 waiver requests must be evaluated by a team of software experts from across JPL.1.1. Because these rules are part of a JPL Institutional standard. 6. 16.3 11. 19.2. 10 2. is not expected to have a measurable impact on schedule or cost.13. 12. 3 5. Levels of compliance certification for each project or mission should be defined in the project’s Software Management Plan (SMP).1. Schedule and cost considerations. weighed against mission risk.1. .2. 17. 8. 16.7. Levels of Compliance This standard defines six levels of compliance (LOC). Compliance with this standard can be certified for each level separately. For newly written code.1 9. 2. 9 5 That is.4. 14. 13. 19. 20.3.3. an independent institutional approval process must be followed for significant deviations.1. 1.3. The name of each segment is meant to be suggestive of its approximate purpose.2. 19. 20.10. 7 4. preferably with the help of tool-based compliance checkers. 16. Level of Compliance Rules Defined at Level 2 LOC-1 Language Compliance 10 LOC-2 Predictable Execution 7 LOC-3 Defensive Coding 12 LOC-4 Code Clarity 73 LOC-5 MISRA-shall rules 16 LOC-6 MISRA-should rules Cumulative Number of Rules Required for Full Compliance 2 12 19 31 104 120 The rules defined at LOC-1 through LOC-4 correspond to the following MISRA-C and Power of Ten rules. 8.12.
using a command line with minimally the following option flags: gcc –Wall –pedantic –std=iso9899:1999 source.c A suggested broader set of gcc compiler flags includes also: -Wtraditional -Wshadow -Wpointer-arith -Wcast-qual -Wcast-align -Wstrict-prototypes -Wmissing-prototypes -Wconversion 8 . and can be understood. with no errors or warnings resulting. can be analyzed by a broad range of tools. as contained in the ISO/IEC standard definition. All code shall further be verified with a JPL approved state-of-the-art static source code analyzer. The rule prohibits straying outside the language definition. undefined and implementation dependent behavior in C. and forbids reliance of undefined or unspecified behavior.1] This rule should be considered routine practice. The closely related #warning directive is not defined in the language standard. The C language standard explicitly recognizes the existence of undefined and unspecified behavior. Given compliance with Rule 1. A list of formally unspecified. which are by definition implementation defined and outside the language proper. with no errors or warnings resulting. this means that the code should compile without errors or warnings issued with the standard gcc compiler.1. [MISRA-C:2004 Rule 1. even for non-critical code development. with no reliance on undefined or unspecified behavior. is given in Appendix A. debugged.2] The purpose of this rule is to make sure that all mission critical code can be compiled with any language compliant compiler. and its use is supported. It ensures that there is no hidden reliance on compiler or platform specific behavior that may jeopardize portability or code reuse. This rule also prohibits the use of #pragma directives. 1.JPL DOCID D-60411 LOC-1: Language Compliance Rule 1 (language) All C code shall conform to the ISO/IEC 9899-1999(E) standard for the C programming language. [MISRA-C:2004 Rule 21. and maintained by any competent C programmer. The #error directive is part of the language. but its use is allowed if supported by the compiler (but note Rule 2). Rule 2 (routine checking) All code shall always be compiled with all compiler warnings enabled at the highest warning level available. tested.
If the compiler or the static analyzer gets confused. 9 . and produce sparse and accurate messages. the code causing the confusion should be rewritten so that it becomes more clearly valid. The JPL recommended static analyzers are fast. Many developers have been caught in the assumption that a tool warning was false. only to realize much later that the message was in fact valid for less obvious reasons.JPL DOCID D-60411 The rule of zero warnings applies even in cases where the compiler or the static analyzer gives an erroneous warning.
Power of Ten Rule 3] Specifically. One way to enforce secure loop bounds is to add an explicit upper-bound to all loops that can have a variable number of iterations (e. The two rules combined secure a strictly acyclic function call graph and control-flow structure. attempting to allocate more memory than physically available. code that traverses a linked list).2. and help to secure predictable performance for all tasks. etc. The reason is simple: memory allocators and garbage collectors often have unpredictable behavior that can significantly impact performance. sbrk(). pre-allocated.4. This rule is common for safety and mission critical software and appears in most coding guidelines. [MISRAC:2004 Rule 16.. after task initialization. using stray pointers into dynamically allocated memory. When the upper-bound is exceeded an assertion failure and error exit can be triggered. 10 . It shall be possible for a static compliance checking tool to affirm the existence of the bound. The absence of recursion also simplifies the task of deriving reliable bounds on stack use. For standard for-loops. alloca(). overstepping boundaries on allocated memory. Such a server loop shall be annotated with the C comment: /* @non-terminating@ */. A notable class of coding errors stems from mishandling memory allocation and free routines: forgetting to free memory or continuing to use memory after it was freed. Power of Ten Rule 1] The presence of statically verifiable loop bounds and the absence of recursion prevent runaway code. Rule 5 (heap memory) There shall be no use of dynamic memory allocation after task initialization. this rule disallows the use of malloc(). [MISRA-C:2004 Rule 20. and similar routines. An exception is allowed for the use of a single non-terminating loop per task or thread where requests are received and processed.JPL DOCID D-60411 LOC-2: Predictable Execution Rule 3 (loop bounds) All loops shall have a statically determinable upper-bound on the maximum number of loop iterations. the loop bound requirement can be satisfied by making sure that the loop variables are not referenced or modified inside the body of the loop. which in turn enhances the capabilities for static checking tools to catch a broad range of coding defects. [Power of Ten Rule 2] Rule 4 (recursion) There shall be no direct or indirect use of recursive function calls.g. Forcing all applications to live within a fixed. area of memory can eliminate many of these problems and make it simpler to verify safe memory use.
This style of software architecture is based on principles of software modularity. when used for locking. and interrupt mask and unmask operations. havoc. Each task or module should maintain its own data structures. No task should directly execute code or access data that belongs to a different task. If the guess is wrong. Rule 9 (semaphores and locking) The use of semaphores or locks to access shared data should be avoided (cf. Only the owner of a data object should be able to modify the object. Rules 6 and 8). but it does place a restriction on how tasks use such modules. Rule 7 (thread safety) Task synchronization shall not be performed through the use of task delays. including deadlock. should always appear in pairs. Communication and data exchanges between different tasks (modules) in the system are best performed through a disciplined use of IPC (inter-process communication) messaging. and not allow direct access to local data by other tasks. interrupt masking and data locking to achieve task synchronization. Ownership equals write-permission. Note that this rule does not prevent the use of system-wide library modules that are not associated with any one task. IPC messages should then contain only data. can be the result. to avoid access conflicts that can lead to data corruption. to 11 .JPL DOCID D-60411 Rule 6 (inter-process communication) An IPC mechanism should be used for all task communication. calls shall always occur in a single predetermined. All IPC messages shall be received at a single point in a task. Unlock operations shall always appear within the body of the same function that performs the matching lock operation. If used. if a shared object does not have a single owning task. nested use of semaphores or locks should be avoided. If such use is unavoidable. Rule 8 (access to shared data) Data objects in shared memory should have a single owning task. within the same function. Semaphore acquire and release operations. order. preferably via IPC messages. The use of a task delay for task synchronization requires a guess of how long certain actions will take. Specifically the use of task delays has been the cause of race conditions that have jeopardized the safety of spacecraft. Ownership should be passed between tasks explicitly. data hiding. and documented. Callbacks should be avoided. Generally. and the separation of concerns that can avoid the need for the often more error-prone use of semaphores. and never any function pointers. preferably no data pointers. access to that object has to be regulated with the use of locks or semaphores. but non-ownership generally will not exclude readaccess to a shared object.
[MISRA-C:2004. The use of nested semaphore or locking calls in more than one possible order can cause deadlock. Rule 9.JPL DOCID D-60411 comply with the second part of Rule 9. or access to memory outside allocated regions. Semaphore operations can also validly be used for “producer-consumer” synchronization. when supported by the operating system. Mission critical code should not just be arguably.4. an area of memory above the stack limit allocated to each task should be reserved as a safety margin. Critical parameters should similarly be protected in memory by placing safety margins and barrier patterns around them.3] 12 . A health task can detect stack overflow anomalies by at regular intervals checking the presence of the bit-pattern for each task. unless all items are explicitly initialized. but trivially correct. safety margins and barrier patterns shall be used to allow detection of access violations. When not available. In those cases acquire and release operations may appear in different tasks. For instance. i.e. Rule 14. and filled with a fixed and uncommon bit-pattern. [MISRA-C:2004. so that access violations and data corruption can be detected more easily. the "=" construct shall not be used to explicitly initialize members other than the first. Rule 12 (enum Initialization) In an enumerator list.. memory protection shall be used to the maximum extent possible. Rule 11 (simple control flow) The goto statement shall not be used. The same principle can be used to protect against buffer overflow. Power of Ten Rule 1] Simpler control flow translates into stronger capabilities for both human and tool-based analysis and often results in improved code clarity. Rule 10 (memory protection) Where available. There shall be no calls to the functions setjmp or longjmp.
The use of distributed state information can significantly reduce code transparency. and then place an extern declaration to the same object x in another source file. double). Similarly. its value cannot be referenced or corrupted. most current compilers (including gcc with all warnings enabled at the highest setting) and most current static analyzers. Although their use is sometimes unavoidable. [MISRA-C:2004 Rule 16. incompatible purposes. the easier it is to diagnose the problem. Especially function parameters should be declared with the type qualifier const wherever possible..g. Rule 14 (checking return values) The return value of non-void functions shall be checked or used by each calling function. The header file must be included in every file that refers to the corresponding data object: both the source file in which the actual declaration appears and the files in which the object is used. Without precautions. Clearly. 8.g.7. Power of Ten Rule 6] This rule supports a well-known principle of data-hiding. int) in one source file. The rule discourages the re-use of variables for multiple. that avoid storing local state. if an erroneous value of an object has to be diagnosed. there is a hidden danger in the use of extern declarations in C. The correct remedy for this significant flaw in current compiler technology is to: Place all extern declarations in a header file. If this rule is followed.10. No declaration in an inner scope shall hide a declaration in an outer scope. which complicates fault diagnosis. the compiler will be able to flag all type inconsistencies reliably. and that do not modify data declared in the calling function indirectly. as recommended in Rule 10).10.. reduce the effectiveness of standard software test strategies. if the two types have different size (as in our example of int and double) havoc will result (mitigated only partially by the use of barrier patterns. the fewer the number of statements where the value could have been assigned. and complicate the debugging process if anomalies occur. if we declare a global data object named x as type A (e. will not detect the type inconsistency. or explicitly cast to (void) if irrelevant. Good programming practice is further to prefer the use of immutable data objects and references. This means that data objects should by preference be declared of C type enum or with the C qualifier const. The rule is consistent with the principle of preferring pure functions that do not touch global data.JPL DOCID D-60411 LOC-3: Defensive Coding Rule 13 (limited scope) Data objects shall be declared at the smallest possible level of scope. [MISRA-C:2004 Rule 8. while accidentally using another type B (e. If an object is not in scope. Power of Ten Rule 7] 13 . Note the similarity in this treatment of extern declarations and the standard use of function prototypes (which follows very similar rules).
When an assertion fails. [MISRAC:2004 Rule 20. Rule 16 (use of assertions) Assertions shall be used to perform basic sanity checks throughout the code. Power of Ten Rule 7] This is consistent with the principle that the use of total functions is preferable over non-total functions. All functions of more than 10 lines should have at least one assertion. an explicit recovery action should be taken. Statistics for industrial coding efforts indicate that unit tests often find at least one defect per one hundred lines of code written. [Power of Ten Rule 5] Assertions are used to check for anomalous conditions that should never happen in reallife executions. The syntax #e turns the assertion condition e into a string that is printed as part of the error message. Assertions must be side-effect free and can be defined as Boolean tests.JPL DOCID D-60411 Rule 15 (checking parameter values) The validity of function parameters shall be checked at the start of each public function. __FILE__ and __LINE__ are predefined by the macro preprocessor to produce the filename and line-number of the failing assertion. __LINE__. the 6 A public function is a function that is used by multiple tasks. A total function is setup to handle all possible input values. 14 . \ __FILE__. the call to tst_debugging can be turned into a call to a different error-logging routine after testing. false) In this definition. library functions are typically re-entrant.%d: assertion '%s' failed\n". and loop-invariants. No assertion should be used for which a static checking tool can prove that it can never fail or never hold. Because in flight there is no convenient place to print an error message. by returning an error condition to the caller of the function. In flight. e. The odds of intercepting defects increase with a liberal use of assertions. parameter values.. #e). such as a library function. 6 The validity of function parameters to other functions shall be checked by either the function called or by the calling function.g. A recommended use of assertions is to follow the following pattern: if (!c_assert(p >= 0) == true) { return ERROR. In a multi-threaded environment. not just those parameter values that are expected when the software functions normally. Assertions can be used to verify pre.and post-conditions of functions.3. } where the assertion is defined during testing as: #define c_assert(e) ((e) ? (true) : \ tst_debugging("%s. expected function return values. they can be selectively disabled after testing in performance-critical code. Because assertions are side-effect free.
To check the opposite requirement. An even stronger check can be provided by static assertions that can be evaluated by the compiler at the time code is compiled. A static assertion can be defined like the c_assert above. but can be used standalone (i..3] This rule appears in most coding standards for embedded software and is meant to enhance code transparency and secure type safety.JPL DOCID D-60411 assertion then turns into a Boolean test that protects. This assertion will trigger a “division by zero” warning from the compiler when the code is compiled on 32-bit machines (thus triggering Rule 2). The examples above are for dynamic assertions that can provide protection against unexpected conditions encountered at runtime. Rule 17 (types) Typedefs that indicate size and signedness should be used in place of the basic types. [MISRA-C:2004 Rule 13. i. and enables recovery. [cf. etc.e. U16 for unsigned 16-bit integer variables. the following static assertion can be used: c_assert( 1 / (sizeof(void *) & 4) ).2] Rule 19 The evaluation of a Boolean expression shall have no side effects. to make sure that we are executing on a 32-bit machine only. automatically logging every violation encountered. not in a conditional). from anomalous behavior.. MISRAC:2004 Rule 12. This version will trigger the “division by zero” warning from the compiler when the code is compiled on machines that do not have a 32-bit wordsize. for instance as follows: c_assert( 1 / ( 4 – sizeof(void *)).e.1] 15 . Typical definitions include I32 for signed 32-bit integer variables. [MISRA-C:2004 Rule 6. Rule 18 In compound expressions with multiple sub-expressions the intended order of evaluation shall be made explicit with parentheses.
.4). developers often have to resort to using earlier implementations as the referee for interpreting complex defining language in the C standard.e. MISRA-C:2004 Rule 19. [MISRA-C:2004 Rule 19. 1024) possible versions of the code. variable argument lists (ellipses) (cf. [MISRA-C:2004 Rule 19.) There is rarely a justification for the use of other conditional compilation directives even in large software development efforts.and tool-based checkers. Rule 21 (preprocessor use) Macros shall not be #define'd within a function or a block.1). Specifically. MISRA-C:2004 Rules 19. also years after it is written. #if. Each such use should be justified in the code. Note that with just ten conditional compilation directives. or extend the code reliably. The purpose of these rules is that all code remains readily understandable and maintainable. Code clarity cannot easily be captured in a comprehensive set of mechanically verifiable checks. and especially when examined under time pressure and by anyone other than the original developer.12 and 19.5] Rule 22 (preprocessor use) #undef shall not be used. Rule 20 (preprocessor use) Use of the C preprocessor shall be limited to file inclusion and simple macros. so the specific rules included here serve primarily as examples of safe coding practice. [Power of Ten Rule 8] The C preprocessor is a powerful obfuscation tool that can destroy code clarity and befuddle both human. the use of token pasting (cf. (See also Rule 23.JPL DOCID D-60411 LOC-4: Code Clarity Especially mission critical code should be written to be readily understandable by any competent developer. In new implementations of the C preprocessor. even with a formal language definition in hand. revise.13). The rules in this section aim to secure compliance with this requirement. The use of conditional compilation directives (#ifdef.6] 16 . All macros are required to expand into complete syntactic units (cf. The effect of constructs in unrestricted preprocessor code can be extremely hard to decipher. there could be up to 210 (i. Code does not just serve to communicate a developer’s intent to a computer. but also to current and future colleagues that must be able to maintain. MISRA-C:2004 Rule 16. without requiring significant effort to reconstruct the thought processes and assumptions of the original developer. each of which would have to be tested – causing a generally unaffordable increase in the required test effort. #elif) should be limited to the standard boilerplate that avoids multiple inclusion of the same header file in large projects. and recursive macro calls are excluded by this rule.
Each function should be a logical unit in the code that is understandable and verifiable as a unit. this means no more than about 60 lines of code per function.17] Rule 24 There should be no more than one statement or variable declaration per line.JPL DOCID D-60411 Rule 23 (preprocessor use) All #else. Rule 26 The declaration of an object should contain no more than two levels of indirection.1] 17 . A single exception is the C for-loop. and increment) can be placed on a single line. Rule 30 (type conversion) Conversions shall not be performed between a pointer to a function and any type other than an integral type. Rule 25 Functions should be no longer than 60 lines of text and define no more than 6 parameters. It is much harder to understand a logical unit that spans multiple screens on a computer display or multiple pages when printed. Long lists of function parameters similarly compromise code clarity and should be avoided. loop bound. [Power of Ten Rule 9] Rule 28 Pointer dereference operations should not be hidden in macro definitions or inside typedef declarations. Excessively long functions are often a sign of poorly structured code. where the three controlling expressions (initialization. [Power of Ten Rule 4] A function should not be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration. [MISRA-C:2004 Rule 11.5] Rule 27 Statements should contain no more than two levels of dereferencing per object. [MISRA-C:2004 Rule 19. Typically. #elif and #endif preprocessor directives shall reside in the same file as the #if or #ifdef directive to which they are related. Rule 29 Non-constant pointers to functions should not be used. [MISRA-C:2004 Rule 17.
They can make it hard to follow or analyze the flow of data in a program. and when alternate means are provided to maintain transparency of the flow of control. especially by tool-based checkers. [MISRA-C:2004 Rule 19. Function pointers especially can restrict the types of checks that can be performed by static analyzers and should only be used if there is a strong justification. it can become difficult for tools to prove absence of recursion.] 18 . typedefs (where not provided in system-wide include files). macro definitions. [Levels 5 and 6 omitted in this version for copyright restrictions – consult the original MISRA C guidelines for details. file static declarations. followed by function declarations. In these cases alternate guarantees should be provided to make up for this loss in analytical capabilities.JPL DOCID D-60411 Pointers are easily misused. even by experienced programmers.1] The recommended file format is to structure the main standard components of a file in the following sequence: include files. Rule 31 (preprocessor use) #include directives in a file shall only be preceded by other preprocessor directives or comments. If function pointers are used. external declarations.
Other relevant publications and standards: European Space Agency (ESA) Board for Software Standardization and Control.. Addison_Wesley. Kernighan and Dennis M. UK Ministry of Defence. C-Style standards and guidelines. Secure Coding in C and C++. 2005. Writing Solid Code.'' IEEE Computer. Jerry Doland and Jon Vallett. Second Edition. Report SEI-94-003. October. 2002. 1999-12-01. 2000. David Straker. 19 .Rules for Developing Safety Critical Code. 2nd Edition 1988. Thomas Plum. 1997. Software Eng. C Programming guidelines. Part 2: Guidance. ISBN 0-201-17928-8.C. 538 pgs. 1992. 1989. 1989. Tech. 1994. NY. 1978. Andrew Koenig. Prentice Hall. ISBN 0-13-116898-3. ISBN 0-911537-07-4. Guidelines for the use of the C language in critical systems. 1992. Aug. Date of ISO approval 5/22/2000. MISRA-C: 2004. June 2006. Steve Maguire. pp. Software Considerations in Airborne Systems and Equipment Certifications. 2004. Safer C: Developing software for high-integrity and safety-critical systems. Brian W. ISO/IEC 9899:1999 (E) – Programming Languages – C. Microsoft Press. Plum Hall. March 30. ''The Power of Ten -. Code 552. C Traps and Pitfalls. Goddard Space Flight Center. Robert C. Washington D. 1995. 1993. Aug. DOD-178B. UK Ministry of Defence. McGrawHill. 93-95. Published by ANSI. Inc. Les Hatton.JPL DOCID D-60411 References Primary documents: Motor Industry Software Reliability Association (MISRA). Ritchie. Defence Standard 00-55. RTCA. New York. Seacord. The C Programming Language. C Style Guide. Branch. Prentice-Hall. C and C++ Coding Standards. International Standard. Requirements for safety related software in defence equipment. Addison-Wesley.
C (joint strike fighter air vehicle) from 2005 (154 rules) SIM Realtime Control Subsystem Coding Rules from 2005 (JPL/SIM) (24 rules) MSL Coding Rules from 2006 (JPL/MSL) (82 rules) Rules suggested by JPL developers. 2007 (38 rules) 20 .JPL DOCID D-60411 Other documents and standards consulted Spencer's 10 commandments from 1991 (10 rules) Nuclear Regulatory Commission from 1995 (22 rules) Original MISRA rules from 1997 (127 rules) Software System Safety Handbook from 1999 (34 rules) European Space Agency coding rules from 2000 (ESA) (123 rules) Goddard Flight Software Branch coding standard from 2000 (GSFC) (100 rules) MRO coding rules from 2002 (LMA) (132 rules) Hatton's ISO C subset proposal from 2003 (20 rules) MSAP coding rules from 2005 (JPL/MSAP) (141 rules) JSF AV Rules Rev.
] 21 .JPL DOCID D-60411 Appendix A Unspecified. • • The following more detailed list is reproduced from Appendix J in ISO/IEC 9899-1999. Undefined behaviour: The definition of the language can give no indication of what behavior to expect from a program – it may be some form of catastrophic failure (a ‘crash’) or continued execution with some arbitrary data. Undefined. or the order in which the actual parameters in a function call are evaluated. Example: the range of values that can be stored in C variables of type short. undefined and implementation defined behavior – the following may suffice (based on a definition proposed by Clive Pygott in ISO SC22 in its study of language vulnerabilities: • Unspecified behaviour: The compiler has to make a choice from a finite set of alternatives. [Remainder omitted. int. All references contained in this list are to numbered sections in the ISO document. Example: the order in which the sub-expressions of a C expression are evaluated. for copyright restrictions. and Implementation-Dependent Behavior in C As a short synopsis of the basic definition of unspecified. but that choice is not in general predictable by the programmer. Implementation defined behaviour: The compiler has to make a choice that is clearly documented and available to the programmer. or long.
10 scope. 17 O order of evaluation. 17. 6 J L libraries. 16 embedded software. 11 A alloca. 8. 11 multiple inclusion. 7 22 . 17 typedefs. 12 error. 16 T task communication. 10 buffer overflow. 13 indirection. 15 stack. 11 token pasting. 12 sbrk. 10 recursive macro. 6. 10. 13 dereferencing. 12 bound. 10 F function parameters. 12 modularity. 12 P pedantic. 9. 12 stack overflow. 14 locking. 14. 18 E ellipses. 2. 10 const. 15 G goto. 15 synchronization. 11 R recursion. 12 loop. 14 N non-terminating loop. 13 semaphores. 16 total functions. 8 pragma. 11 task delays. 12 static assertions.JPL DOCID D-60411 Index hiding. 10 memory protection. 10 stack limit. 6. 11 side effects. 14 S safety margins. 8 preprocessor. 14 pure functions. 15 M malloc. 11. 17 IPC. 10 loop-invariants. 12 shared objects. 18 public function. 13 B barrier patterns. 16. 17 dynamic assertions. 11 C call graph. 8. 13 D data-hiding. 8. 15 W warning. 13. 13 H heritage code. 14 typedef. 14. 15. 16 return value. 15 enumerator list. 11 setjmp. 12 JPL Rules. 9 assertions. 14 I immutable data. 11 longjmp. 10 analyzer. | https://de.scribd.com/document/202857269/JPL-Coding-Standard-C | CC-MAIN-2019-18 | refinedweb | 6,446 | 59.9 |
B E C K M A N
SEE FRONT COVER ON PAGE 8 CALEB SHIIHARA | RAYMOND VASCO CLARISSA HERNANDEZ | MICHELLE NAZARENO
Evacuations & LOCAL BARN burned down
ALL AROUND ALL-STARS
Student athletes with EARLY ADMISSION
a1
C O N T E N T S 03 Letter from the Editors 04 Meet the Staff 05 Starting on a High Note 06 Canyon Fire 2 Blazes Through OC 07 Beckman Solar Panel Installations Delayed 08 Fall Style: Male 10 Fall Style: Female 12 Teachers Pay It Forward 13 "What's Your Deal?" with Elis 14 Senioritis, a Virulent Disease 15 ACTs, SATs, and GPAs 16 Stressed Student Skincare: Survival Guide 18 Wilson Lu: Makeup Prodigy 20 Jessica Ark "Focuses on the Bigger Picture" 21 Ivana Rusich Spikes Her Way to USC 21 Digital Artist Megan Dang 22 Bon AppĂŠtit: Featuring All-Natural Dining 24 Athletic Scholarships 26 US Loses Spot at the World Cup 26 Astros Steals the Championship 27 Rigorous Rowing in SoCal Makes a Splash 28 Varsity Girls Tennis: CIF Champions 30 "Stranger Things" Season Two Delivers 31 Dia de Los Muertos: A Celebration of the Dead 32 Beckman Rewind
Letter Editors from the
Dear Patriots, New layout, different style, more creative freedom. Life as a student is all about new experiences to prepare for a life beyond high school; we believe that the Beckman Chronicle should reflect that. In the past few years, our staff count has been steadily increasing. With a large number of new journalists, the Beckman Chronicle decided to move from newspaper publication to magazine production. Additionally, to keep up with the media demands of the 21st century, we revamped our website,, and post new articles weekly. However, this transition could not have been completed without the support and patient guidance of our advisor, Mr. Tanara. He encouraged our creative ideas and challenged us to think outside of the box to cover stories never heard before. We also thank Mrs. Manning for her advice regarding our magazine design and layout. In taking on this new project, we’ve done our best to amplify the students’ voices. We hope you will enjoy the work our team has put into this magazine. The theme of this issue is "Nights & Lights.” With the year coming to a close and winter settling in, it feels like the end of an exhausting all nighter. However, after this long and troublesome night, we can’t wait for the daybreak of next year. In the next semester, seniors can finally take a breather after the mad rush of completing college applications, students start their classes with a blank slate, and spring sports can launch into season. This year, we’ve focused on creating and curating more in-depth and creative articles— finding the extraordinary in the ordinary. We hope this magazine inspires adventure, proprovides insights, stimulates conversations peers. as vides newnew insights, andand stimulates conversations withwith youryour peers. We We wishwish youyou luckluck as you you through flip through the pages ofBeckman the Beckman Chronicle, guided byneon the neon lights. flip the pages of the Chronicle, guided by the lights. Have fun, The Editors-in-Chief
VOLUME 1: ISSUE 1 | THE BECKMAN CHRONICLE The Beckman Chronicle is a student-run publication that highlights accomplishments of students and faculty and celebrates diversity on campus.
ADVISOR Ryon Tanara
EDITORIAL STAFF (top; left to right) Editor-in-Chief | Director of Innovation | Digital Media Editor Victoria Choi Editor-in-Chief | Sports Editor Gena Huynh Editor-in-Chief Shannon Zhao Photography Editor | Design Editor Karolyne Diep
(right column; left to right) News Editor Ethan Prosser News Editor Stephanie Xu Editorials & Opinions Editor Hanna Kim Editorials & Opinions Editor Soowon Lee Features Editor Ruby Choi Features Editor Cynthia Le Arts & Entertainment Editor Daeun Lee Arts & Entertainment Editor Michael Lee
Front Cover Photo Karolyne Diep Front Cover Layout Victoria Choi Front Cover Model Caleb Shiihara
STAFF WRITERS
GUEST PHOTOGRAPHERS
News Alshaun Rodgers, Leena Shin, Montreh Sohrabian, Srihitha Somasila, Ivanna Tjitra, April Wang
Dyan Jaime & Marva Shi
Editorials & Opinions Dyan Jaime, Thomas Jang, Daniel Kang, Cindy Lim, Mariah Perry, Meganne Rizk, Jessica Vo Features Amelia Chung, Sameer Ghai, Rachel Ker, Nelson Lou, Aastha Sehgal, Emma Trueba Sports Dawson Bartelt, Matthew Basilio, Clarissa Hernandez, Devon McCoy, Allison Perez Arts & Entertainment Alyssa Arroyo, Aarushi Bhaskaran, Lauren Brown, Kevin Mateos, Rumsha Mussani, Keerthi Nair, Abby Pond, Ashley Singh
SPECIAL THANKS TO The Choi Family, Ewha Graphics 4790 Irvine Blvd. #105-151 Irvine, CA 92620 info@ewhagraphics.com
4
for more information on how to submit your own work in our next edition, please contact us!
FRONT COVER
STARTING ON A HIGH NOTE Layout by Victoria Choi | Recommendations by the Beckman Chronicle Staff
Want to update your playlist? Consider some music recommendations from the Chronicle Staff. Some are oldies, some are newbies. Some are chill, while others are more upbeat. Take some time to indulge in new music styles—maybe you'll discover a genre of music to serve as the soundtrack to your afternoon run, or those late night study sessions!
R&B/Soul Stigma Down NYLA Lights On Best Part Sunday Candy Unstoppable
Pop 21 Fire Rain Perfect Wolves Attention Lost Stars Easy Love Bad at Love Dusk till Dawn Good Old Days Too Good at Goodbyes
Alternative Rock V Emily King Blackbear H.E.R. Daniel Caesar (ft. H.E.R.) Donnie Trumpet & the Social Experiment Lianne La Havas (ft. FKJ) Dean BTS Heize Ed Sheeran Selena Gomez Charlie Puth Adam Levine lauv Halsey Zayn Macklemore (ft. Kesha) Sam Smith
Dance/Electronic Silence 4 Walls Sur ma route Sleep Talking Friends
Marshmello (ft. DJ Khalid) f(x) Black M Charlotte Lawrence Justin Bieber & BloodPop®
The Way I Do Bad at Love Out on the Town Electric Love
Bishop Briggs Halsey fun. Børns
Acoustic Art Exhibit Intertwined La Vie En Rose Something to Believe In [Live Acoustic]
The Things We Used to Share
Young the Giant dodie Daniela Andrade (Orig. Edith Piaf) Young the Giant Thomas Sanders
Oldies Killer Queen Down Under September Dancing Queen Fly Me to the Moon Can't Take My Eyes Off You Dream a Little Dream of Me Never Gonna Give You Up ;)
Queen Men at Work Earth, Wind & Fire ABBA Frank Sinatra Frankie Valli Doris Day Rick Astley 5
Canyon Fire 2 Blazes Through Orange County By Leena Shin | Photograph Courtesy of Steve Lopushinsky
This is the aftermath of the Canyon Fire 2 at the Peters Canyon Hiking Trail. Smoke clouds the sky and fire rages on.
On Monday, Oct. 9, a wildfire spread throughout the city of Anaheim and surrounding communities, resulting in mandatory evacuations for residents. High winds and low humidity fueled the quick-moving fire, forcing residents of Tustin, Orange and Anaheim to pack up their belongings and leave their homes. Many Beckman students and staff were impacted by the blaze. The Canyon Fire 2, named after the first Canyon Fire that took place in September, started in Coal Canyon near the 91 Freeway. Around 3,580 structures were destroyed, damaged or threat-
S
E
ened during the eight-day battle. Sophomore Amanda Boktor was one of the many students displaced by the blaze.“I was really anxious because I didn’t want my neighbors’ houses or my house to be affected by the fire,” said Boktor. “My family and I were not allowed to go home because the fire had gotten so bad. We spent two nights at a hotel, and we were allowed to go home in the morning on Wednesday.” Because the fire impacted so many students from Tustin, Superintendent Dr. Gregory A. Franklin made the decision on Monday to close all Tustin Unified campuses the following day. “Initially, I was happy that school was closed,” said junior Connor McGuire. “But I became pretty worried after I started to think about all the people who would be impacted.” In addition to school closures and mandatory evacuations, many businesses were forced to close down. One facility in particular, the Peacock Hills Equestrian Center, found itself in a unique predicament. The 500-acre horse riding facility was in the direct path
M
E
S
T
of the fire, forcing employees and local residents to quickly transport the horses housed in the stables to safety. When sophomore Cambria Cox heard about the need for volunteers, she did not hesitate to drop everything and rush to help. “I texted my mom, ‘There’s a fire and we need to go,’” said Cox, who has been riding at the facility for three years. Some horses were trailered out of the center while others were walked to a local Albertson’s parking lot. Volunteers and riders did everything they could to help ensure that these animals reached safety. The center, a safe haven to nearly 200 horses, was severely affected by the fire. The jumping facility burned down first, and the other buildings quickly followed. Firefighters were able to contain the fire by Tuesday, Oct. 17. Many families and businesses are in the process of repairing the damage or rebuilding their homes. Fortunately, no lives were lost during the week-long ordeal, but many families from the affected areas are still on edge as the California fire season seems to be growing longer and longer each year.
E
Neon Nights Dance - 9/1
Senior Sunrise - 9/2
Beckman students attended the Welcome Back Dance in early September where they danced the night away under the bright neon lights in the Beckman gymnasium.
6
Photograph courtesy of Paulina Vitarella
R The senior class of 2018 arrived at Citrus Ranch Park to watch the sun rise and to welcome their final year at Beckman.
Photograph courtesy of Marva Shi
Beckman Solar Panel InstallationS DelayeD By Emma Trueba | Creative Commons The Tustin Unified School District (TUSD) has entered a 25year Purchase Power Agreement with Partners For Many Generations Solar (PFMG Solar) to buy electricity at a fixed rate for its schools. Beckman is one of the 15 sites to break ground on a solar project that will install about 331 solar cells with parking canopies in the front and back parking lots during the 2017-18 school year. TUSD created this solar project to take advantage of the grandfathering current Time-of-Use (TOU) periods and net metering electric rates. According to a Frequently Asked Questions document released by TUSD, the solar project is estimated to save the District approximately $19-24 million. The first phase was to be completed by Sept. 14, 2017, but has yet to be started. A major factor contributing to the delay is a shortage of solar cells due to high demand. There has been a nation-wide solar cell problem due to the bankruptcy of the Solarworld company, the largest U.S. solar panel maker. Due to this bankruptcy, TUSD is unable
R
to start construction on the parking canopies until more solar panels are available. According to an interview with Chris Pelissie, cofounder of Senga Energy, there are solar panels available for a project of this size, but when it comes to school districts, “they’re slow.” The panels are available, but due to price gouging, the purchase of the solar panels would be very expensive. If this project is not completed before Dec. 31, 2017, the district will not receive the benefits associated with the grandfathering TOU periods and net-metering rates. The district has filed to extend the deadline to Aug. 31, 2018 in order to be able to complete all of the 15 solar proects. As of now, there are no senior parking permits being sold, and once construction starts, there will be a significant decrease in available parking spots. Principal Donnie Rafter is aware of these concerns. “It is going to be a mess, but we don’t have a choice,” said Rafter. “At this point we know this is the right thing to do. We are working with the city to get variance on parking.” Variance on parking means that
E
Club Rush - 10/6
C
A
Solar panels scheduled to be installed in Beckman parking lots were delayed due to multiple unexpected causes.
the no-parking zones on El Camino along the sides of the school would become parking zones. Beckman is also requesting Irvine Company Properties behind the school to open their lots during construction. However, even if these areas were opened to Beckman students, traffic may still be an issue. “It would still be chaotic,” said senior Baylee Franklin who drives to campus. “It would help, but it would cause a lot more traffic and congestion in the streets.” The long-term benefits of this project are said to outweigh the initial chaos the school must endure in order to install the solar panels.
P
Game Concert - 10/18
Beckman clubs recruited new members during the semi-annual Club Rush event. Student leaders promoted their organizations to prospective students.
Orchard Hill’s Symphonic Orchestra and Beckman's Chamber Orchestra played scores from famous games such as “Tetris” and “Kingdom Hearts.” Photograph courtesy of Karolyne Diep
Photograph courtesy of Ivanna Tjitra
7
Fall Style Layout & Photographs by Karolyne Diep
RAYMOND VASCO Raymond prefers an urban-hipster style created by layering different pieces to bring dimension to his outfits. He pairs a Henley shirt with a flannel, throws on a pair of comfy jeans and finishes the look with black Vans. You can find Raymond looking through the clothing racks at ASOS, H&M and Zumiez.
CALEB SHIIHARA Caleb puts his own twist on a sophisticated street style. He purchases staple articles of clothing from stores like Cotton On, Forever 21 and TJ Maxx and makes them his own. Why pay more for ripped jeans and t-shirts when you can DIY? He distresses his clothing and rips holes in his shirts so he doesn’t have to burn a hole in his wallet.
8
Distressed & Faded Black Jeans Zumiez
Palm Tree Button-Up H&M Knitted Cardigan ASOS
Olive Bomber Jacket Forever 21 Black Hoodie H&M DIY Distressed LongSleeve Pacsun
Denim Ripped Jeans Cotton On
9
Striped Silk Blouse Thrifted
DIY Ripped Black Jeans Thrifted
Brown Suede Jacket Forever 21 Cream Sleeveless Mock Top Tilly’s
Black Button-up Pencil Skirt Tilly’s 10
CLARISSA HERNANDEZ With a cream mock top, brown suede jacket and a black pencil skirt, Clarissa prefers to adhere to a chic style. She highlights accents by pairing pieces that complement each other, bringing more personality to the outfit while still focusing on the overall look. Clarissa’s style is perfect for SoCal’s version of autumn. Her top stores to shop at are Urban Outfitters, Brandy Melville, Tilly’s and PacSun.
MICHELLE NAZARENO Between a vintage and modern style, Michelle likes to find a middle ground that incorporates both aspects into her outfits. She shops at Forever 21 for staple pieces, but prefers thrifting most of her outfits from Goodwill. Thrift shops give her the freedom to tailor pieces to her own taste, as she likes to recycle old clothing to capture that vintage look. Michelle prioritizes comfort, wearing loose and flowy pieces that align with her aesthetic.
11
Teachers Pay It Forward
Layout by Victoria Choi | Photographs by Karolyne Diep
"Being doesn’t
cool matter
in at
high school all. It’s s u c h
Some things that BHS teachers wish students knew...
a small part of your life and you leave. All the drama and all of the pressure doesn’t matter."
B el z
"Don't
feel like you have to take everything on that happens with school, friends, relationships, family, etc. [...] focus on yourself first, and don't let anything else take away from that.
"Enjoy the learning process for what you will gain,
Don't take it on, whatever it may be, if it's going to affect your life in a n e g a t i v e w a y. "
not just for the letter grade you will receive."
S anju r j o
Shekarchi
“Adopt the mindset of learning as a means of developing confidence.
No one expects kids to have the right answer all the time. So, if a student knows it is okay to take risks, and that it is okay to be wrong once in awhile[...]he or she is much more likely to find success and contentment and confidence in my class. I can teach English; I cannot teach confidence.”
Ha l lst rom “Sleep! It is so hard to put your best foot forward when you are lacking sleep. Sometimes
the best feeling in the world is shutting down the phone and computer and getting s o m e q u a l i t y s h u t - e y e .” Tig he 12
Responses by Cynthia Le Gena Huynh Soowon Lee Victoria Choi Stephanie Xu Shannon Zhao Daeun Lee
Dear Elis,
How do you stay focused and not get behind in AP US History work?
Sincerely,
Apt Andrew
I'm on a club board right now and the other members on my board aren't carrying out their duties. At this point, I kind of want to deck each and every one of them. Do you have any advice?
Sincerely,
Anxious Avocado
Dear Elis,
Is being “fake” the solution to ending friend drama?/ How do I prevent myself from drama among friends?
Annoyed Anna
To s u b m i t y o u r ow n q u e s t i o n t o E l i s , p l e a s e u s e t h e G o o g l e Fo r m i n t h e l i n k i n b i o o f @ b e c k m a n c h r o n i c l e o n In s t a g r a m .
Apt Andrew, Split up your work into smaller portions. But, let’s be real, you’ve heard this same advice many times before. Despite Mr. McGill, Mr. Friendt and Mr. Goldenberg telling you not to, you’ll continue to procrastinate. After you finish season 2 of “Stranger Things,” then you'll realize the sheer amount of coursework that you have to complete, wallow in your incapabilities for 15 minutes and pull an all-nighter while making empty promises to yourself not to procrastinate on the next unit. Get a “mom” friend who will nag you when you fall behind on coursework.
Elis
Anxious Avocado, Working with an uncooperative group of people can be very frustrating; however, you should talk to the other board members and express your concerns. They may then realize that they haven’t been doing their job and work harder to compensate. If all else fails, honestly, just deck ‘em. Just kidding.
Hoping you solve your problems without a referral,
Elis
Annoyed Anna, Although it may seem easy to act like everything is fine, in the long run, it’s going to continuously bother you. Talk to the friend in question and sort your feelings out with each other, but don't push yourself to be friends with those that don’t deserve your friendship; leave toxic people behind.
Don't be a snake,
Dear Elis,
Elis
Although photosynthesis does produce some ATP, these molecules are not used to do the work of the plant cells. What other process occurs in the cells that provides the ATP necessary to do cellular work such as make proteins, divide cells and move substances across membranes?
Sincerely,
with Elis
Start adulting,
Dear Elis,
Sincerely,
What's Your Deal?
Sciency Sam
Sciency Sam, Please do your own biology homework. Refer to Mr. Kim, Mr. Hu, or Mr. Chow in their respective rooms (211, 218, 209) for help! Or Wikipedia ;)
Yours truly,
Elis 13
senioritis: a virulent disease
S
By Cindy Lim
Senioritis: noun. The ill- ceptable for a multitude of reasons. sity has since denied those claims. ness, although not listed First off, all colleges do look Whatever the issue may be, seniors in the Centers for Dis- at senior year grades from second do not want to compromise their ease Control and Pre- semester. In an article published admission to future schools. Acvention A-Z index, is in The New York Times in 2012, cording to the National Association very real and slowly taking hold of the dean of Connecticut College for College Admission Counseling, unexpecting seniors at Beckman. Martha C. Merrill described the approximately 22% of colleges have UrbanDictionary.com describes consequences of students slack- revoked an admissions offer in rethe condition best: “A crippling dis- ing off in their senior year. “Every cent years. If one falters under a ease that strikes high school seniors. once in awhile, I find myself having senior year workload, the possibilSymptoms include: laziness, an ex- to send what I call an 'oops' letter. ity of rejection becomes a reality. cessive wearing of track pants, old This letter informs the student that Some may argue that Seathletic shirts, sweatpants, athletic we have noted a downward turn in nioritis is understandable because shorts and sweatshirts. Also fea- performance and request a written seniors spend months preparing tures a lack of studying, repeated explanation. That response is in- for the college application process absences and a generally dismissive cluded in the student’s permanent while balancing school work. Hitattitude. The only known cure is a record. Students with significantly ting “submit” on that last college phenomenon known as Graduation.” poor academic performance during application can feel like finally When college applications senior year need to know their of- crossing the finish line. However, are sent off to admissions officers fer of admission can be revoked.” what many students fail to realize around the country, seniors begin Today, colleges are becom- is that completing applications is to let go of their responsibilities and ing more selective, especially due not the end of the journey—it is only focus on having fun in their to the growing competition among just the beginning. For example, a final year. Students no longer find students. The more competitive huge grade dip can affect students’ joy in the day-to-day schedule of at- the school, the more likely that it financial aid opportunities. While tending classes as they realize that a will expect higher excellence in all some students’ acceptances may be mediocre performance will suffice. subject areas. If this requirement safe after a grade dip, some finan However, before all the af- is not met, there is a lower chance cial aid packages may not be imfected seniors’ accumulated accom- of admission and a potentially mune. This is because many merit plishments crumble at the founda- higher likelihood of rescinding scholarships are based on grade tion, it is essential to examine the admissions offers. Just this year, point averages (GPAs). Thus, if a potential consequences of Seniori- UCI rescinded 499 admissions of- student’s GPA suffers during the tis and contemplate the detrimen- fers to incoming freshman just last semester, he or she may not tal impact of this horrid disease. two months before the start of the even be eligible for the scholarship. The majority of 12th grad- school year. The university shared After examining all the ers are the victims of Senioritis as that some of these students had consequences and explanations they believe that their senior year inadequate grades during their se- above, it is essential that one does does not matter as much as their nior year. After public backlash, the not simply neglect the responsiprevious years. Students should un- university readmitted some of the bilities and duties as a high school derstand that senior year is one of students. Critics claim that UCI senior. Sweat pants and basketthe most important years and ac- did not expect so many students to ball shorts are acceptable. But knowledge that Senioritis is not ac- accept their offer, but the univer- slipping academically….not okay. 14
I
ACTs, SATs & GPAs
By Daniel Kang | Illustrations by Shannon Zhao
In order to get into highly selective universities, high school students take advanced classes—honors and AP courses—and are expected to have an outstanding GPA at all times. On top of rigorous classes, students are constantly burdened by standardized testing. Scholastic Aptitude Tests (SAT) and American College Tests (ACT), are the two standardized exams that are critical to the college application process. The score a student receives can either improve or worsen their college application, and subsequently, affect their entire college career. However, there is much more to a student than the scores they receive on a few exams. Standardized testing should not be required due to the tremendous amount of issues it creates in students’ lives. Standardized testing is one of the main factors when selecting college applicants, but applicants should be given the chance to be seen beyond their scores. According to the Washington Post, standardized testing does not accurately assess students on what they have learned. SATs and ACTs only focus on mathematics and English curriculums. Graduation requirements mandate that students take a variety of classes in all subject areas. Art, video production and history are just a few classes that cover some of the A-G requirements, yet students are never offered the opportunity to demonstrate their abilities in these subject areas on a
standardized test. Even if students excel in these courses, they never have the chance to demonstrate their full potential and intelligence beyond their level of competency in math and science. According to PBS SoCal, a student in San Francisco scored 800’s on both the Biology SAT Subject Test and the
"There is so much MORE to a student than the scores they receive on a few tests."
toms listed on ADAA website. This must be taken into consideration. SAT and ACT scores do not seem to be going away anytime soon, but some colleges and universities are starting to remove the requirement from their application process. Yale, Columbia, Dartmouth and other universities are dropping certain standardized testing requirements. Although they have not done away with the test scores completely, it is a sign that things are slowly changing. Hopefully, as time goes on, students will be evaluated more holistically, and non-academic skills, character and citizenship will be weighted more heavily. Freeing the standardized testing requirement from college applications could be the final puzzle piece to achieving fair assessment.
Chemistry SAT Subject Test but only scored a 1250 on the SAT. Is this student a competitive candidate for college admission based on their SAT score? It is hard to say. But then again, numbers only shed light on part of the story. Test anxiety is another factor that is not taken into consideration. Some may argue that test anxiety is not real. The Anxiety and Depression Association of America Upperclassmen struggle to prepare (ADAA) would disagree. Nausea, for standardized testing on top of headaches and shortness of breath their already-heavy load of mandatory schoolwork. are just a few of the physical symp15
Stressed Student Skincare SURVIVAL GUIDE
By Gena Huynh & Hanna Kim | Layout by Gena Huynh Students today experience great amounts of stress. It gives us headaches, uncontrollable frustration and worst of all: acne. Being a stressed-out, hormonal teen is the perfect combination for acne to form. Here are some tips and tricks to combat acne!
Step Two: Toner
How Acne Forms
When your blood sugar level rises, your body naturally produces insulin to bring it back down. However, when insulin gets released into your system, it disrupts the hormone balance which results in an increase of androgen. Androgen triggers more sebum, or oil, to be produced. This sebum is thicker than normal sebum. Due to a differnece in consistency, your pores get clogged and ta-dah! Acne is born!
Where to Start
Starting a skincare routine can be daunting due to the sheer amount of products in the market with vague descriptions on what they actually do. The key is to start slow.
Step One: Cleanser
Cleanse once in the morning and at night. Don’t cleanse too much as it over dries your skin. This causes your skin to overproduce oil, which in turn produces more acne. We reccomend Burt's Bees Intense Cream Cleansers, Burt’s Bees Chamomile Deep Cleansing Cream and COSRX’s Low pH Morning Gel Cleanser.
16
The toner helps reset the pH of your face. Your face’s pH is naturally acidic, but is slightly buffered after cleansing. The toner will help hydrate your skin and recover its natural oils. We reccomend Skinfood’s Tea Tree Clearing Toner for its anti-inflammatory properties.
Step Three: Moisturizer
Moisturizer forms a barrier which prevents water from evaporating through your skin. Even if you have oily skin, use moisturizer! Your skin is dry on the inside and is overproducing sebum to compensate. To combat oiliness, we recommend IOPE’s Plant Stem Cell Cream!
Bottom Row (L to R): Skinfood Strawberry Sugar Mask Wash Off, Acure Pore Refining Clay Mask, Laneige Water Sleeping Mask, Acure Brightening Fask Mask, Skinfood Black Sugar Mask Wash Off Top Row (L to R): Avocado and Oatmeal Mask Purifying Clay Facial Mask, Charcoal and Black Sugar Dual-Action Scrub Facial Mask, Honeydew and Chamomile Mask Overnight Cream Mask (All CVS Brand)
Top It All Off: Masks! Skinfood’s sheet masks contain extracts from a multitude of foods. Their tomato mask helps pack the skin with antioxidants. When your skin is irritated, aloe helps sooth away all redness. Lastly, tangerine peel extract helps the skin retain moisture!
Feel like your skin needs an extra pick-me-up? Try a face mask! Face masks act as intensive treatments for your skin when it is in need of moisture or cleansing. For example, if your skin is feeling especially inflamed, a calming aloe or tea tree mask should calm down your skin. Masks are extremely user-friendly. Before application, cleanse and tone. Apply the mask as stated on the directions. Sheet masks should stay on for 20 minutes. Masking is a great way to intensively treat your skin, but should be used sparingly. At the most, you should mask twice a week to prevent overdrying.
The OKA “Buckle Up the Pore” Pore Care Maskpack is perfect for nourishment. The mask contains witch hazel, mugwort, and green tea extracts which help soothe redness and minimize the appearance of pores. Green tea provides antioxidants that are beneficial to the skin. 17
18
Wilson Lu: Makeup Prodigy By Rachel Ker | Photograph Courtesy of Wilson Lu Specks of powder balancing on the thin bristles of a makeup brush possess the power to alter an appearance with a few strategic strokes. Often times, makeup is seen as a cover up—a mask to hide flaws and blemishes. This is not the case for makeup prodigy senior Wilson Lu. His passion for makeup is palpable. It is his artform, his outlet, his talent. The faces of his friends and family are his canvas, and over the last few years he has developed his craft. Wilson’s passion for makeup began with his infatuation with watching “RuPaul’s Drag Race” as a child. This show did not just entertain, it was the inspiration that gave way to his current aspirations.Tra-
sive to women only, and I liked that it’s something different. It really inspires people to go for what they want and not let gender rules stop them from what they want to do.” When he was a freshman, Wilson started to take makeup more seriously. He began to teach himself how to properly apply eyeshadow and foundation, and with time, his skillset started to take off. According to Wilson, learning to develop his makeup skills on his own was all about trial and error. What he could
“I realized it’s not really something that is exclusive to women only, and I liked that it’s something different.”
ditionally, makeup has been seen as something strictly feminine, but Wilson did not allow gender stereotypes to define what he could and could not do. “I realized it’s not really something that is exclu-
Talya Israel showing off her flawless pink glitter eyeshadow, done by makeup artist Wilson Lu.
not learn on his own, he learned by enrolling in a few makeup classes. By the end of his freshman year, Wilson started to develop a reputation as a makeup artist on campus. Wilson started to use his talent to benefit his peers. He was asked by a few friends on his color guard team in marching band to do their makeup. He saw this as an opportunity to practice his craft so he turned to his trusted brands Mac, Anastasia Beverly Hills, ColourPop and Glitter Injec-
tions to get his teammates performance-ready. His makeup sessions with friends slowly evolved into a small business where he is commissioned by parents to take care
“It really
inspires people to go for what they want to and not let gender roles stop them from what they want to do.”
of their children’s makeup needs for homecoming and other events. With all this talent and a possible business in the making, it would seem that Wilson is destined for makeup fame, but his dreams lie elsewhere. “I may become a makeup artist one day, but my main focus is to be an environmental scientist. If that fails, then a makeup artist sounds cool as a plan B.” The discovery of a talent is one of the greatest gifts life has to offer, and Wilson has embraced his. Societal norms may sometimes get in the way, but Wilson has learned to overcome the barriers and break the norms. Rather than using makeup as a cover up, he uses it as a form of art to celebrate his own identity. 19
Jessica Ark "Focuses on the Bigger Picture" By Emma Trueba | Photographs Courtesy of Jessica Ark With her Nikon in hand, wherever she goes, sophomore Jessica Ark is always taking pictures. From planned photo shoots in Los Angeles with her friends to action shots for the Beckman yearbook, Jessica is constantly capturing memorable moments. Coming from a long line of photographers, Jessica was exposed to the art of photography at an early age. Her father, a professional photographer, has been Jessica’s role model and they have formed a unique bond through their shared love of photography. “Anytime we do something, he always has a camera on him. We come closer together as a family because he always wants us to do something cool.” After receiving her first camera at the age of four, Jessica constantly worked toward improving and honing her photography skills. She later received her first professional camera the summer before 9th grade and has not stopped snapping photos since. She pours her own time and effort into making her photos picture-perfect. When she is not editing photos, Jessica spends time planning photo shoots with her best friends Alexis Ton and Nicole Ton. Every photographer has their own personal style. Jessica prefers blurry backgrounds with the subject of the picture in focus. She emphasizes people as the central 20
component in her pictures, muting the backgrounds in order to enhance the clarity of the person. She shies away from harsh lighting and specifically schedules pictures on cloudy days to achieve a neutral color palette. Her favorite time to shoot photos is between 4 p.m. to 6 p.m. because of the warm lighting it provides. While some photographers shoot raw, unedited photographs, Jessica prefers to edit her photos. “A picture captures something that is really important at the moment and captures emotions.” She
Jessica enjoys taking picutres of her friends. Here, Alexis Ton, Jessica’s best friend, smiles at the camera.
takes great pride in her photos, and spends every spare moment editing them until they meet her standards. Not only is Jessica an exceptional photographer, she’s also an excellent student. She is deeply involved in yearbook, where she is able to share her passion for photography with the entire student body. “She is a hardworking student and is always full of energy,”
said Mr. Blair, the yearbook advisor. Jessica’s passion motivates her to get her school work done as quick as she can in or-
“A picture
captures something that is really important at the moment. It captures emotions.” der to free up her time to work on her photos. “I have to finish if I really want to go to this photo shoot. It makes me want to get work done if I want to have fun.” Despite her love of photography, Jessica’s dream is to become a pediatrician. Her grandfather, a retired surgeon, inspired her to follow this line of work. She loves working with kids and spends time volunteering to support students with special needs on campus. Jessica is also involved in the Beckman Relay for Life Club, an organization that raises money for cancer patients. After high school, she will continue to take photographs whenever the opportunity arises.
Ivana Rusich spikes her way to USC By Sameer Ghai | Photograph Courtesy of Ivana Rusich At the beginning of Octo- stood out when recruiters watched ber, junior Ivana Rusich commit- her play. On school days, Ivana ted to play beach volleyball for the practices for at least two hours, but USC Beach Volleyball team. Al- on other days, Ivana can be found though playing for one of the top training for the entire day. When teams in the nation seemed like she is not practicing, she is honing a distant dream, Ivana knew that her skills in local tournaments in her hard work and determination Orange County and Los Angeles. would earn her a spot of that level. She has also competed in national When she found out she level tournaments at Las Vegas and made the team, Ivana was over- Phoenix. She is a decorated athlete come with emotion. “I was moti- with numerous tournament titles vated and excited.” Since entering and trophies. Her notable achievethe Beckman Girls’ Volleyball team, ments include winning “Best of the Ivana has dedicated countless hours Beach,” placing second at nationals to earn a spot on a college team. and third at the Junior Olympics. With another year to prepare Southern California is saturated with talented beach volleyball play- at Beckman before competing at the ers, so she knew she needed to work collegiate level, she cannot wait to that much harder to make sure she put on Trojan colors and “Fight On!”
Junior Ivana Rusich flawlessly delievers a jump serve at her opponents.
The Paint that Never Dries: Digital Artist Megan Dang By Amelia Chung | Photograph Courtesy of Megan Dang As Megan Dang’s eyes fo- deavors, but she never expected that cus on the screen, she carefully her artwork would attract 13,300 sketches out facial features with followers. Her art account displays extreme detail. She uses blobs of a wide range of artworks featuring color to imitate lights and shad- her main interests, which include ows, eventually creating an intri- fanart, pop-art, musical theater and cate snowy background. Freshman culturally-inspired art designs. The Megan is an exceptional artist that imagination that she puts into her shares the beauty of digital art with drawings has no limits and conothers on her Instagram account nects with thousands of people. dedicated to her artistic creations. Megan’s high-quality, cre In May of 2016, Megan start- ative artwork requires relented her art account on Instagram to less hours and an undying pasdisplay her artwork for other peo- sion for art. Megan spends about ple online. She began posting a cou- one to two hours producing art ple of her sketches, simply doing it for her art account every day to because she wanted to try new en- keep her followers up-to-date .
Exceptional artist Megan Dang celebrates her birthday at Irvine Spectrum.
For Megan, art is more of a hobby and stress reliever rather than a future job. She aspires to keep creating and sharing wonderful artwork that she and others can appreciate. 21
BON APPÉtit All-Natural Dining
featuring
Let's be honest, who doesn't love food? We all want delicious food, but sometime we don't have the money or time to go explore for ourselves. In this edition, the editors of the Beckman Chronicle took the opportunity to bring you the best "all-natural" dining spots. We chose to feature restaurants and shops that focus on bringing you the most delicious, organic and pure ingredients. We hope you enjoy and take these recommendations into account. Bon appétit!
Breakfast Mo c h a L at t e
Av o c a d o To a s t
Tru Bru Cafe 7 6 2 6 E C h a p m a n Av e , O r a n g e , C A 9 2 8 6 9 EDITORS’ FAVORITE: Chai Latte & Avocado Toast Right when you walk in, you are met with a cozy, warm environment decorated with quirky succulents and modern aesthetics. However, what makes this little cafe stand out is their delicious food. Taking a bite of their French Toast or Avocado Toast and a sip of their lattes will keep you coming back for more of their all-natural and organic treats. 22
Fre n c h To a s t & C h a i L at t e
Lunch/Dinner Sharky's woodfired mexican grill 6 7 2 5 Q u a i l H i l l P k w y, I r v i n e , C A 9 2 6 0 3
EDITORS’ FAVORITE: Chicken Fajita Bowl What is better than all-natural comfort food? Affordable all-natural comfort Mexican food! From the chicken in your plate to the salsa on the counter, Sharky’s Woodfired Mexican Grill offers fresh, vibrant flavors in bountiful portions for a more-than-reasonable price. Try any of their featured dishes. They will leave you stuffed and satisfied. O ’ Shine Taiwanese Kitchen 1 3 8 3 4 R e d H i l l A v e , Tu s t i n , C A 9 2 7 8 0
EDITORS’ FAVORITE: Popcorn Chicken O’Shine Taiwanese Kitchen brings outstanding food to the table. The homey, cafe vibe and subtle lighting really helps guests chill out, and there isn’t a single dish here that exceeds $15. O’Shine’s Crispy popcorn chicken with basil is a perfect bite to grab with friends. The subtle, but tasty seasoning brings the flavors of the chicken. Pop in for a fresh bite of poppin’ popcorn chicken with your pops!
Dessert Honeymee 4 1 4 Wa l n u t Av e , I r v i n e , C A 9 2 6 0 4
EDITORS’ FAVORITE: Misugaru Milkshake Finish the day with some cold delights! The sweet, creamy, 100% true milk ice cream at Honeymee is something worth trying. Add a swirl of all-natural honey or a honeycomb, and you will be unBEE-lievably happy that you took our advice. The honeycomb brings in a unique texture with a simple, natural sweetness. If ice cream isn’t your thing, Honeymee also offers delicious waffles, milkshakes, teas and coffees that will satisfy any palate. 23
24
A
THLETIC By Gena Huynh
Balancing grueling practice schedules, frequent games and academics is no walk in the park, but it is the typical schedule for student athletes at Beckman. The hard work they have put into their craft is amazing, and undoubtedly, these students will achieve great
SCHOLARSHIPS
Photographs by Marva Shi
things in the future. While they already know their destination for the next four years, unexpected challenges are sure to come. But these Patriot athletes are well-equipped to handle any challenge, on or off the field. We've appreciated all of their hard work in representing Beckman. Congratulations to all Patriot athletes who have received athletic scholarships to universities! Student athletes not pictured will be featured in a photo gallery on the Beckman Chronicle website ().
25
U.S. Loses Spot at the World Cup By Clarissa Hernandez | Creative Commons
The U.S. Men's Soccer Team faces defeat against Trinidad, losing their spot in the World Cup.
On Tuesday, Oct. 10, the U.S. Men’s National Soccer Team failed to qualify for the 2018 World Cup, for the first time in three decades, after their 1-2 loss against Trinidad. This year, 211 nations worldwide attempted to qualify for the World Cup, but only 32 teams earn the privilege to play in this prestigious tournament. In order to move on in the
competition, the U.S. needed to at least tie against Trinidad. Trinidad lost six straight qualifier games before facing the U.S., so it appeared that the odds were in the Americans’ favor. However, the U.S. did not fare so well. Defender Omar Gonzalez scored an own goal in the 17th minute after a failed attempt to clear a cross. Less than 20 minutes later, Trinidadian defender Alvin Jones scored from well outside the penalty box, giving his team a 2-0 advantage. At the beginning of the second half, American midfielder Christian Pulisic gave his teammates and fans hope after scoring from the top of the penalty box. Unfortunately, his efforts were not enough for the U.S to qualify. In response to the Ameri-
ASTROS STEALS
THE
cans’ loss, Bruce Arena resigned as head coach. The result stirred up various responses across the soccer world, including one from former U.S. national soccer team member and current national broadcaster for Fox Sports, Alexi Lalas. “It is a dark time indeed, but this is a time for leaders to step up,” said a frustrated Lalas in his broadcast following the game. “So, it's time to pay it back, make us believe again. You don't owe it to yourself, you owe it to us.” According to the Pew Research Center, 94.5 million people in the United States (about 31% of the population) watched at least 20 consecutive minutes of the last World Cup. However, it is improbable that the viewership will surpass this number next year in the absence of the men’s national team.
CHAMPIONSHIP
By Devon McCoy & Guest Writers Connor McGuire and Jacob Halpern | Creative Commons
The Houston Astros win their first franchise championship in the 2017 World Series.
The L.A. Dodgers forced a game seven in the World Series against the Houston Astros but ultimately fell short losing 5-1. Although the Dodgers were desperate for a championship after a 29-year drought, Houston fans definitely needed this victory in the midst of the grueling recovery process 26
post-Hurricane Harvey. Game seven went south when Dodgers pitcher Yu Darvish allowed a leadoff double to George Springer of the Astros. The Astros wasted no time scoring. Third baseman Alex Bregman grounded out on a pitch from Darvish, driving in Springer, giving the Astros a one run lead. They tacked on another in the inning, jumping ahead by two runs. The Astros stretched their lead with a surprising run batted in groundout from their pitcher Lance McCullers Jr., and a two-run home run from Springer. With this home run, Springer became the first player ever to homer in four consecutive World Series games.
Darvish was left in for one more batter until Springer homered, which ended Darvish’s night. The home run was Springer’s fifth of the series, opening up a commanding five run lead for the Astros. The Dodgers did push across a run during the sixth inning. Charlie Morton, Astros' fifth starter in their rotation, got into trouble after giving up a single to Joc Pederson and walking Logan Forsythe. After getting catcher Austin Barnes to pop out to shortstop Carlos Correa, Andre Ethier’s single to right field got the Dodgers on the board. That single would be all the Dodgers could muster off the ‘Stros. The Astros kept the Dodg-
ers’ offense, which was one of the best in baseball throughout the regular season, at bay as Morton shut the door, recording the final 12 outs to clinch the first ever World Series title for the franchise. Victory for the Astros sent back positivity and optimism to Houston. The players’ resilience lift-
ed the spirit of the city of Houston, ers, their World Series drought a community still in the process of hits 30 years. The loss stings, but recovering from Hurricane Harvey. with the emergence of rookie Cody Regardless of which team Bellinger, the core of their lineup won, it is impossible to look back set to return (Justin Turner & Coat this seven game series as a dis- rey Seager) and the resurrection appointment, and even the die- of Yasiel Puig, they are, without a hard Dodger fans can tip their blue doubt, one of the favorites to lift the hats to the Astros. For the Dodg- Commissioner’s trophy next season.
Rigorous Rowing in SoCal Makes a Splash By Matthew Basilio | Photograph Courtesy of Warren Lee The Southern California taxing sport that requires a great Oct. 28 against some of the top coastline is known for its beautiful deal of endurance and discipline, rowing teams from the West Coast. beaches and its water-based athletic rowing requires a certain amount of Although Beckman does activities. However, there are a few athleticism and team chemistry in not have a rowing team, there are aquatic sports that are not as popu- order to establish a successful team. a few students who participate in lar—specifically, the art of rowing. One of the most accom- the sport. “You have to be really While high school rowing has long plished rowing teams in South- committed; it's a really demandbeen a staple of the East Coast’s ern California is UCI’s rowing ing sport,” said junior and rowculture, it is not offered at many high team, known as Men’s Crew er Warren Lee. “Definitely during schools around Southern Califorraces, like a 2000 meter, it can be nia, despite its growing popularity. really tiring. You need to push There are two forms of rowing: through it; it really is a mental sweep rowing and sculling. In sport.” Along with taking many sweep rowing, teams consist of four Advanced Placement (AP) classes, or eight rowers, each rower holdLee rows for the Newport Aquating one oar. Depending on their ic Center, while also managing location on the boat, rowers are Link Cru as a student coordinator. referred to as either port or star Living in Southern Caliboard. The port side is also referred fornia, there are many ways to beto as the stroke side; the starboard come active outdoors, like playing Warren Lee tests his endurance as he side is referred to as the bow side. practices for an upcoming competition. basketball, surfing, hiking or beach In sculling, rowers are divided into volleyball, but another way to get in quads, doubles or singles, and each having recently traveled north touch with Mother Nature and get rower holds one oar in each hand. to Folsom, California to race at a great workout is definitely rowing. Penalties can also occur due to false the Western Conference Cham- There are a multitude of teams and starts, which is when a boat’s bow pionships on Lake Natoma. clubs to choose from. The requireRowing is a universal ments are very simple to join most crosses the plane of the starting line before the green light flashes. sport popular among both men rowing teams and clubs. The criteIn addition, a boat must meet the and women. USC’s women’s ria consists of having some rowing minimum weight—not including rowing team travelled to Lake ability, general fitness and condioars, bow number and items not Natoma in California to compete tioning, a good attitude and coachfastened on the boat. Competitors in the Head of the American Re- ability and a weight adjusted ergommust also be weighed while wear- gatta, which kicked off USC’s first eter (erg) score. Rowing is a great ing their racing uniform, without competition of the 2017-18 year. option to become involved with shoes or other foot gear. As a very Their race took place on Saturday, a team and to get a great workout. 27
Varsity Girls Tennis Makes Beckman History as CIF ChampS
By Allison Perez | Photograph Courtesy of Ashley Teng and Nick Friendt On Friday, Nov. 10th, Varsity Girls Tennis won 14–4 against Marlborough High School in the 2017-18 California Interscholastic Federation Southern Section (CIFSS) Division II Championships. The team is currently ranked first in Division II of CIF-SS and was first place in 2016 as well. However, the Patriots placed fourth in the Pacific Coast League (PCL) last year and were unable to continue into the 2016 CIF-SS season. This year was a different story. Beckman reaped five wins and five losses in this year’s PCL with an overall season record of 20 wins to five losses. One of those losses included a 7–11 defeat to Northwood High School. After some minor adjustments, girls tennis faced Northwood again and defeated the Timberwolves 13-5. This win pushed the Patriots into third place in league which qualified them for the CIF-SS playoffs. Before arriving at the championships at the Claremont Club, the girls rid themselves of pre-game jitters during the drive there. “My favorite moment of the CIF tourna-
28
ment—besides the matches—were ing moment for the girls, winning the bus rides because that’s where the championships also made we had lots of fun and bonded as a Beckman history. “It feels really team,” said junior second-ranked good to win the CIF title,” said doubles player Gnamitha Naga- Haj. “This year, we made a lot of nathanahalli. Other players agree history. We beat Northwood [...] that their best bonding moments I feel like we’ve worked so hard.” occurred on bus rides, where the team The season was tough, but holdblared music and celebrated wins. ing the CIF-SS title proved to be The championships show- worth all the struggle and effort. cased the girls’ drive to win, and Now that the CIF-SS season more importantly, their improve- is over, the girls will be receiving ment throughout the season. “In our downtime until next year’s season CIF run, senior Emily Lu and junior begins. For the seniors, like Teng Kayla Cruz ended up going 15-0 at and Lu, this is their final season, number one doubles,” said Varsity but the rest of the team will conTennis Coach Nick Friendt. “Naga- tinue to compete with Patriot pride. nathanahalli and sophomore Isha Shah beat the number two double teams 7-6. Doubles came up big, especially in the playoffs and league.” According to Coach Friendt, the doubles teams struggled this year, but no such distress was evident in their performance at Claremont. Adding onto doubles’ success, singles players fared consistently well throughout the year. “Singles was a strength for us, and senior Ashley Teng finishe[d] her career as the second all-time winner of sets,” said Coach Friendt. “Freshman Kensington Mann had a great year; a dominant year at number two singles and sophomore Christelle Haj played number three singles and was very good and probably our most improved player from last year as well.” Some of the successes of the singles players include Teng concluding this season with a win-loss ratio of 67–9 and an astonishing high school career singles record of 208–32. This record is only rivaled by Beckman alumni Megan Heneghan, who achieved the highest record of 226–18 from 2006-2009. A defining and exhilarat29
TURNS POP CULTURE UPSIDE DOWN by Aarushi Bhaskaran | Creative Commons Combining science fiction, ‘80s synth pop, “Dungeons and Dragons” and Eggo waffles, “Stranger Things” took the world by storm when it was released two years ago. The Netflix original series, created, written and directed by Matt Duffer and Ross Duffer, explores the supernatural in the town of Hawkins, Indiana. Taking on the archetypical setting of a “small town where nothing ever happens,” the series showcases a very Spielberg-esque style to evoke a sense of ‘80s nostalgia. As a boy goes missing, it soon becomes clear that something sinister is afoot, transforming this sleepy town into a highly dangerous place. Season two was released on Friday, Oct. 27, satisfying eager fans of the show. The interpersonal dynamics between the characters are done excellently. Winona Ryder’s performance as Joyce Byers, a desperate, grieving mother, perfectly captures the character. Her relationship with the broken Chief Hopper (David Harbour), is highly compelling. The friendship and loyalty within the younger group, Mike (Finn Wolfhard), Dustin (Gates Matarazzo), Lucas (Caleb McLaughlin) and Will (Noah Schnapps) is heartwarming and recalls children from other science fiction movies in the ‘80s. However, the stunning performance of Millie Bobby Brown, who plays the mysterious Eleven, steals the show. Mike’s sister, Nancy (Natalia Dyer), is soon involved in the strange events that go on in the 30
town, as well as a love-triangle that is entertaining for the audience. While it is clear that Jonathan (Charlie Heaton), Will’s older brother, is the intended love interest for Nancy, the audience finds itself torn as it falls for the unconcerned charm of Steve (Joe Keery), her other love interest. The series was not expected to become such a global phenomenon, adding great pressure for season two to deliver. The new season ties
“However, the stun-
ning performance of Millie Bobby Brown, who plays the mysterious Eleven, steals the show.” up the loose ends and addresses the cliffhanger in the season one finale. The first season ends with Will having been rescued. However, it hints at further problems for him in the following season. Season two confirms this; Will’s woes are far from over and the residents of Hawkins continue to face the looming threat of the Upside Down. The characters are not allowed to speak of the events that transpired in the previous season and are under constant supervision. The new season also has changing relationship
dynamics, exploring new partnerships and groups and bringing in new characters. Among the new characters are Maxine “Max” (Sadie Sink), a possible new addition to the younger group, and her bad-tempered, abusive and racist step-brother, Billy (Dacre Montgomery). Meanwhile, there is romantic friction between Nancy and Jonathan as they deal the loss of Nancy’s best friend Barb and Jonathan’s problems with Will since his return. Joyce also has romantic entanglements this season, with a new boyfriend, Bob (Sean Astin). At this time, Mike continues his attempts to contact Eleven. Her whereabouts remain concealed from the rest of the characters for most of the season, providing her with a separate, independent plot-line. Season two of “Stranger Things” mostly holds up the standards set by its predecessor. The new characters are engaging, while the season also expounds upon the relationships between the old ones. The plot, pacing, atmosphere and characterization are all done excellently. There is little to criticize; however, there are some weaker moments in the season, most notably parts of Eleven’s separate plot-line. The rest of the season is excellent and contains all of the elements that made the series so popular in the first place. Overall, “Stranger Things” has redefined the genre and has carved a place for itself in pop culture.
A CELEBRATION HONORING THE DEAD: DIA DE LOS MUERTOS AT BOWER'S MUSEUM By Keerthi Nair | Photograph by Keerthi Nair On Sunday Nov. 5, 2017, Bower’s Museum hosted a Dia de los Muertos (Day of the Dead) Celebration, commemorating the ancestors of families residing in Southern California. The day consisted of multiple performances that showcased the rich Mexican culture that is an integral part of the community. The celebrations started at 11 a.m. with face painting booths and art stations. Children and adults could purchase and decorate a calavera de azucar, a sugar skull. Traditional foods and drinks like Mexican hot chocolate and pan de muerto, bread of the dead, were sold at the concession stands. At 12 p.m., local organizations performed traditional music and dances. The Orange County Mariachi Kids were up first and played rancheras, a type of Mexican country music played with guitar and horns. The performance was followed by a traditional Mexican dance, Danza Azteca. The dancers wore intricate head pieces decorated with feathers. Inside the museum, classical guitarist Joel Aceves performed songs while guests visited stations that highlighted elements of Mexican culture and heritage. The exhibition enabled visitors to gain a deeper understanding of Mexico’s history and the origins of the celebration. There was also an ofrenda, offerings on an altar, set up near the ticket counter. The altar was covered with notes from relatives
and pictures of loved ones lost. Bower’s Museum was able to showcase this holiday through a variety of performances and stands that were set up around the museum. People of all ages were present, whether they be someone’s great
grandmother or a two-monthold baby. It was a place where people from all over Southern California were able to come together and enjoy a Mexican tradition that dedicated a period of honoring toward their loved ones.
Traditional Mexican performers give their last performance of the day.
31
Rew 32
ind
Rewind is a collection of photos illustrating student life at Beckman. As the semester closes, seniors struggle to meet college application deadlines, juniors suffer from SAT/ACT preparation, sophomores take on harder classes and freshmen try to find their place. As students, we constantly struggle to achieve success throughout our high school journey. We walk with bare feet, picking up pieces of our identity, trying to glue them together. We work with calloused hands and clumsy fingers. We laugh just for the sake of laughing. We heal with Band-Aids and ice packs. Despite all the hardships and struggles, we depend on our family, friends and teachers that support us. We lose ourselves in the good times to savor the taste of nostalgia and to break free from rules and expectations. We join clubs and teams that end up integral to our lives. We create a sense of community, a sense of belonging. As wandering high schoolers, we find ourselves in each other. As you look through this gallery, remind yourself of the moments you shared with others and remember the taste of nostalgia. Photographs were collected by open submission from students at Beckman. To submit photos for next edition, DM us on Instagram @beckmanchronicle or email us at beckmanchronicle@gmail.com By Karolyne Diep
33
ADS 34
ADS 35
AD 36 | https://issuu.com/beckmanchronicle/docs/upload_fall_winter_edition_2017-18_ | CC-MAIN-2018-34 | refinedweb | 10,499 | 61.06 |
I'm at my wits end and hope someone can help me out! I've spent hours and hours with Adobe tech support only to have a hard time understanding the thick accents and none knowledgeable enough to figure out the issue.
And my case was dropped due to schedules; I was busy with Thanksgiving which seemed to make no difference to one who doesn't celebrate it! But I digress...
I really don't remember this happening in Lr 3.5, but it is in Lr 4.1-3 and only effects Sepia toned images I edit for the most part. Just finished about 200 images from an event where I will put them on a DVD to music for purchase. They all looked fine when I finished with them. I exported them as JPEGs, long side 1024, quality 74. The rest of the long story short, iMovie made them look awful, opted for iDVD. First time viewing them all was good, but after multiple viewings after rearranging/adding/deleting images from the project, they look like crap. For some very strange reason, many took on a pinkish/reddish coloration and often will introduce artifacts that simply weren't there in Lr. I viewed my edited images on 2 other Mac's and the issue remains, so not my computer.
I have an 27" iMac, OS 10.6.8 (completed up-to-date), 3.3GHz duo core, 12GB RAM, 960GB of free space on a 1TB HDD. Under warranty the display and recalled hard drive have been replaced, so my Lr 4.3 is completely fresh; not from a backup. With the new HDD recent, I haven't calibrated my computer as of yet. My images are on a WD 3TB with 1.25TB of free space (a tad on the low side but shouldn't effect anything). My camera is a Sony A33 and I shoot strictly in RAW set for sRGB; using ProPhoto in Lr and for sometime now only in Manual.
I would be so appreciative if anyone can explain why this is an issue. I need to finish off this DVD for the event and if I cannot figure this out, it's lost time and money! Admittedly I haven't been doing this for very long, so if it's something I'm doing wrong, I'd really like to know what it is...
Here are my examples:
Screen shot of image in Develop Module
Exported JPEG (coloration/banding is evident in upper right quadrant)
I don't have an answer for you but here's what I'm thinking:
You say that the probelm occurs with JPGS that were exported from Lr with quality 74 and a pixel dimension of 1024 pixels on the long side.
And you say that the problem arose "after multiple viewings after rearranging/adding/deleting images from the project".
Question: Did this "rearranging/adding/deleting images from the project" involve saving these JPGs mutiple times?
I assume you know that the JPG file-format applies a "lossy" compression. "Lossy" means that image information is discarded and cannot be retrieved. Each time the JPG is re-saved more information is discarded, so that after several savings there is so little information left that the images "look like crap" .
You should use higher quality on the jpeg export. 74 will often show banding. Secondly, iMovie and iDVD are absolutely terrible at slideshows. They really lower the quality terribly. iMovie generates a low resolution jpeg copy from your original, introducing scaling artefacts. It also does a terrible terrible job at gamma correcting the images, introducing horrible posterization. You can mitigate the first problem by exporting higher resolution from Lightroom, but that is not fool-proof. The second problem cannot be fixed. It is just really terrible and you would need to use something different like Final Cut or Adobe's Premiere. There really is no fix in iMovie. iDVD is also terrible but for another reason. standard DVDs can only contain a maximum resolution of 720x480 pixels (on NTSC discs). Needless to say, that is really terrible quality and will not work well for your purpose. Unfortunately, if you want a DVD that will play on set top box DVDs there is nothing you can do about that as it is a limitation of the medium.
For your purpose, I would recommend you do one of the two following things.
I. Create the slideshow in Lightroom in the slideshow module. Export to a 1080p (or 720p) H264 mov file (export Video Slideshow in the slideshow menu) while set to music. Simply burn the resulting file on a CD or DVD from the finder. This will play in any current computer. It will also play on many BluRay players with superb quality. The only limitations of this method are that you cannot pan and zoom and that you can only use a single track as background music.
II. Export the images at fairly high resolution (say 2000-3000 pixels on the long side) with output sharpening. Import the exported images into iPhoto. Create a slideshow, set the aspect ratio to 16:9 (in the settings box) set to music to a track or a playlist, define the pans and zooms, and set the transitions (dissolve is usually the best) and hit export. In the dialog, uncheck "Automatically send slideshow to iTunes". Hit the Custom Export button. As format, select Movie to MPEG4. Hit the "options" button, select "MP4" in file format, in video format, select H264, set the image size to 1920x1080 HD. Set the Data rate to something more reasonable than the very low rate it defaults to such as 4000 kbit/s or so. The higher the better but you don't want to go higher than 10 Mbit/s. You can burn this to disc in the finder just like above. This is an onerous method but gives nice pans and zooms and you can set the slideshow to a playlist of songs.
To be honest, this is the first time I've done this. But now that you mentioned it, if I had to save the project after making adjustments to how it looks, perhaps it does resave the images? I didn't even think it would have affect the images, but that makes a lot of sense. Which of course means degraded images with each save. Duh! Now I feel really stupid... Guess this newbie should've done extensive research on what I was doing. But with this a fairly time sensitive shoot (was asked while at the event to do this and never did it before), I just thought it wouldn't be this much work or difficulty. lol I was so pleased at getting better at capturing images correctly, had very little editing to do on this first event batch, when it all went awry, I was just sickened... Thanks web-weaver!
Wow, I really appreciate all the information! Ok first thing I need to do is purchase Final Cut or Adobe's Premiere (I will check them out today!). I'm not sure I fully understand about the DVDs though. Are you saying the burn will be superb if I use FC or AP, but not iM or iDVD? Or are you saying I need a special disk when I burn. Sorry, but I'm clueless to doing this.
I never have used iPhoto, so Lr Slideshow it is. Only reason I didn't use that to begin with it was my understanding you can only use 1 song and this DVD will have about 10-12 used on it. So the event DVD will be more like a production, not just a bunch of images slapped onto it willy-nilly. I want my DVDs to look spectacular and have the music sound good too. So do I stick with Lr's Slideshow or get the better Photo DVD creator software? Thanks so much Jao vdL!
> Are you saying the burn will be superb if I use FC or AP, but not iM or iDVD? Or are you saying I need a special disk when I burn. Sorry, but I'm clueless to doing this.
There are two issues. First is that iMovie is terrible with still images because of a bug that has been in there for as long as iMovie exists and that they refuse to fix. It appears to do two gamma conversions in a row back and forth that result in posterized dark areas and that is a holdover from the days when Macs used gamma 1.8 on their displays. So don't use iMovie for stills. FC and AP don't have this problem, but they both do not solve the second problem. That is, if you burn a video DVD using ANY software suite, you run into a limitation of video DVDs. You cannot get more than single definition video on video DVDs which have a max of 480 lines of vertical resolution (for NTSC DVDs - the US standard). The DVD burning software converts your video (or your series of stills) to MPEG2 video at those 480 lines and puts that on the DVD. The nice thing about this is that you can play the disc using any DVD set top player connected to any TV. The bad thing is that for still images, the resolution is pitiful and your images will look terribly pixelated. This is true for whatever video editing and DVD authoring software you use. What you really need is HD video. Unfortunately, there are no cheap consumer oriented BluRay authoring software packages and especially not for the Mac, so you cannot generate (easily at least) a BluRay disc on your Mac. What you can do is burn a data CD or DVD with a simple high definition movie file on it. This will play on most computers and will even play on some BluRay set top box players. The latter will look superb on a good high def TV. As I explained above, you can generate such movie files directly from Lightroom or using the iPhoto hack. There are also several low-price software packages that do it but I don't have much experience with them. So there is generally no need for an expensive video editing suite.
Once again Jao, I really appreciate your help and time! So to verify I'm understanding all this correctly:
1. Merge all songs into 1 track to use in Lr Slideshow (remembered I could do this this morning... duh!)
2. Arrange images as I like
3. Once I like what I have produced, export as an H.264 .mov file
4. Burn a data DVD
5. This DVD will play on all modern computers (PC and Mac) and many Bluray players. It will look superb!
Last questions:
1. What would I use to burn the DVD? Obviously I wouldn't use iDVD don't think anyways. So get something like Burn? I mean I certainly don't want to degrade what I've created.
2. If I created .JPEG or .TIFF files for my opening/ending credits and Special Thanks To in PS, I can import/insert them where I want within the images in Lr correct?
Your summary is all correct. One small addition that I would make I
describe in another thread about slideshows skipping on this forum
yesterday. I run the exported 1080p movie file through a program called
handbrake that you can find on the web by googling for it. I just use the
default settings on it and you will get a much smaller and potentially more
compatible file out of it with basically no loss of quality. I burn that
file on a USB stick or a CD or DVD. People really seem to like those discs
as they play in many BluRay players on their high def TVs. Many TVs and
bluray players now come with USB slots that you can just stick the USB
stick into and play the H264 movie files from with the caveat that the
default Lightroom export often seems to be of too high a bitrate making the
movie skip frames and stutter. The handbrake trick fixes that.
1. The finder can burn DVDs. Simply insert a blank DVD, choose to let the
finder handle it, drag the file onto the disc icon or into the finder
window that will open and hit the burn button on the opened finder window.
2. Absolutely. You can also use the built in opening and ending screens but
they are not that flexible. With the outside graphics, the trick is to
create a collection that you set to user order and insert the slides at the
beginning and end. Then create a slideshow from that.. That is onerous indeed
and the reason why I also told you about iPhoto which outputs sharper video
without tricks. That said, nobody but crazy photographers notices this but
I do find this a real annoyance with Lightroom as I am a bit of a
perfectionist when it comes to my images.
Sent from my iPad
I really cannot thank you enough Jao! I now think I have a handle on all this. lol Guess the final product will tell me for sure.
One last thing though. You brought up something I didn't even think of. If I were to leave all my images at the size they are now and export at H.264, are you saying it would make them look incorrect by cutting them off? Meaning they will not keep the size, so I need to crop to 1080p? And if that is the case, any chance it will look good in full screen 4:3 (I believe it is) with my images as they are whether that be on a computer or HD TV?
The reason I'm asking is many of my shots would be considered tight. There wasn't much room around to get the compositions large enough to look good at the 1080p size. So many of my images from this event would be a challenge to crop to 1080p and still maintain their look. Hope this makes sense.
Yes I do understand most won't notice the small things that we photogs would, but that said, I'm a perfectionist too.
You only need to crop if you set the slideshow to have images scale to fill
the entire area. The default slideshow layout doesn't do this and you'll
get bars at the side or at the top if the aspect is not 16:9. The default
1080p export gives you video at 1920x1080 pixels, which is the standard
16:9 aspect ratio of hi def TVs and most computer displays nowadays. If you
have a 4:3 tv it is probably not high definition and normal DVD format of
480 lines of resolution is probably as good as it gets on that tv and
you'll probably get the best quality by simply feeding the Lightroom export
into iDVD. Computers of course generally do fine with whatever you feed
them as most displays are better than 720p. So if your audience is people
with high def TVs and bluray players or computers, burn the movie file
directly on a disc or a USB stick. If your audience is people with non high
def TVs and simple (non BluRay) DVD players, iDVD will work fine.
Probably you just need to experiment.
Sent from my iPad
Perfect! Again my many thanks for helping out this stupid newbie to all of this! You're a darlin'!
Jao vdL wrote:.
This has been issue with the LR Slideshow module as far back as I can remember. On Mac systems the Bicubic algorithm (Good) is used to resize images, but then no sharpening is applied to the images. On Windows systems the Nearest Neighbor algorithm (Ugly) is used to resize images with no sharpening applied. Nearest Neighbor images look sharp, but suffer from "jaggies" in diagonal lines. More details here if you're feeling very bored and have nothing better to do:
The only way around this is to follows Jao's suggestion or simply use another application to create your slideshow. Fortunately, LR's Export module does a very good job of resizing and sharpening images to your liking, which can then be reimported back into the LR Slideshow module or another application. If you choose Jao's suggestion make sure the image 'Height' is exactly the height inside the Slideshow template's frame.
Example- The LR 'Widescreen' template fits to the image height, so you would use Export> Width & Height = 1920 x 1080 for making a 1080p video. Other templates that place a border above or below the image will require using a smaller height that matches the actual image size in the template. The Widescreen template with images cropped to the 16:9 aspect ratio provides the largest image size and best overall appearance onscreen.
And the most important issue with the LR Slideshow module concerns using of the 'Stroke Border:'
Make sure that 'Stroke Border' is unchecked since even a 1 px border will cause your Slideshow exports to appear soft. See the above link for more details. I have confirmed that LR4.3 still has this issue!
Well after some frustration, I played my images in LR Slideshow with music playing in background in a playlist, since I haven't joined them yet. Most of the images look great! But...why does there always need to be a but? I have 2 questions:
1. Why do a few still have that reddish/pinkish banding?
2. When I arranged all the images in Library they were exposed how they should be. So why do some of them show as overexposed (a few way overexposed) when played in Slideshow?
Note: I have not exported anything; no images or video. This is just what I see when playing in through Lr Slideshow from the Library.
This is starting to sound like you have issues with your display icc profile. SInce you haven't claibrated your monitor there has to be some ICC profile assisgned to it by your OS, which apparently is not compatible with LR. The quickest test for this is to assign a standard sRGB.icc profile to your display and see what the images look like both inside and outside of LR. I'm not a Mac user so perhaps somone watching here can tell you how.
Your best option is to hardware calibrate your monitor, since the lumincance level and color calibration are probably way off the target 100-120cd/M2 and 6500K color temperature. This is why we here things like, "Why are my prints dark," How come my prints have a yellow tint (or whatever color shift)."
It is extremely rare for the system-supplied display profile of a built in screen on a Mac to be bad. I basically have not seen that yet. You're still better of calibrating but the built in profile is usually not terrible. To check it, go into system preferences, hit the displays button and click on the color tab. If you're not calibrated, it will show Color LCD there, which is the Apple-supplied profile.
The symptoms LadyCharlyTX describes sound different than a bad profile to me, but more like the previews not being up-to-date and some Develop settings not being applied yet. One thing to do before generating the slideshow is to generate 1:1 previews for all the images or to set the standard previews to the largest size and quality and regenerate the standard previews. See if that solves the issue.
Thanks so much guys for replying! Sorry to be a bother, but really appreciate you helping out this newbie!
1. I opened up the displays color profile, ColorSync Utility opened. I ran a verify which showed 6 bad files from Adobe. It says to make a backup of profiles before I run Repair, but I don't know where to find them to do that. Any ideas?
2. As to 1:1 previews. I presume I'm to click through all my 180 images and click on the 1:1 above Navigator and allow them to render when I see the Loading... Then I should view the images to Fit Frame and take them to Slideshow. Is that correct? (Edit: So figured while I was waiting I'd do as I stated with my images to get ahead start. Strange thing; after the Loading... goes away, the images are extremely soft, something are pixelated, then after 1-3 sec. the image is clear. Gosh I'm so frustrated with this!)
3. Prior to getting a new Display and hard drive due to a recall, I had purchased Spyder4Elite. Absolutely hated it due to making my screen look red. Haven't tried it since the new HDD, but reckon I should give it ago. But like you said Jao, the iMac color profile on the 27" desktop has beautiful colors that are pretty darn true.
4. I've had this iMac in to the shop under warranty so many times. I keep telling them the graphics card isn't any good, but all tests show it's good. When I took my WD 3TB EXHDD in and plugged it into a MBP the images I exported straight out of Lr 4.2 or 4.3 that had the pinkish/reddish coloring/banding showed up on it to. But the guy did say he was at a loss as to the cause unless my iMac is possessed! lol That definitely was no help.
To make 1:1 previews:
In the Library module grid view select all the images that need previews, then go to toolbar Library> Previews> Render 1:1 Previews.
What profile is currently assigned to your 27" display?
1. They are being overly cautious. You can do repair on the profiles but as
long as your display profile is not the one it flags you should be fine.
2. No you can create 1:1 previews with one click. Select all your images
and go to the Library menu -> Previews -> Render 1:1 Previews
3. In general, for most displays, after calibration your screen will look
somewhat warmer and more saturated. You usually want to calibrate to D65
(6500 K) which is the default option in the Spyder software. Note that even
though the built in profiles on Macs are not bad, they are still not
anywhere near perfect and the result of the Spyder (if it is not a broken
one) is the one you should trust.
4. That is a little worrisome. I don't really understand what you are
saying here. Are you saying that the same exported images with the same
develop settings that look wrong on your machine look normal on the MBP you
tried in the shop? That could be a bad display profile indeed or a problem
with the videocard. If it tests normally than the card is probably OK and I
would really urge you to calibrate your display using the Spyder.
Thanks trshaner, that was so easy!
I have 3 profiles listed on the iMac; 2 are called Display and other is iMac which is what I use. In the profile it says:
Size: 6296
Preferred CMM: Apple
Specification Version: 2.1.0
Class: Display
Space: RGB
PCS: XYZ
Created: 1/3/13 8:07pm (not sure why that is since I've not touched it nor have I installed any updates)
Platform: Apple
Device Manufacturer: (it's blank)
Device Model: (it's blank)
Device Attributes: 00000000 00000000
Rendering Intent: Perceptual
PCS Illuminant: 0.96420, 1.00000, 0.82491
Creator: Apple
MDS Signature: (it's blank)
Jao, when I look at my B&W images, I expect them to be B&W not have a reddish (a couple of times greenish) tint which is what the previous calibrations with Spyder rendered. Now admittedly, I use Spyder with default settings and this is the first time I've ever calibrated displays, but still shouldn't it be close to normal looking?
A few weeks ago I talk to a DataColor rep at a camera expo. He said the only reason he could think of as to why I keep getting the red tint after calibration with S4E is if the video card was bad. lol Now see why I'm so frustrated and ready to chuck everything?
No the exported JPEGs w/the issues had them when I viewed some on the MBP. They couldn't or perhaps I should say wouldn't, install Lr 4 as a trial for me to test my catalog images.
A well-calibrated screen should appear completely neutral in greyscale
images. Certainly no tint should be visible. If after calibration you get
that, something is seriously wrong with either the calibrator or your
display. A Spyder and certainly the Elite should be able to get your
display completely neutral looking.
Do you have an Apple Store anywhere in your vicinity? They will generally
allow you to run other software to test if you ask nicely (they wipe their
machines completely every day so it's not a big deal to them) and I have
seen Lightroom on their machines before. You can also take one of your
trouble images (export to lossy dng!) and put it in a place where we can
download it to test such as a dropbox public folder.
Yanno I put out the bucks for the Elite thinking how could I go wrong? But even after they replaced the display I couldn't get it to calibrate at all where my B&W were neutral w/o a tint of some sort. Thus why I gave up. Tomorrow, I will try it again. If I get the same thing, I will contact DataColor about a replacement. Although after all the months of hoopla I've gone through, not sure they'll do that.
Yes as I said, I've had this iMac in to the store "a lot"! I spent near 3K for this and since I started photography and working images, it's been one headache after another. Perhaps it's just me not knowing what the heck I'm doing, but I really am fairly computer savvy compared to many and I'm absolutely stumped. Albeit everyone keeps blaming it on the fact that I shoot with a Sony, which is BS in my humble opinion!
Well I just looked over the 1:1 renders and not sure I see that coloration any more. Gosh seems the 1:1 previews cured the issue! But to be absolutely sure, I will run the 40 min. slideshow once again. If I still see the issue, what program/url would I use to drop a bad image into?
Thanks ever so much for your time!!!
Good to hear! In the future you can do the same thing by selecting all of the images for your slideshow (or whatever) and having LR update the 1:1 previews. Don't worry, it will only update those images that have Develop module changes.
Darn it, I spoke too soon! Still have images with the issues on them.
Strange happenings:
1. In Library, all are exposed properly.
2. In Slideshow the exposure is changed to where some have blown out bits.
3. Those images with the issues don't seem to have them while in Develop Module or it is significantly reduced.
4. Some images that didn't show issues in Library did in Slideshow.
5. My head hurts! lol I'm calling it a day and will be back at it first thing in the morning..... I will calibrate with S4E to see if that works. Will send a lossy .dng once I know where.
BTW here's what the verification on bad files said in ColorSync Utility:
Searching for profiles...
Checking 66 profiles...
/Library/Application Support/Adobe/Color/Profiles/AnimePalette.icc
Header message digest (MD5) is not correct.
/Library/Application Support/Adobe/Color/Profiles/ColorNegative.icc
Header message digest (MD5) is not correct.
/Library/Application Support/Adobe/Color/Profiles/RedBlueYelllow.icc
Tag 'pseq': Required tag is not present.
/Library/Application Support/Adobe/Color/Profiles/Smokey.icc
Tag 'pseq': Required tag is not present.
/Library/Application Support/Adobe/Color/Profiles/TealMagentaGold.icc
Tag 'pseq': Required tag is not present.
/Library/Application Support/Adobe/Color/Profiles/TotalInkPreview.icc
Tag 'pseq': Required tag is not present.
Verify done - found 6 bad profiles.
And yet again, thanks so very much for all your help guys!
That all doesn't make sense. The images in Slideshow are rendered from the
exact same previews as are used in Library. They should be identical. Does
the same issue occur when you create a fresh empty catalog and put the
images in there?
I thought images should look the same whether in Library, Develop or Slideshow, but that is not the case for me.
There's not many images in this catalog and to be honest, not sure a new catalog will bring in the VC that I'm working off of. Can you tell me if it will?
>I thought images should look the same whether in Library, Develop or Slideshow, but that is not the case for me.
Library and Slideshow should be identical. Develop is a special case where you can sometimes see very subtle differences in tint to Library, but this does not sound subtle to me at all.
>There's not many images in this catalog and to be honest, not sure a new catalog will bring in the VC that I'm working off of. Can you tell me if it will?
The trick is to select the images (or their VCs) you are working on and to export a new catalog from this (File -> Export as Catalog). Make sure to deselect "Export negative files" and "Include available previews)". Then load the new catalog (you can simply double click on it. The previews will be recreated in this new catalog.
Ok, I did as suggested and created a new catalog from the images of the event. Now they look correctly exposed in Slideshow! YAY! Only a few have the issues of coloration/banding, but to a lesser degree, which I will either try to fix or exclude them from the DVD. I presume my first catalog was somehow corrupted and that was causing me all the problems. Odd though as there were only about 3000 images in that catalog. My Lr 3.5 catalog has over 30K and never had any problems. I'm at a loss why Adobe tech never suggested doing that or why it even happened. Though with all the issues I've had with the Mac, perhaps that inadvertently caused it. Don't know.... it's electronics and anything can happen. lol
Haven't had a chance to calibrate this morning, but will this afternoon and post back if it worked or not. Seems I have another problem arise though. Might just be a setting I missed, not sure. I color labeled all the images for the DVD and put them in a user order in the old catalog that came through in the new. I then created a Smart Collection for the color labeled images, yet they did not retain the order I had them in and there's no user order to choose from. Also if I use the Attribute filter they all show up but in random order. Perhaps my brain is just fried over all this hoopla, but shouldn't it main its order no matter what? I remember losing the order a few times in the old catalog and had to rearrange them. I put them in a stack the last time, but that doesn't seem to work either.
Lastly would you suggest I do the repair in ColorSync Utility for the bad files? If so, how do I backup the profiles so I don't lose them?
Thanks for helping solve some of the issues I was having! You're a life save!!
I suggested it because it sounded like something was wrong with the
previews and this was the quickest way to test that. The previews getting
out of sorts is uncommon but certainly not unprecedented.
For your second question I believe smart catalogs cannot have user order
because they are dynamic (making it possible for images to appear/disappear
at any time) so it is not unsurprising they did not retain the order you
put the images in. If you want a certain order you want to create a normal
collection of the images and drag them around in there,
You can repair the icc files you listed above but you don't have to bother.
They are profiles that Lightroom doesn't use for anything. They probably
came out of a photoshop install and even there are only used for certain
cheesy special effects, so them having (very minor compliance) problems is
not a problem at all.
Once again, I cannot thank you enough for your time and help fixing my issues!! You are indeed an asset to this forum, when you fixed in a very short time what Adobe Tech couldn't in many days and hours!!!
So it was most likely the previews. Awhile back I had thought that, so I purged everything. That didn't help at all. I thought previews would be removed with a purge is that not correct? And I would gather it would be best to maintain the new catalog instead of the old?
Yes I had just remembered Smart Collections don't recognize stacks, but you beat me to the punch and now don't have to edit.
I don't do cheesy editing, so I'll leave them alone. lol
Lastly...
1. Would it be advisable to use the 1:1 previews on import? And what's the best way to maintain them to not get corrupted? As stated above, I have very little on the 1TB internal drive and that's how I always keep them on any computers. Working off EXHDDs for the most part, that way I always have an abundance of free space and little working the RAM.
2. And do you know how to create the main folders I had in Lr 3.5 w/o importing over 30K images along with them in Lr 4.3? Meaning in 3.5 I have all the folders, like many of the edits so kept it and will use images from there. When I import an image say from Italy, subfolder Venice it will not make the folder structure, but instead has just the title of the subfolder w/other eroneous info I don't want to see. The bulk of my images I just throw into 1 folder w/o subfolders, create a new one when it hits about 5K or every few months. Only special events/trips are put into their own folders. Example:
My Originals
.....Ireland
.........Dublin
.........Killarney
.....Italy
.........Venice
.........Cinque Terre
.........Tuscany
.....New Shoot 1
.....New Shoot 2
3. The previous brought up yet another question. I'm working on a keyword structure to import into Lr 4.3 When I started out with Lr 3, I was clueless about so very much! Keywords got out of hand and one the main reasons I didn't use the 3.5 catalog when I got 4 was to not have that happen again. Well of course when you bring over images from 3 w/keywords they appear haphazzardly in 4.
Any way to stop the keywords from 3 from attaching themselves in 4's keyword list? | http://forums.adobe.com/thread/1127705 | CC-MAIN-2013-48 | refinedweb | 5,988 | 72.87 |
|
Tutorial
|
Articles
|
Forum
|
Interview Question
|
Code Snippets
|
News
|
Fun Zone
|
Poll
|
Web Links
|
Certification
|
Search
Guest
Win Surprise Gifts!!!
Congratulations!!!
Top 5 Contributors of the Month
Abhijit
>>
Interview Question
>>
.NET Framework
>>
Post New Question
What is difference between MVC and MVP patterns?
Posted By :
Amit Mehra
Posted Date :
January 01, 2010
Points :
10
Category :
.NET Framework
MVC = Model View Controller
MVP = Model View Presenter
The main difference between MVC and MVP is how the manager (controller or presenter) behaves in the overall architecture. In MVC - controller is driver for the application who manages and controls the requests. In MVP View is the main driver and the first object instantiated in the execution pipeline.
More details can be found here
You can also find related Interview Question to
What is difference between MVC and MVP patterns?
below:...)
What is the difference between a namespace and an assembly name?
A namespace is a logical naming scheme for types in which a simple type name, such as MyType, is preceded with a dot-separated hierarchical name. Such a naming scheme is completely under the control of the developer. For example, types MyCompany.FileAccess.A and MyCompany.FileAccess.B might be logically expected to have functionality.
Application Deployment and Isolation
(More...).
(More...)
What is the difference between NULL AND VOID pointer?
NULL can be value for pointer type variables.
VOID is a type identifier which has not size.
NULL and void are not same. Example: void* ptr = NULL;
| http://www.dotnetspark.com/qa/1212-what-is-difference-between-mvc-and-mvp-patterns.aspx | CC-MAIN-2018-05 | refinedweb | 244 | 50.84 |
>>!
Never understood the drive behind "fixing" the FHS. Systems outside Unix land never had any rules for where to put files. My Atari and Amiga filesystems are a mess. macOS seems to have multiple layers of where crap is intalled, when they could have just left it alone.
There still are use cases for smaller / split up directories, mainly in the embedded world. Realistically, if your path is set up with sane defaults, you shouldn't have to care where the binaries are, and generally you don't care where the libraries are. Most will only care if their crap works, and if they can find the important things, like Documents, Pictures, etc. And most desktop environments set those up on ~\ by default.
AmigaOS has (IMO) a very clean structure. LIBS: DEVS: C: S: L: HELP: FONTS: and so on, they could be on different partitions/volumes but the default was directorys under SYS: (the volume you booted from)
And most programs don't really mess with the system-directories except for perhaps copying a library to LIBS:
Ah yes, the Amiga really is clean, until you start putting your own things on there. Granted I guess this is handled by Linux distributions via the package manager, whereas even Windows has the nasty habit of not sorting through things in any sane standardized way.
One thing I could never figure out for the Amiga is what the difference between a Tool and a Utility. Seems to me they are rather interchangeable, and so when I'd install something to either one, I end up having to check both when I look for the program again.
Of course the Atari ST line pretty much had an AUTO folder, and later versions of TOS supported .ACC files and CPX folder for their control panel modules. But it was the wild west otherwise.
Agree with the 'assign' command on AmigaOS, it's very useful, I also really like that you don't really have drive letters there, much like Unix..
Spoken by someone who's been a Linux sysadmin for the past 20 years. I have thousands of machines to look after, and everything is n+1; I literally couldn't give a single crap about any individual one of them. And nor should I; re-installing takes a few minutes (you have a provisioning system, right?), dicking about with a horked filesystem and/or shared libraries takes hours and may never actually work.
So yeah, if you want to waste your time & your employers money, go right ahead..
Agreed. If a *NIX-based system other than GNU/Linux gets hosed up badly enough to where a reinstall is needed, then you've done something seriously wrong. If you know admin basics you can, for instance, keep a Solaris or *BSD system running and ready for years and never even have to think the word "reinstall." Not all Linux-based systems are so easily hosed, and you can do the same with RHEL/CentOS or even Debian stable (stress on the stable part). However there's no denying that stuff in Linuxland has gotten far more complicated than other *NIX variants even so.
Sadly after many years of using Debian, this is the first time I'm going to have to restructure/reinstall one of my systems... but it's not Debian's fault really, one of the drives in the JBOD/LVM exhibited some random read errors, even though SMART says it's completely fine, and it caused a nasty loop of fsck checks that then completely hosed he file structure.
So some things like /var/lib/dpkg is missing...
Fortunately most of my actual files are still there, I just need to put them in the right place after copying them off. But I wasn't going to deal with this crap a second time, so ended up ordering a proper raid controller!
Right, but this is an article & discussion about Linux, so thought the context was pretty clear.
Okay, maybe there really is somebody out there using Debian/Hurd who cares about the distinction, but they can cope.
Some people won't be happy until everything gets stashed in a giant /System folder ala osx i guess ...
Personally, i think FHS itself is quite well structured, just almost always poorly implemented/adhered to.
I don't want to see it go.
I'd just like the current mess of people doing whatever the f*ck they want to go away.)
This is going to make portability of scripts even worse than it is now across Linux & BSD systems. Even Linux to Linux will be hard now. (not that it's great as it is)
I use which or env
Aside from that, hope that $PATH is set correctly
Edited 2016-11-23 15:26 UTC
...
/lib contains shared objects. Shouldn't contain executable binaries.
/usr does in effect contain things belonging to the user. It contains binaries and objects that are non-critical tot the system, and support the applications/platform that the user(s) run on it.
/home does indeed contain the personal files, profile, etc of the system's users.. "
They want this, but it's not the case now, so for now, it's still as described.
Solaris is a special kind of special lol
Inhad the same misconception about usr.
It doesnt represent "user" but "Unix System Resources". (Maybe it needs a name change eventually.)
the Users folder equivalents in linux are "home/*"
Having all major distributions with a merged usr has benefits in documentation where documentation from one will more likely work on another distribution.
it also allows for simpler "resets" of a system where user configuration can be removed deleting everything but usr.
LOL, well what more should I expect from an OS that puts all its important configuration files in a directory named 'etc', which doesn't actually mean 'et cetera'. I mean, do they TRY to make it a pain in the ass to grok? Do they sit around and have meetings, where they say, 'What else could we possibly do to make the operating system as unapproachable as possible to newcomers?'
And why should newcomers know about the things stored in /etc ? Do newcomers of MS Windows play around the registry ? These things are important only to people that want or need to deal with OS roots, most should not, the ones that really must will learn about those messy things and, BTW, I prefer a lot to play around /etc but, frankly, it is every day more uncommon to do so.
Edited 2016-11-24 12:38 UTC
Do you know the history of UNIX? Shorthand commands and directories were created because teletypes were slow. I prefer the short names. It makes seeing full paths and typing commands much easier. Spelling everything out is great for readability but horrible for usability. Have you used Powershell? It's very frustrating.
As an end user, should I have to?
That still doesn't excuse putting critical system files in a directory named 'etc'. Yes, I know what it stands for, but the point is that it's NOT intuitive. As in, if I had just installed Linux and was poking around, that is the LAST directory I would look in for such files.
... "
The problem with that is USR stands for Universal System Resources.
Edited 2016-11-26 22:39 UTC
There's pointless stupidity in Linux FHS, which could be simplified out. But, it's fairly easy to make whatever adjustments you want. I think this is less of an issue than some people want it to be and that's coming from someone who prefers things to be tidy rather than spread all over the place in a big mess, and isn't impressed with FHS.
For historical reasons it would be correct to remove usr altogether and make sure there would only be /bin, /sbin, /lib.
OsNews have covered this issue before back in 2012:...
And in here it is pointed out that /usr came to be because second disc had to be mounted somewhere and it was full of user stuff (hence the usr). Then at some point the OS grew so big, that it couldn't accomodate all the binaries in /bin and stuff was moved over into /usr. The third disc was then mounted to /home and user stuff moved away from /usr.
So now that everything can fit into /bin then so it should be. The /sbin would be perfect place to put static binaries.
However what about /usr/local? It should live in /opt and so should all the extra binaries and stuff that does not fit into /bin
I agree.
For a Linux system, it would make more sense to remove /usr completely and merge things back to the / filesystem.
/etc for configuration data.
/bin for dynamically-linked binaries
/sbin for statically-linked binaries (like it was originally intended)
/lib for libraries
/var for logs, caches, etc.
/opt for third-party software
/home for all users' data
/local can be created to replace /usr/local for locally compiled and installed software.
For non-Linux systems, I like the FreeBSD layout:
/ everything needed to boot the system to single-user mode
/usr everything needed to use the system after boot
/usr/local software installed via the official packaging system
Nice, neat, simple, and well-separated by function.
Edited 2016-11-23 20:23 UTC
Linux distros messed up the FHS by not paying attention to what must go where.
That is one reason I moved years ago to FreeBSD. Everything has its place in the file system hierarchy under FreeBSD. Base OS goes to one place, user installed apps go into another - things *never* get mixed. Even the 'etc' and 'rc.d' directories are separated between base system and user installed. I always know where to find anything. Long live FreeBSD!
Soulbender,
I agree. From a user point of view it shouldn't matter if software is from a distro's repo, another repo, github or my own. Why should software organization change based on how software was installed or even whether it's a static binary?
It just seems unnecessary and arbitrary to me. Just because the binaries were compiled differently or by someone else doesn't mean we need to segregate them in our hierarchy. Oh these binaries have debug symbols, lets put them in /usr/dbg/bin. These ones are optimized for core2, they go /usr/core2/bin. It's just seems pointless to me.
IMHO the only reason to separate binaries is when there is a technical need for them to be mounted on separate volumes for bootstrapping reasons. However modern distros don't do this.
I should note that in my own linux distro, I tried to create as simple a hierarchy as it made sense for me to do. It's funny because this was before I heard of gobolinux, yet our efforts turned out to be fairly similar. Even though I feel there are better alternatives to FHS, there are definite perks to siding with the majority, and sadly I reverted my hierarchy because changing all the packages was creating too much of a maintenance burden.
Well, I understand that most cases are not like mine but I do like separation for some reasons:
- To not interfere with what comes from main repositories (for some stuff, it is not uncommon that I install newer/development versions, generally on /opt or /usr/local);
- To "avoid" names clash;
- To restrict access with easy, instead of just relying on ACLs (actually, in most cases I use both);
Anyway, from my point of view, if they want to ditch things, /usr should be the logical (?) thing to go. Why opt to the easy kill instead of what seems the more appropriate ? In the long run, what seems hard usually is what gives the best rewards.
acobar,
- To not interfere with what comes from main repositories (for some stuff, it is not uncommon that I install newer/development versions, generally on /opt or /usr/local);
- To "avoid" names clash;
Well, I understand your thinking here, but it makes a fundamental assumption that I quite dislike - that software should be distributed from one main repo.
Now, this happens to be true for linux distros, but it's a poor outcome for 3rd party software distrobution.
1. because linux distros only have one main repo, they've never had to seriously solve the namespace problem.
2. because they never solved the namespace problem, it can be very difficult to use alternate repos.
It's not merely a theoretical problem, but it causes me some major annoyances at times...
So I concede you are right about the namespace clash, but rather than complicating the file system hierarchy, I think it makes more sense to blame the design of package management in linux that almost dictates the use of a centralized repo. Rather than fragment the filesystem around the limitations of our package management, I'd rather fix the package management.
Personally, I like the solution where a package is extracted into a single smpty directory with no conflicts. Then instead of having files scattered across the system directories and potentially conflicting with other package files, symlinks would be added to system directories like /bin. Configuration would still go into something like /etc.
When designing my distro, I tried addressing more subtle use cases, but at a high level this was the general idea.
Installing and switching versions is a simple matter of running a manifest script to create/reinstall/verify the symlinks. Completely removing a package is trivial, even by hand - delete the one directory and then scan for bad symlinks pointing to it. Determining which package and source is behind a binary is a trivial matter of following the symlink. No files will ever be left behind from a botched install or removal since every single file in system directories can automatically be accounted for by following the symlinks. It's trivial to install multiple versions of a package into different directories and add version specific symlinks if that's appropriate. It's absolutely agnostic as to the origin of the package, be it one or more repos, git, tar file, etc. If packages need to be installed across multiple disk volumes, with symlinks that's not a problem at all. If a package is broken, a simple "diff" will tell you if the installed files are authentic or corrupt.
Once you have a package management solution that offers these benefits, the need for FHS system directory hierarchy to give a repository it's own space becomes obsolete.
I'm a bit unclear with what your are doing. But if for some reason you want to apply different permissions to each repo/source, you could give each repo it's own target directory and set permissions recursively for that repo's directory. It wouldn't affect the symlinks in the system hierarchy.
To illustrate:
/pkg/repo1/package_a <- ACL pattern 1
/pkg/repo1/package_b <- ACL pattern 1
/pkg/repo2/package_c <- ACL pattern 2
/pkg/repo2/package_d <- ACL pattern 2
/bin/a -> /pkg/repo1/package_a
/bin/b -> /pkg/repo1/package_b
/bin/c -> /pkg/repo2/package_c
/bin/d -> /pkg/repo2/package_d
To be honest, I don't really care that much about the specific names of directories. My interest is for it to be cleaned up, consistent, and not have a design bias for a centralized repo.
Edited 2016-11-26 18:06 UTC
For the, usually big, applications that need to have more flexibility there are two solutions currently: Ubuntu snap and Red Hat flatpak. I think they solve most of the problems you have and throw in sandboxing as a bonus.
For the other things there is nothing best than a central repo on my humble opinion: you can upgrade to a new version of your distribution in one go with a few commands.
You would if you had to support different users with different versions of applications with different glitches for which you collect fix recipes.
acrobar,
It's actually possible that I'd like them, but until the distros officially switch it doesn't do me much good in solving the problems with the official repos, unfortunately.
I've done this, and I used to be a fan of it. But after years of being a sysadmin I've grown to hating it because the stable repo can often be outdated and testing can sometimes have regressions, meaning neither stable nor testing are appropriate for the whole system. This wouldn't be a problem if the repos weren't mutually exclusive, but they aren't made to work together and each repo will fight over the dependencies for the whole system.
I've been stuck in a scenario where I absolutely needed a new version, yet the only way to do it (using the official managed repos) was to upgrade the entire system to the testing repo. At least that's easy enough to do. Once I thought everything worked, but other clients using the shared server started calling about problems with unrelated services, and sure enough there were regressions as a result of the testing repo.
Now just to be clear, I take responsibility for switching repos and things not working. However the distro's centralized repos unequivocally deserve 100% of the blame for not being able to cope with selecting a single package from the new repo. It has bitten me several times over the years and I've grown very wary of the exclusive choice between updating the whole system at once to a non-stable repo, versus having to manually compile & manage software for the stable repo.
Sorry about the rant, I know some people can get by entirely within one repo and for them it's easy, but it can cause lots of stress when we have to stray from the central repo. Just because many linux distros have a central repo doesn't mean we should ignore the problems associated with installing software from other sources. Windows, for all it's other problems, doesn't have this problem at all.
Edited 2016-11-26 22:12 UTC
I have seen this happen only when needing a new version of some program with a hard dependency on new features of newer versions of libc and/or glib, most of other things are not so dramatic.
For many years I kept updating a huge bash script that I assembled to easy the pain of having to support a combination of Slackware, Debian, FreeBSD, CentOS and openSUSE. It was used to assembly the native packages (using rpmbuild, dpkg, pkg*, etc). I gave it the name of PASM - Package ASsembly Mule. It is ugly as hell and I don´t care about it anymore (almost, it makes my life easier when needed, even if it is seldom used). I also lowered my attention deficit disorder duo to too much interest span by sticking only to Red Hat/CentOS, openSUSE and Debian.
The last thing to strike me happened with Zeal, a document browser counterpart of Kapeli´s Dash. When trying to browse Qt5 Documentation (and one another I don´t remember) Zeal would segfault. Analyzing the code gave me no insights of what was wrong. Firing GDB gave me hints, even though it is a Qt5 application (confirmed by ldd), for some strange reason on openSUSE 13.1 the loader/dynamic linker would link it against libQtGui instead of libQt5Gui dynamically, no matter how I tried to enforce only Qt5 libs by ld options, so I may understand your pain. My last resort was a script that renamed /usr/lib64/libQtGui.so.4 to something else, fire Zeal (by nohup) and then rename libQtGui back. Ugh! (but works !)
Corner cases on open software are really a hell paradise.
Anyway, for all its faults, I still prefer a centralized repository for most things. The other cases are appropriately handled now with Flatpak and Snap in my opinion.
acobar,
Well, part of the problem is that even if a simple program doesn't have a hard dependency on new features, the repo may wrongfully think it does anyways because the dependency link in the repo says so.
This may be a dumb suggestion for your case, but have you made sure the library cache is fresh by running ldconfig?
Have you tried setting the environment paths manually for your binary or manually loading the correct library as a parameter to the linux loader in a script? That's not pretty either, but it seems better than actually updating the filesystem between invocations.
Or is a binary actually calling dlopen at run time as opposed to link time?
If so, you could probably use some hacking techniques to intercept the call and redirect it, but like you said: ugh!
If it were me, I'd take a look at "strace" to see exactly how it was trying to load the dependency. But anyways, it sounds like you already have a workaround, and sticking with what works is probably easier than going through more pain to get it working the way it should, haha.
I'm not quite suggesting we get rid of all centralized repos, only that the package management should support installation of packages from multiple repos without conflict. Different repos should have the option of being in different namespaces such that updates in one don't force undesirable updates and removals in another repo, which can be infuriatingly wrong.
These namespaces would solve my repo problems without causing new problems for existing repos.
Edited 2016-11-27 05:51 UTC
Some people might enjoy an article written on osnews about the same topic from 2012:
Understanding the /bin, /sbin, /usr/bin, /usr/sbin Split
I don't get how the FHS is a problem. When has the current directory structure caused problems? How is throwing everything into one directory solving anything? It's usually just people who recently switched to Linux who complain about this because they are not used to it. It amazes me that so many people complain about changes like systemd, which actually fix known issues, but insist on fixing something that isn't broken.
Long story shorted: many programs on /sbin and /bin have a dependency on things on /usr/*bin or /usr/lib* and if your /usr resides on a different partition and it cannot be mounted on boot you would not be able to start recovery.
See:
"...
""
""
And, finally, if things are already merged, why pollute the root with not really needed /bin and /sbin ? | http://www.osnews.com/comments/29509 | CC-MAIN-2018-13 | refinedweb | 3,789 | 60.35 |
Pages are positoned in the hierarchical site structure and can be either regular pages, or archive pages. More information about Archive Pages can be found in the article Archives.
The content templates of your Pages are designed using Page types. Remember that this does not necessarily mean that all pages of the same Page type has to be rendered the same way, it simply means that they have the same content structure.
The preferred way of importing Page types is by using the
Piranha.AttributeBuilder package. With this package you can directly mark your Page models with the Attributes needed.
Let's first look at how the simplest possible page type could look like. This page types doesn't provide anything other than the main content area which is made up of blocks.
using Piranha.AttributeBuilder; using Piranha.Models; [PageType(Title = "Simple Page")] public class SimplePage : Page<SimplePage> { }
To import this page type during your application startup you add the following line to your
Configure method.
using Piranha.AttributeBuilder; var builder = new PageTypeBuilder(api) .AddType(typeof(SimplePage));)); builder.Build();
Defining an archive page is equally simple, the only additional thing you need to do is to set
IsArchive = true.
using Piranha.AttributeBuilder; using Piranha.Models; [PageType(Title = "Simple Archive", IsArchive = true)] public class SimpleArchive : Page<SimpleArchive> { }
The page type is imported in the same way as regular pages.
The
PageTypeAttribute has the following attributes available for configuring the behaviour of the Page Type.
[PageType(IsArchive = true)]
If the Page Type should be used as an Archive Page. The default value of the property is false.
[PageType(Title = "Simple Page")]
The display title to show when working with pages in the manager interface. If this property is omitted the class name of the Page Type will be used as title.
[PageType(UseBlocks = false)]
Whether or not the main block content area will be used. This can be very useful for pages displaying information that should be fixed in its formatting and you want to limit the content to the pre-defined regions.
[PageType(UseExcerpt = false)]
If the excerpt should be available when working with pages in the manager interface. The default setting is true.
[PageType(UsePrimaryImage = false)]
If the primary image should be available when working with pages in the manager interface. The default setting is true.
By default, all page request are rewritten to the route
/page. Since you want to load different model types for your pages, and often render them by different views or pages you need to specify which route should handle your Page type. Let's say we have a page that also displays a hero.
using Piranha.AttributeBuilder; using Piranha.Extend; using Piranha.Extend.Fields; using Piranha.Models; [PageType(Title = "Hero Page")] [PageTypeRoute(Title = "Default", Route = "/heropage")] public class HeroPage : Page<HeroPage> { public class HeroRegion { [Field] public StringField Title { get; set; } [Field] public ImageField Image { get; set; } [Field] public TextField Body { get; set; } } [Region] public HeroRegion Hero { get; set; } }
By adding the
PageTypeRouteAttribute to your page type, all requests for pages of this page type will now be routed to
/heropage.
Let's say we would also like to use our Hero Page as the Startpage of the site, but we might want to handle it differently by adding some content from another system, or send it to a different view. We can then just add a second
PageRouteAttribute to our class.
[PageType(Title = "Hero Page")] [PageTypeRoute(Title = "Default", Route = "/heropage")] [PageTypeRoute(Title = "Start Page", Route = "/startpage")] public class HeroPage : Page<HeroPage> { ... }
By adding a second route the page settings in the manager will now show a dropdown where the editor can select which route the current page should use.
In some cases you might want to create a special kind of page that should include a completely different editor. An example of this is Archive Pages which includes a completely different editor for handling the post archive of the page.
A page can have any number of custom editors which can be acheived by adding multiple editor attributes to the page type.
[PageType(Title = "Product Page")] [PageTypeEditor(Title = "Products", Component = "product-editor", Icon = "fas fa-fish")] public class ProductPage : Page<ProductPage> { ... }
Custom editors are implemented as global
Vue components and are responsible for handling their own data, both loading and saving. Custom components are added to the manager by registering custom javscript resource in the manager module. You can read more about this in Resources in the section Manager Extensions.
You can secure pages by adding permissions to them that the current user must have in order to have access to them. By adding one or more permission to a page also means that the user has to authenticated in order to access the page. This can be done both from the manager interface but also from code.
myPage.Permissions.Add("WebUser");
You can read more about how to add custom permissions to your application in Authentication.
Now that you know the basics for setting up pages, you should continue to read about the different components available for creating more advanced content. Information about this can be found in the articles Blocks, Regions and Fields. | https://piranhacms.org/docs/content/pages | CC-MAIN-2020-40 | refinedweb | 861 | 62.98 |
I'm on ubuntu 15.04 (gnome) 64 bit and am trying to use dubs-cpp package, however, I don't seem to be able to get it work by getting this error:
/usr/include/
fatal error: core/signal.h: No such file or directory
#include <core/signal.h>
I tried both fetching the sources directly and using bzr.
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- Ubuntu dbus-cpp Edit question
- Assignee:
- No assignee Edit question
- Solved by:
- actionparsnip
- Solved:
- 2015-08-05
- Last query:
- 2015-08-05
- Last reply:
- 2015-08-05
Thanks actionparsnip, that solved my question.
http://
packages. ubuntu. com/search? searchon= contents& keywords= signal. h&mode= exactfilename& suite=vivid& arch=any | https://answers.launchpad.net/ubuntu/+source/dbus-cpp/+question/270014 | CC-MAIN-2021-17 | refinedweb | 117 | 57.37 |
Getting started with Grails and Extjs
This article describes how to get started using extjs with your grails app. Since the plugin is deprecated because of the GPL license fiasco, I decided to write my own simple grails script to handle installing extjs for the team for 2 main reasons:
1) Prevents me from having to commit 500+ files into SVN for every new version
2) Makes it easier to upgrade to newer versions of extjs in the future
We have 2 existing internal applications that use extjs and every upgrade I am kicking myself for not using maven to extract the extjs zip file. So with our new grails app I wanted to not make the same mistake.
Install Extjs
1) Download your choice of extjs. Note as of version 2.1 extjs is under the GPL license. Meaning if your project isn't open sourced under the GPL or an internal company app then you need to use the 2.0.x versions (2.0.2 is the latest). In our case I downloaded version 2.2.
2) Copy ext-2.2.zip into your grails plugins directory
3) cd into your grails application
4) Add the zip file to svn (or whatever): svn add plugins/ext-2.2.zip
5) run grails create-script install-extjs
6) Add the script to svn: svn add scripts/InstallExtjs.groovy
7) Modify the InstallExtjs.groovy script and add the following GANT code.
8) run grails install-extjs
9) Exclude the unziped ext directory: svn propedit svn:ignores web-app/js Exclude the folder ext.
Test it out
Now that you have extjs installed you can copy one of their examples into the grails web-app directory and update the links.
1) Open up the ext-2.2.zip file again and extract the array-grid.html and array-grid.js files from the examples/grid folder to the grails web-app directory.
2) Modify the array-grid.html file. Update the relative links for css and javascript by replacing ../../ with js/ext/. For example, href="../../resources/css/ext-all.css", should now be href="js/ext/resources/css/ext-all.css"
3) Open up the array-grid.html file in your browser.
Now you have extjs installed with a simple example on how to use it. Next I would like to create a grails controller that returns JSON to populate a simple grid.
6 comments:
Thanks for the helpful post - I hadn't considered automating this before.
You mention you want to create a controller to return JSON to populate a grid. Good news, your controller method will look something like this:
def list = {
def users = User.list()
render ([total: users.size(), rows: users] as JSON)
}
Why not just host the scripts and resources on a web server instead of your project? Do they really need to be included with the project at all (zipped or not)? I mean, it wouldn't work for open source projects but should be fine for internal apps.
I believe IDEs (like IntelliJ) need the source to offer things like code completion.
Thanks James, great post! It helped me a lot!
Thanks for the post. It was a great help. See for our site using grails and ExtJS (+Sencha Touch)
How to Download ExtJS API Documentation and Browse it Offline Locally | https://jlorenzen.blogspot.com/2008/08/getting-started-with-grails-and-extjs.html | CC-MAIN-2020-16 | refinedweb | 557 | 67.76 |
you are saying that you are using multiple modules. If so you'll
have to avoid the Java namespace clashes by using a different package
name for each module. Compile your C++ code with -DSwig= which will turn
the Swig namespace into an anonymous namespace. This should then avoid
the namespace linker clash.
There is one line in the CHANGES file on -fvisibility, but you are best
off consulting the gcc documentation.
William
Steven Sharp wrote:
> Since the classes are in different classes, modules, and made
> separately, I don't see anyway for SWIG to be able to tell that
> there's a duplicate class name. The errors came at link time (this is
> running on Linux/fc4).
>
> The renamed classes would have ended up looking like
> SomethingSuperLongClassNameCommonClass which didn't seem extremely
> usable. I added a post-SWIG processing rule to my makefile to get the
> namespaces from the .i file and insert them into the _wrap.h and then
> prepend them to the _wrap.cxx SwigDirector_ class names. This *seems*
> to work or am I borking something that I don't know about yet?
>
> Also, is there any documentation for the fvisibility flag? I've seen
> it mentioned in several postings but I haven't found anything in the
> docs about it yet.
>
> Thanks,
> Steven
>
> On 7/7/06, William S Fulton <wsf@...> wrote:
>> Steven Sharp wrote:
>> > On 7/5/06, William S Fulton <wsf@...> wrote:
>> >> Steven Sharp <steven.k.sharp <at> gmail.com> writes:
>> >>
>> >>> Is there any way to put the C++ SwigDirector_Blah classes in a
>> >>> namespace? I've tried putting the %feature(director) Blah inside the
>> >>> namespace tags with still no luck. I've got a bunch of classes that
>> >>> are the same class name in different namespaces but I run into link
>> >>> errors with duplicately defined names.
>> >>>
>> >> SWIG flattens the namespaces into one namespace. Have you tried
>> %rename? Also
>> >> you can target namespaces for applying any feature including
>> directors:
>> >>
>> >> %director Namespace1::Blah;
>> >>
>> >> or turn it off for some classes:
>> >>
>> >> %nodirector Namespace2::Blah;
>> >>
>> >> If this doesn't solve your problem, post a complete standalone
>> interface file
>> >> showing the problem.
>> >>
>> >> William
>> >>
>> >
>> > William,
>> >
>> > Thanks, that's what I'm doing presently. The problem is that the same
>> > class name appears in multiple namespaces in the C++ code. e.g.,
>> > foo::baz and bar::baz, baz being a template instantiation. When SWIG
>> > does its thing, I end up with multiple definitiions of
>> > SwigDirector_baz.
>> >
>> > If I could put the SwigDirector_baz into the foo and bar namespaces,
>> > then I wouldn't need to rename baz to be foo_baz and bar_baz. Or is
>> > rename the only safe solution for now, barring post processing of the
>> > *_wrap.{cpp,h} files to add in the namespaces?
>> >
>> >
>> Well %rename has to be used in SWIG if you have 2 classes with the same
>> name in different namespaces. SWIG should error out if it finds such a
>> case. Doesn't it do that for you? I've checked that %rename works for
>> directors. Here is a complete example:
>>
>> %module(directors="1") director_namespace_clash
>>
>> %rename(GreatOne) One::Great;
>>
>> %feature("director");
>>
>> %inline %{
>> namespace One {
>> struct Great {
>> virtual void superb(int a) {}
>> virtual ~Great() {}
>> };
>> }
>> namespace Two {
>> struct Great {
>> virtual void excellent() {}
>> virtual ~Great() {}
>> };
>> }
>> %}
>>
>>
>>
>> William
>>
>
On Mon, 10 Jul 2006, Reed Hedges wrote:
> I'd like to do this in Python:
>
> obj.addCallback(lamba: print "hello, world.");
>
>
> I have two problems:
>
> One is that I'm not really sure what kind of object 'lambda' creates.
> PyMethod_Check(obj) fails. Is it a PyCFunction?
lambda creates an object of type 'function', just like def, so I guess you
want PyFunction_Check. But you might want to do something more general so
that any callable object can be used (e.g., a bound method or an instance
of a class that implements __call__), in which case PyCallable_Check may
be what you want.
>?
This gets asked a lot. Functions with default arguments are, by default,
handled like overloaded functions, so you need to provide a "typecheck"
typemap (see).
As an alternative you could use %feature("compactdefaultargs") (see), though
this won't help in cases of actual overloading.
Josh
I am trying to wrap a new package and swig 1.3.29 generated a line of
code that won't compile:
int res = SWIG_AsCharArray(argv[2], (char *)0, );
Obviously, the third argument is missing. The error I get under Mac
OS X using gcc 4.0.0 is
In function 'PyObject* _wrap_ParameterList_set(PyObject*,
PyObject*)':
error: expected primary-expression before ')' token
If this is not enough to track the problem down, I can help to
isolate it better. But I was hoping this might be enough to figure
it out.
Thanks
** Bill Spotz **
** Sandia National Laboratories Voice: (505)845-0170 **
** P.O. Box 5800 Fax: (505)284-5451 **
** Albuquerque, NM 87185-0316 Email: wfspotz@... **
Hello, I'd like to have a typemap that converts a Python function object
(created using the 'lambda' expression) into a special "Functor" class in the
C++ library being wrapped.
For example, the C++ library has a method this:
void addCallback(Functor* f, bool flag = false);
You would use it like this in C++:
void function() {
puts("hello, world.");
}
...
obj.addCallback(new FunctorWithGlobalFunctionPointer(&function));
I'd like to do this in Python:
obj.addCallback(lamba: print "hello, world.");
I have two problems:
One is that I'm not really sure what kind of object 'lambda' creates.
PyMethod_Check(obj) fails. Is it a PyCFunction??
Thanks
Reed | https://sourceforge.net/p/swig/mailman/swig-user/?viewmonth=200607&viewday=10 | CC-MAIN-2018-22 | refinedweb | 901 | 65.73 |
We can read test data from an excel sheet and use it to test Facebook login. To start with, we have to load the excel workbook with the help of the load_workbook method.
The path of the workbook is passed as a parameter to this method. Then we have to determine its active sheet by applying the sheet method on the loaded workbook object.
To test the Facebook login page, we need to have the email and password of the user and this data is saved in an excel workbook as shown in the below image. The email – abc@gmail.com is at cell (address: row=2, column=1) and password – test123 is at cell(address: row=2, column=1).
To read these data, we shall use the cell method on the active sheet. The row and column numbers are passed as parameters to this method. Then the value method is applied on those particular cell addresses to read values within it.
import openpyxl from selenium import webdriver #configure workbook path b = openpyxl.load_workbook("C:\\Data.xlsx") #get active sheet sht = b.active #get cell address of email within active sheet e = sht.cell (row = 2, column = 1) #get cell address of password within active sheet p = sht.cell (row = 2, column = 2) #get values email = e.value passw = p.value #set chromodriver.exe path driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe") driver.implicitly_wait(0.5) #launch URL driver.get("") #identify element l = driver.find_element_by_id("email") #enter email obtained from excel l.send_keys(email) m = driver.find_element_by_id("pass") #enter password obtained from excel m.send_keys(passw) #get values entered s = l.get_attribute("value") t = m.get_attribute("value") print("Email is: ") print(s) print("Password is: ") print(t) #browser quit driver.quit() | https://www.tutorialspoint.com/how-to-read-test-data-from-an-excel-sheet-and-use-it-to-test-facebook-login-in-selenium-webdriver | CC-MAIN-2021-17 | refinedweb | 291 | 61.83 |
All XAML dialects support a feature that is used mostly by accident, yet goes overlooked for its vast potential: Attached Properties. In simple terms, these are properties that can be arbitrarily attached to just about any object in the UI hierarchy and allow attaching data that needs to be associated and stored with those objects. In more sophisticated terms, Attached Properties are a feature that allows developers to extend UI objects with arbitrary features in very powerful ways.
But let’s start at the beginning: In systems such as WPF (Windows Presentation Foundation) or WinRT (what the industry generally refers to as the "XAML Dialects"), UI elements and controls behave much like you would expect them to behave. This includes elements with properties that can be used to change the appearance or behavior of said elements.
What may not be obvious to the casual observer is that in XAML, these properties are slightly different from "normal" properties. These XAML properties are implemented as "Dependency Properties." The main difference is that while properties in .NET are a language feature that combines a get/set pair of methods with some internally stored value (such as the text of a TextBox), Dependency Properties, while still being accessible through get/set, store their values in an external, highly optimized place.
Why such a difference? One reason is the matter of efficiency. Systems like WPF instantiate lots and lots of objects, all of which have lots and lots of properties. An efficient way is needed to store those properties, especially since many of those properties simply sit there with their default values (a scenario that received special efficiency tuning in the Dependency Property system). The second reason for Dependency Properties is that WPF (and other XAML dialects) requires features that standard properties don’t provide. One of these is built-in change notification. Another is the ability to define further meta-data, such as what the default value is, or how the property is to behave in binding scenarios. And those are just a few features.
Fundamentally, the Dependency Property system is a way to first define a property and register it with the overall Dependency Property system, and then assign values to those properties, at which point the system maintains that value for each object. You can think of this system as an external table of named values that go with a certain object. In object-terms, this is a bit odd because the values that go with an object are not encapsulated in the object anymore. Nevertheless, this is a very useful and efficient system, which, to most users, ends up looking very similar to any other property system.
As a simple example, let’s imagine a TextBox control that defines a new property to assign a security ID to each instance of the control. This hypothetical property could be used to identify the object uniquely and attach some security feature (such as making the control read-only based on certain criteria, which I won’t further explore here).
The following code snippet shows how you could subclass the TextBox and add that property:
public class TextBoxEx : TextBox { public string SecurityId { get { return (string)GetValue(SecurityIdProperty); } set { SetValue(SecurityIdProperty, value); } } public static readonly DependencyProperty SecurityIdProperty = DependencyProperty.Register("SecurityId", typeof(string), typeof(TextBoxEx), new PropertyMetadata("")); }
Note that there are two parts to the definition of a Dependency Property. The property itself is defined as a static, read-only member of the class, which registers itself with the Dependency Property system by providing details such as the name, the type, the class the property goes with, and the default value (as well as other optional parameters). Then, there is a standard .NET property, which uses the GetValue() and SetValue() methods to store and retrieve the values from the Dependency Property system. This is what makes the property look like a normal property to the users of this system. (For this to work, the object must inherit from DependencyObject in some form, as every UI element does.)
As a result, you can now use the new object in XAML like this:
<my:TextBoxEx
In order to reference the new class, you must declare an XML namespace to point to the .NET namespace the class is defined in. In this example, I chose to call that namespace "my."
Using this approach, you can now save the desired security ID into the table of values the Dependency Property system keeps. In other words, this value is now stored externally and associated with this TextBoxEx instance. You can now write some code that retrieves that value (either internally in that class or in some other place) and does something useful with it.
Using Attached Properties, I’ve been able to help many a customer simplify their projects greatly yet make them more efficient and flexible.
All this raises an interesting question: If you can externally store this property value with an object instance, then why can’t you externally store arbitrary values and associate them with any object without the need to subclass an object? After all, the system seems to be a way to map named values to object instances ("store ‘abc’ as the security ID with this textbox"). And in fact, this is exactly what Attached Properties allow us to do. They’re really just a special case of Dependency Properties (or looking at it from the other side: Dependency Properties are artificially restricted to store values for specific objects, but the overall system can do more).
A First Attached Property Example
To take the security ID example and turn it into an Attached Property, you first need a place to define the property. You still need a class for that. You could use a subclass of the TextBox class just like in the prior example, but I would like to make things a bit more generic and create an Attached Property that can be attached to any UI element, not just text boxes. After all, this kind of flexibility is one of the benefits of the Attached Property paradigm. For this purpose, I created a new class called "Ex." Note that this class still needs to inherit from DependencyObject for the required functionality to be available:
public class Ex : DependencyObject { public static readonly DependencyProperty SecurityIdProperty = DependencyProperty.RegisterAttached( "SecurityId", typeof(string), typeof(Ex), new PropertyMetadata("")); public static string GetSecurityId( DependencyObject d) { return (string) d.GetValue(SecurityIdProperty); } public static void SetSecurityId( DependencyObject d, string value) { d.SetValue(SecurityIdProperty, value); } }
There are two main differences between this example and the previous one that defined a standard Dependency Property. One difference is that this example calls RegisterAttached() rather than just Register(). The other difference is that this example doesn’t have a standard .NET property anymore, and instead, there now are static GetSecurityId() and SetSecurityId() methods. That’s because you will never set these properties on the Ex object directly, but instead want to store this value with other properties. It wouldn’t make sense to create a standard property at all.
You can now use this new Attached Property on any element you want. Consider these examples:
< TextBox my:Ex. < Button my:Ex. < TextBlock my:Ex.
What happened here? How did the C# property definition all of a sudden enable you to do this? Well, this is just special syntax that XAML allows. Whenever you refer to another class’ property within an element definition, XAML verifies that such a class ("Ex") exists, with the defined property ("SecurityId") available as a pair of static SET and GET methods of the required name. In this example, assigning this value calls the static Ex.SetSecurityId() method to assign the desired value and store it away for the specified object instance (the one the property appears to be set on, like the TextBox in the first line). You could argue that this is all syntax trickery, which is true, but it sure ends up as a very convenient system.
Attached Properties provide a much simpler and more flexible way to extend XAML than just about any other technique you may have heard of.
Now that you have these values stored away, you can use them for a variety of things. You could create control templates that pick up on this value for instance. Or, you could write code that accesses the value. The following code snippet goes through a parent container’s list of child items and checks the SecurityId property:
foreach (UIElement child in Children) { var id = Ex.GetSecurityId(child); if (id == "a1") { // do something significant here child.Visibility = Visibility.Collapsed; } }
It’s interesting to note how convenient and handy all of this is. First, you were able to create a property that is now available on any object. You didn’t have to subclass all conceivable UI elements and then add this property. Instead, you created it only once. (There is no danger of naming conflicts or ambiguity; because the property is assigned to the Ex object, other classes that might define a property of the same name can’t cause conflicts.) You were also able to use this property very easily on any type of UI element. You didn’t have to check the type of the child element and then write special code for each and every one of the possible types. You also didn’t have to create any interfaces. Instead, it’s possible to handle all items generically. Furthermore, you don’t have to worry about the property not being set, because at the very least, the call to the GetSecurityId() method returns the property’s default value.
The following snippet shows an example of a TextBox control template that uses this new property (this particular example is a WinRT XAML example):
<ControlTemplate TargetType="TextBox"> <Grid Background="White"> <ScrollViewer x: <TextBlock Text ="{TemplateBinding my:Ex.SecurityId}" VerticalAlignment="Bottom" HorizontalAlignment="Right"/> </Grid> </ControlTemplate>
This is all nifty stuff. It’s also something you may have inadvertently used before, since the different XAML versions use this technique extensively. For instance, if you are using a Grid layout element, you may have found yourself setting grid rows and columns on other elements:
< TextBox Grid.
So now you know how this works. (Some people think these properties are available for controls that are contained in Grid elements. This is not the case. You can set these attached properties regardless of whether an element is inside a Grid or not. It may not have any affect.)
Attaching Behavior
All of this is nice, but it doesn’t quite reach the "it’s magic" level of awesomeness yet. I would venture to guess that most developers never realize that there’s quite a bit more to Attached Properties, yet I’ve only scratched the surface. Where things get really interesting is when the assignment of values goes along with added behavior.
There’s a tiny gem of functionality in Attached Properties that provides a huge degree of control and power: When an Attached Property is set, a change notification method can be triggered. This change notification fires every time the property value changes and it passes the new value as well as a reference to the object the property value is associated with. This allows you to add any code conceivable, which is very powerful indeed.
Let’s consider a simple example. All XAML dialects allow defining a Grid layout panel. Grids can have rows and columns defined. Here’s an example:
< Grid > <Grid.RowDefinitions> <RowDefinition Height="*"/> <RowDefinition Height="Auto"/> <RowDefinition Height="25"/> </Grid.RowDefinitions>
This is very useful. It’s also very painful, when you consider how much code it takes to set up this simple arrangement. Furthermore, this verbose nature makes it harder than it needs to be to create a style for the Grid that defines row heights or to data-bind the row definitions. Although it may sometimes be useful to have access to all kinds of details for row definitions (there are other properties to set on each row), in more than 99% of the cases, one just wants to set up the heights in the manner shown in this example. A simpler and more concise approach would be nice!
You can easily enhance this scenario by creating a new attached property. Here is the definition of that property:
public static string GetRowHeights( DependencyObject obj) { return (string)obj.GetValue(RowHeightsProperty); } public static void SetRowHeights( DependencyObject obj, string value) { obj.SetValue(RowHeightsProperty, value); } public static readonly DependencyProperty RowHeightsProperty = DependencyProperty.RegisterAttached("RowHeights", typeof(string), typeof(Ex), new PropertyMetadata("", RowHeightsChanged));
This is almost identical to the previous example. The main difference is in the very last line, where the meta-data is configured to call a RowHeightsChanged() method whenever the property value is changed. (More about that in a moment.) With this definition, you can now use the new property like so:
< Grid my:Ex.
Whenever this line of code executes, the RowHeights property is set, and the RowHeightsChanged() method is fired. This is where the cool stuff happens: The change-method receives the new property value as a parameter, as well as a reference to the object the property was set on (the Grid in this example). This may seem trivial, but the consequences are of utmost importance. It allows you to write whatever code you want that will react to the property change and then interact with the object the property was set on. It will do the vast majority of things you could do when subclassing, yet you never had a need to create or use a new or special Grid class.
In this example, you use the change-method to grab a reference to the Grid (the parameter is of type DependencyObject. You first need to cast it and make sure the cast worked, as people could set this property on objects other than Grids), parse the property value, and then add appropriate row definitions to the Grid object. Listing 1 shows the full definition of this Attached Property, including its change-method.
Because you can use Attached Properties to attach both behavior and property values, they’re a simple way to go, and there are no limitations when compared with other techniques.
Using this simple Attached Property, you have certainly made your life easier and the syntax associated with the creation of rows much more concise and easier to understand and maintain. But the advantages go beyond plain simplicity and convenience. For instance, the property is now also very easy to data-bind:
< Grid my:Ex.
For this binding expression to work, all you need is a string value to bind to, which could easily be exposed in a view-model or some other data context.
Another benefit is that this makes row definitions easily stylable. For instance, you could create this style:
< Style <Setter Property="my:Ex.RowHeights" Value="*,Auto,25"/> </ Style >
Setting row heights on Grids is but one example, but I think you can see the potential this technique has.
Example: Fixing Text Boxes
There are quite a few things in the different XAML dialects that don’t necessarily work the way you want them to. After all, Microsoft couldn’t possibly think of everything, or have enough time to implement everything to satisfy everyone’s needs. This means that developers often change things to their liking. One way to do this is through subclassing. This often works well, but it also has some issues.
For instance, let’s say you would like to change a TextBox’s behavior to select all its text when it receives focus. You may already be using hundreds or even thousands of text boxes throughout the application when that requirement arises. Furthermore, you may have different types of text edit controls that all have that same requirement.
With all that, creating subclasses of all the controls in question and then updating all the UIs to use those new controls is a lot of work. You may encounter other issues, such as there already being various subclasses of text boxes for other reasons. Now what?
Using Attached Properties, you can simply define a Boolean Attached Property called "SelectOnEntry" and create a change handler that’s triggered whenever the property is set to true. In that handler, you grab a reference to the object, and subscribe to its GotFocus event so you can use it to set the selection. Here’s the change handler for that property:
private static void SelectOnEntryChanged( DependencyObject d, DependencyPropertyChangedEventArgs args) { if (!(bool) args.NewValue) return; var text = d as TextBox; if (text == null) return; text.GotFocus += (s, e) => { text.SelectionStart = 0; text.SelectionLength = text.Text.Length; }; }
As you can see, the actual code is quite simple and allows you to do most of the things you could do with subclassing a TextBox control (except for accessing and overriding protected members, that is, which isn’t much of a limitation these days). The hardest part about all this is realizing that Attached Properties offer this kind of flexibility.
With this new property in place, you can now use the new property like this:
< TextBox my:Ex.
This is very cool, because you have now arbitrarily extended all TextBox controls with this new capability, which we can access simply by setting that property. (You could have made the code more generic to also handle other text entry controls.)
There is however one fly in the ointment: If you already have thousands of text boxes in use throughout the application, updating every single one of them to set this property is going to be quite painful and labor intensive. Luckily, there is a better approach! You have already discovered that you can style Attached Properties just like any other property in XAML. Therefore, you can create a style that sets this property for you:
<Style TargetType="TextBox"> <Setter Property="my:Ex.SelectOnEntry" Value="True"/> </Style>
If this style is put into a place that is accessible in the entire application (such as App.xaml or a resource dictionary that is globally accessible), this style automatically and implicitly applies to all TextBox controls. Therefore, all text boxes in the entire application will now select their text when they receive focus. How cool and easy is that?
Example: Fixing Password Boxes
At this point, you’ve seen how to arbitrarily store values with objects and then access them for things like security. You’ve seen how to add behavior that triggers when the value is set. You’ve also seen how that behavior can do advanced things like hook events that can trigger various things later. You’ve even seen how to use styling and binding for great effect. You can use this for a wide range of features limited only by your imagination. I can’t possibly list all the scenarios I have used it for.
However, there is one more usage scenario I would like to point out, to give you some more ideas: In WPF, the PasswordBox control allows setting a Password through a property of the same name. The control very much behaves like a TextBox, except it shows only dots for each character typed so the password doesn’t become visible. One problem with this control is that the Password property isn’t bindable, since Microsoft chose to implement it as a regular property rather than a Dependency Property. This is very inconvenient in MVVM (Model, View, View-Model Pattern) scenarios (among others).
I wouldn’t be telling you all of this if you couldn’t fix this using Attached Properties. Listing 2 shows an implementation of the solution. An Attached Property called "Value" is used to provide a new, bindable property. (The extra parameters are passed to the meta-data to change the exact behavior for the binding mode as well as the exact timing of binding updates.) There is a change-handler for this new Attached Property that takes whatever the Value is set to and sticks it into the actual Password property of the PasswordBox the Attached Property is associated with. This means that you can now update the password box’s Password property simply by setting the attached Value property.
Furthermore, there is an event handler hookup that listens to changes in the password box control. This handler picks up changes in the Password properties and syncs them back into the attached Value property. This is important, since you need to make sure that the Value property is updated when the user types a password, so whatever is bound to the Value property can be updated properly. (There also is a simple check to make sure the whole thing doesn’t become cyclical.)
And that’s it! You just fixed the PasswordBox control!
Conclusion
Attached Properties are a relatively trivial feature in XAML. They are simple syntactical sugar for associating named values with objects. Due to this simplicity, the feature often goes overlooked. However, due to some implementation details (such as having a changed-handler that provides access to the full object), Attached Properties end up as one of the most powerful features in XAML. I’ve helped many customers with WPF and other XAML projects that suffered from exceptional high complexity, with very sophisticated features (such as global event systems) used to implement relatively simple features. Using Attached Properties, I was able to greatly simplify these projects (such as the select-on-entry feature for text boxes) and make these projects not just easier to understand and maintain, but in many cases also improve performance.
The only downside to Attached Properties is that they are somewhat harder to discover than regular properties. This is a factor that you may want to consider, but in my experience, many alternative techniques (such as subclassing or global event handlers) are no easier to discover.
I have used Attached Properties as described here with great success. In fact, CODE Framework (CODE Magazine’s free and open-source business application development framework) uses the concept quite heavily and you can explore it for further inspiration. (See the CODE Framework sidebar for more details). I‘m confident that you will see similar success with this approach. | https://www.codemag.com/article/1405061 | CC-MAIN-2020-24 | refinedweb | 3,691 | 52.7 |
/clisp/clisp/src
In directory usw-pr-cvs1:/tmp/cvs-serv29683
Modified Files:
format.lisp ChangeLog
Log Message:
(formatter-main-1): fixed quoting (colon-p and atsign-p must be under comma)
hi Arseny,
> * In message <1652724537.20011002205648@...>
> * On the subject of "Re[2]: New printing behavior & console"
> * Sent on Tue, 2 Oct 2001 20:56:48 +1000
> * Honorable Arseny Slobodjuck <ampy@...> writes:
>
> (loop for i from 1 to 21 do (princ "0123456789"))
> (printed three lines 70 chars wide each)
> Test occurs in NT 4.
confirmed.
this is controlled by *pprint-first-newline*.
it was present since forever, but its behavior was (IMO) faulty.
it is now "fixed" :-)
> Maybe that behaviour it's ok in sh - like program (even then I want to
> use every pixel of the screen!), but not with random screen access.
if you are doing random screen access, you should not be using
pretty-print to screen!
turn pretty printing off:
(setq *print-pretty* nil)
> I changed strm_wr_ch_lpos to 0 for window streams, but it's a dirty
> hack (and only work for window streams, not for normal output).
is this in the patch you sent?
--
Sam Steingold ()
Support Israel's right to defend herself! <>
Read what the Arab leaders say to their people on <>
Nostalgia isn't what it used to be.
Update of /cvsroot/clisp/clisp/tests
In directory usw-pr-cvs1:/tmp/cvs-serv18184
Modified Files:
path.tst
Log Message:
test non-simple-string arg to make-pathname
Hello Sam,
Tuesday, October 02, 2001, 3:58:52 AM, you wrote:
>> Cannot build without UNICODE defined. There is a
>> problem in pseudofun.d, maybe in macro definitions.
Sam> Please send us the error messages you are getting.
spvw.i.c
lispbibl.d(8170) : error C2059: syntax error : '}'
spvw.d(60) : error C2065: 'pseudodata_tab' : undeclared identifier
NMAKE : fatal error U1077: 'cl' : return code '0x2'
Stop.
Sam> Please see src/ChangeLog entries for 2001-06-16 (Bruno) and
Sam> 2001-04-06 (me) (i.e., I fixed it and it worked for me, Bruno did not
Sam> like my patch and replaced it with his).
When I define nls_ascii_table in pseudofun.d in non-unicode case,
compile error disappears (but link error arises). I believe that
pseudodata_tab in lispbibl.d becomes empty and MSVC doesn't like it.
#undef PSEUDO
#define PSEUDO PSEUDO_B
extern struct pseudodata_tab_ {
#include "pseudofun.c"
} pseudodata_tab;
#undef PSEUDO
I needn't non-unicode version however, just wanted to test the
console.
--
Best regards,
Arseny mailto:ampy@...
Hello Sam,
Tuesday, October 02, 2001, 3:42:30 AM, you wrote:
>> How can I tell the printer not insert newline at beginning
>> of output every 70 characters ?
Sam> huh? what is the test case?
Just two lines:
(loop for i from 1 to 21 do
(princ "0123456789"))
(printed three lines 70 chars wide each)
Test occurs in NT 4.
Same for
(with-open-file (trash "trash" :direction :output)
(loop for i from 1 to 21 do
(princ "0123456789" trash)))
I have a NT console of 100 chars wide, but if count of chars printed
to stream (according to strm_wr_ch_lpos) exceeds 70, new print begins
with NL (in 2.27 it wasn't).
Maybe that behaviour it's ok in sh - like program (even then I want
to use every pixel of the screen!), but not with random screen access.
I changed strm_wr_ch_lpos to 0 for window streams, but it's a dirty hack
(and only work for window streams, not for normal output).
-- | https://sourceforge.net/p/clisp/mailman/clisp-devel/?viewmonth=200110&viewday=2 | CC-MAIN-2017-09 | refinedweb | 578 | 74.39 |
HP
• HP2-B102
HP Imaging and Printing Sales Fundamentals
Click the link below to get full version
Questions & Answers: 07
Question: 1 A customer is preparing to migrate data between their data centers. They need to perform deduplicalion at the data creation point and eliminate the need for specialized deduplication hardware in their data centers. How does HP StoreOnce Catalyst enable the customer to achieve this goal? A. by performing source-side deduplication B. by performing switch-side deduplication C. by performing server-side deduplication D. by performing target-side deduplication
Answer: A Explanation: By enabling source-side deduplication (Dedupe 2.0), HP has the advantage of performing deduplication at the data creation point. Source-side deduplication eliminates the need for specialist deduplication hardware at remote and branch office sites. StoreOnce Catalyst allows customers to align backup with data protection needs, such as minimizing bandwidth utilization when moving data between sites or data centers. Reference: Dedupe 2.0: What HP Has In Store(Once)
Question: 2 A customer is planning to migrate a database to a new site. They need to create an extended SAN in a single fabric namespace by using a dedicated link over a distance of 200 km. Which network protocol should they use to achieve this goal? A. FCIP B. iFC C. FCoE D. iSCSI
Answer: A Explanation: FCIP connects Fibre Channel fabrics over IP-based networks to form a unified SAN in a single fabric. FCIP relies on IP-based network services to provide connectivity between fabrics over LANs, MANs, or WANs. Note: HP SAN extension technologies include:
Page 2
* FCIP (greater than 10 km to 20,000 km) * FC-ATM * FC-SONET * WDM (greater than 35 km to 100–500 km) * Fibre Channel using long-wave transceivers (10 km–35 km)
Question: 3 A customer is extending a SAN by using wavelength division multiplexing (WDM) to migrate their master database to a new data center. They need to ensure that there is adequate line speed to perform the migration in the event of a primary path failure. What should the customer do to achieve this goal? A. Ensure that all SAN devices use the same Fibre Channel speed. B. Ensure that all SAN switches have the same core PID number. C. Ensure that there are separate WDM paths for transmitting and receiving packets D. Ensure that a secondary path with sufficient buffer-to-buffer credits is available
Answer: D Explanation: WDM devices extend the distance between two FICON directors. The devices are transparent to the directors and do not count as an additional hop. To accommodate WDM devices, you must have enough Fibre Channel BB_credits to maintain linespeed performance. WDM supports Fibre Channel speeds of 10 Gb/s, 8 Gb/s, 4 Gb/s, 2 Gb/s, and 1 Gb/s. When planning for SAN extension, BB_credits are an important consideration in WDM network configurations. Typical WDM implementations for storage replication include a primary and secondary path. You must have enough BB_credits to cover the distances for both the primary path and secondary path so that performance is not affected if the primary path fails. Reference: HP Mainframe Connectivity Design Guide
Question: 4 A customer plans to migrate data from a third-party storage system to an HP 3PAR solution. The customer has leased data center space. Which HP 3PAR Storage solution must be integrated into an HP 3PAR rack? A. HP 3PAR StoreServ 10800 B. HP 3PAR StoreServ 10400 C. HP 3PAR StoreServ F200 D. HP 3PAR StoreServ 7400
Page 3
Answer: D Explanation: Note: *: 5 A company needs to migrate from a third-party storage system to an environment that can grow by scaling out. The solution must also allow use of a server for downloading upgrades. Which solution meets the company's requirements? A. HP 3PAR StoreServ 7000 B. HP StoreOnce 6200 C. HP StoreVirtual 4000 D. HP 3PAR StoreServ 10000
Answer: A Explanation: Note: * HP MPX200 Multi-protocol and Heterogeneous Data Migration: 6 A storage administrator needs to perform a gradual migration of large backup volumes and jobs to a remote D2D system over time. Because this data is being replicated for the first time, the
Page 4
administrator needs to populate the target device with all the relevant hash codes. Which method should the administrator use to achieve this goal? A. using co-location B. using removable media C. seeding data over the WAN link D. setting initializer to migration
Answer: C Explanation: * prior to being able to replicate only unique data between source and target D2D, we must first ensure that each site has the same hash codes or “bulk data” loaded on it – this can be thought of as the reference data against which future backups are compared to see if the hash codes exist already on either source or target. The process of getting the same bulk data or reference data loaded on the D2D source and D2D target is known as “seeding”. * Seeding is generally is a one-time operation which must take place before steady-state, low bandwidth replication can commence. Seeding can take place in a number of ways: / Over the WAN link – although this can take some time for large volumes of data / Using co-location where two devices are physically in the same location and can use a GbE replication link for seeding. After seeding is complete, one unit is physically shipped to its permanent destination. / Using a form of removable media (physical tape or portable USB disks) to “ship data” between sites.
Question: 7 A small-business customer needs to implement a high IOPS storage system for an application that uses approximately 7 TB of space. Because the customer does not expect a large amount of data growth in the next few years, they need to minimize costs. Which storage system should the customer implement? A. HP 3PAR StoreServ 7400 B. HP 3PAR StoreServ 10000 C. HP 3PAR StoreServ F200 D. HP 3PAR StoreServ 7200
Answer: C Explanation: HP 3PAR F200 9.6TB capacity Incorrect:
Page 5
Not A: 432 TB RAW Not B: 2.2 TB capacity Not D: 250 TB RAW
Page 6
HP
• HP2-B102
HP Imaging and Printing Sales Fundamentals | https://issuu.com/examcertifyofficial/docs/hp2-b102_certification_exam__pdf_ | CC-MAIN-2017-09 | refinedweb | 1,036 | 55.13 |
In this post, we will implement the functionality to access Visualforce Page without Login using Sites in Salesforce. There might be a requirement where we need to expose some information for Customers or take some inputs from the Customer like Feedback. In such cases, we can expose Visualforce Page publically to show some information or to get some input from the Customer.
Implementation
In this implementation, we will create a Visualforce page that will show the Account details for a particular Account. This page will be accessible publically and any user can access this Visualforce Page without login.
First, create a Visualforce Page PublicPage to display some fields of Account like Name, Account Number, Phone, and Website. Use apex:outputField to display the fields.
PublicPage.vfp
<apex:page <apex:form> <apex:pageBlock <apex:pageBlockSection <apex:outputField <apex:outputField <apex:outputField <apex:outputField </apex:pageBlockSection> </apex:pageBlock> </apex:form> </apex:page>
Then, create an Apex Class PublicController that will query the Account details. For the purpose of this implementation, I have hardcoded Id of the Account to query.
PublicController.apxc
public class PublicController { public Account objAccount {get; set;} public PublicController(){ objAccount = [SELECT Name, AccountNumber, Phone, Website FROM Account WHERE Id = '0012x000008VEr5' LIMIT 1]; } }
Sites in Salesforce
Sites in Salesforce enables us to create Public Web Applications which are directly integrated with our Salesforce Organization. It does not require users to log in with username and password to see the pages hosted with Sites in Salesforce. Sites can be used to publically expose some information for Customers or take some inputs from the Customer like Feedback.
- First, type Sites in the Quick Find box and click on Sites.
- Add the Domain name for Site and click on Check Availability.
- If it is available, then click on Register My Salesforce Site Domain.
It should look something like below:
Once the domain is registered, we need to create a new Site.
- Click on New under Sites, Enter Label and Name.
- For Active Site Home Page, select the Visualforce Page we created earlier which will be the Home page for our Site.
- Check the Active Checkbox and keep the other fields as default. Click Save.
Once the Site is created, it should look like this:
This is enough to access Visualforce Page without Login using Sites in Salesforce. But if we are using Apex Controller or showing Salesforce records on Page, we need to provide appropriate access to Site Profile. For every Site, a Profile is created.
- Go to the Site page, and click on Public Access Settings. This will open the Site Profile.
- Provide the access to Apex Class that we created earlier.
- Provide Read access to Account Object and necessary fields if required.
That is all we need.
Access Visualforce Page without Login
We can just open the Site Url in the Browser. Under the Custom URLs section, copy the Domain URL and paste it in Browser. It should open the Visualforce Page without Login.
Site Domain URL:
This is how we can access Visualforce Page without Login using Sites in Salesforce.
If you don’t want to miss new posts, please Subscribe here. If you want to know more about Sites in Salesforce, check Salesforce official documentation here.
Some handpicked topics for you:
See you in the next implementation. | https://niksdeveloper.com/salesforce/access-visualforce-page-without-login/ | CC-MAIN-2022-27 | refinedweb | 547 | 55.64 |
piyush121 + 65 comments
Easy and fast as hell --> O(n) time and O(1) Space........
public static void main(String[] args) { Scanner in = new Scanner(System.in); int N = in.nextInt(); int K = in.nextInt(); int Q = in.nextInt(); int rot = K % N; int[] arr = new int[N]; for (int i = 0; i < N; i++) arr[i] = in.nextInt(); for (int i = 0; i < Q; i++) { int idx = in.nextInt(); if (idx - rot >= 0) System.out.println(arr[idx - rot]); else System.out.println(arr[idx - rot + arr.length]); } }
Thanks for the votes guys. I never thought my solution would be on the top of the discussion forums. Anyways here is the
modbased solution which many people found useful. Godspeed!
public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); int k = in.nextInt(); int q = in.nextInt(); int[] a = new int[n]; for(int a_i=0; a_i < n; a_i++){ a[a_i] = in.nextInt(); } for(int a0 = 0; a0 < q; a0++){ int m = in.nextInt(); System.out.println(a[(n - (k % n)+ m) % n]); } }
chikithapaleti99 + 1 comment[deleted] c650Alpha + 1 comment
Ah. I was doing some weird modular arithmetic, which led to funky outcomes for the unknown test cases.
Dinesh1306 + 3 comments[deleted] roman_balzer + 6 comments
In your solution, you could use another modular arithmetic, to ensure the index stays within the boundaries of the array.
queries.forEach(m => { // Modulo to stay inside the boundaries of the array console.log(arr[(n+m-k)%n]); });
Dinesh1306 + 2 comments
In my solution the index are within the boundaries.
hackernitp + 13 comments
n; int k; int q; cin >> n >> k >> q; vector<int> a(n); for(int a_i = 0;a_i < n;a_i++){ cin >> a[a_i]; } for(int i=0;i<k;i++){ for(int j=0;j<n;j++){ if(j+1!=n) { a[j+1]=a[j]; } else { a[0]=a[n-1]; } } } for(int a0 = 0; a0 < q; a0++){ int m; cin >> m; cout<<a[m]<<endl;} return 0; }
can you plz tell me what is wrong with my code???
JamesTre + 0 comments
Hi, dk201966. Probably you are facing problems because the order you are doing the changes. If you copy the value from the 1st position to the 2nd one and, after this, you copy the 2nd to the 3rd, at the end you will have all the positions with the 1st value. And don't forget saving the last value before overwriting it! PS: you'll probably face other issues with this challenge...
pratikgk98 + 1 comment
The main problem in the code is the order of the for loop used for rotating the elenets in the array. You should store the last element value in some temporary variable..and then use a decreasing counter in the mentioned for loop ..then u can store the value in temporary variable to the first element The error in ur code of overwriting values is eleminated by this code!👍
upendra28 + 3 comments
why my code gives wrong answers ...?
vector <int> circularArrayRotation(vector <int> a, vector <int> m,int n,int k, int q) { while(k!=0) { int end=a[n-1]; for(int i=n-2;i>=0;i--) { a[i+1]=a[i]; } a[0]=end; k--; } return a; }
anisaxena53 + 0 comments
you are not supposed to print the whole array a but only the elements at positions in queries
abhilash_brains1 + 1 comment
how come!, It's not working???
Saurav_Gaikwad1 + 0 comments
in your code there is assignment of values....shifting of values is absent..
kusaalpokharel + 0 comments
I did the same and realized the values of the elements of the array that you assign later has already replaced the other element in the array. For eg: a[1]=a[0] then a[2]=a[1]=a[0] which is equal to each other
wildmud + 2 comments
#include <cmath> #include <cstdio> #include <vector> #include <iostream> #include <algorithm> using namespace std; int main() { /* Enter your code here. Read input from STDIN. Print output to STDOUT */ int n,k,q,i,s=0,e=0, t=0,b; cin>>n>>k>>q; int a[n]; for(i=0;i<n;i++) { cin>>a[i]; } k=k%n; s=0,e=n-k-1; while(s<e) { t=a[s]; a[s]=a[e]; a[e]=t; s++;e--; } s=n-k,e=n-1; while(s<e) { t=a[s]; a[s]=a[e]; a[e]=t; s++;e--; } s=0,e=n-1; while(s<e) { t=a[s]; a[s]=a[e]; a[e]=t; s++;e--; } for(i=0;i<q;i++) { cin>>b; cout<<a[b]<<endl; } return 0; }
rvrishav7 + 0 comments
faster than urs i guess.. happy coding ; }
rvrishav7 + 2; }
it will work
NiceBuddy + 0 comments
simpler one:
#include <iostream> #include <vector> #include <algorithm> #include <iterator> int main() { int n; std::cin >> n; // array size int k; std::cin >> k; //no of rotations int q; std::cin >> q; std::vector<int> vec; vec.reserve(n); std::copy_n(std::istream_iterator<int>(std::cin), n, back_inserter(vec)); k %= n; k = n-k; std::rotate(vec.begin(), vec.begin()+k,vec.end()); while(q--) { int index; std::cin >> index; std::cout << vec[index] << std::endl; } return 0; }
rvrishav7 + 0; }
refer this
nilanjan172nsvi1 + 0 comments
same to you. but if you dry run the code ,then can you understand what is wrong with the code !!
Dinesh1306 + 3 comments
I don't know why you guys are having problem. I have submitted my solution without any problem.
Anyway if you are having issues than you should upload you code than only i will know what the problem is.
piyushb9 + 6 comments
i actually tried running the code in eclipse, and it is matching the expected output, expect in the last case where it doesnot return the result before enter is pressed, as when the return key is pressed the output becomes fully correct.
here is my code----------------------
import java.io.; import java.util.;
public class Solution {
public static void main(String[] args) { Scanner s1=new Scanner(System.in); int n=s1.nextInt(); int k=s1.nextInt(); int q=s1.nextInt(); int a[]=new int[n]; for(int i=0;i<n;i++) { a[i]=s1.nextInt(); } for(int i=0;i<q;i++) { int m=s1.nextInt(); System.out.println(a[(n-k+m)%n]); } }
}
Dinesh1306 + 1 comment piyushb9 + 2 comments
I am not concernerd with the working of the solution, the issue is why is it not working in that particular case, what is the issue? because i am facing the same issue in another problem which is Bigger is Greater in the implementation section.
Dinesh1306 + 1 comment
I can't see any problem in your code. And your code is working fine, though i haven't run it on eclipse but i have tried it on an online compiler and it gave me the correct output.
comandorubogdan + 0 comments
This one is very good but, for the case when k is much more bigger than n it is not working, because an arrayoutofboundException will occure.
maybe this will help
if (m-k%n < 0) { posValue = m-k%n+n; } else posValue = m-k%n; System.out.println(a[posValue]);
vedesh_vedu1 + 1 comment
i got runtime error by using this
Vikramhunter + 1 comment
same problem
JamesTre + 0 comments
that's because in test case 4 the number of rotations is much bigger than the number of elements of the array. If you were really executing the rotations, it wouldn't cause any error, but would be very slow, causing errors in other cases (time out). If you use any algorythm that just subtacts the number of rotations from the size of the array, in fact you may try to access elements outside of the array. In case 4 we have an array with 515 elements which is rotated 100000 times. When I try to retrieve the 1st element, the algorythm will try to retriev the element with the index -9485(negative), giving an error of segmentation.
moucthemob + 0 comments
I posted my code, if u could look at it in the comment section above that would be great!
ankur8090889931 + 0 comments
include
using namespace std; int main() { int n,k,q,temp,i,p; cin>>n>>k>>q; int a[n]; for(i=0;i>a[i]; } while(k>0) { temp=a[n-1]; for(i=n-1;i>=0;i--) { a[i]=a[i-1]; } a[0]=temp; k--; for(i=0;i>p; cout<
return 0;
}
problem : Terminated due to time out??
danutzp + 1 comment
I came up with the exact same solution but "Test case #4" fails with a run time error. I downlowded the input and the output for this test case and ran it locally in Visual Studio (C#) and it does not fail.
pratikgk98 + 3 comments
In test case 4 ..it should be noted that the value of k is greater than the no. of elements in the array(n) try using mod operator fr this problem k=k%n;👍
aayushrangwala + 1 comment
i tried the same thing but the test case #4 is showing runtime error
danutzp + 1 comment
The strange thing is that I uploaded the same solution in C#, C++ and Java 8 and for all of them "Test case #4" fails with a run time error.
hervedonner + 2 comments
The "Runtime Error" for test case #4 occurs because k can be much bigger than n and in such cases the code tries to access a negative index.
For test case 4: k = 100000, n = 515.
The solution is to apply an extra modulo n on k before substracting:
queries.forEach(m => { // Modulo to stay inside the boundaries of the array console.log(arr[(n + m - (k % n)) % n]); });
Dinesh1306 + 2 comments
Modular arithmetic is the best way to do this problem. No need to actually rotate the array. Take a look here. Circular Array Rotation
cjreasoner + 3 comments
Python solution using mod:
ls = [int(x) for x in input().split()] n = ls[0] k = ls[1] queries = ls[2] array = [int(x) for x in input().split()] result = [] for q in range(queries): q = int(input()) result.append(array[q-k % n]) for e in result: print(e)
hassan2020mainul + 1 comment
Here is another Python Solution:-
n,k,q = input().strip().split(' ') n,k,q = [int(n),int(k),int(q)] a = [int(a_temp) for a_temp in input().strip().split(' ')]
for i in range(q): m = int(input().strip()) print(a[(m-k)%len(a)])
vasilij_kolomie1 + 2 comments
what is the faster?
def circularArrayRotation(a, k, queries):
a = a[-k:] + a[:-k]
return [a[i] for i in queries]
logical_beast + 0 comments
or we can simply do like this by using python collections
import collections def circularArrayRotation(a, k, queries): d = collections.deque(a) d.rotate(k) return (d[i] for i in queries)
logical_beast + 1 comment ghassan_karwchan + 2 comments
def circularArrayRotation(a, k, queries): return [a[(val - k) % len(a)] for val in queries]
Hiren_Italiya + 2 comments
he's doing mod caz after n rotation all elements will come at same position as starting point.
karthik1301 + 1 comment
Failed for one test case.
anuj_pancholi_1 + 2 comments
Test case 4? Segmentation fault?
karthik1301 + 1 comment
Yes
anuj_pancholi_1 + 3 comments
Here's what worked for me in C++. After taking values of n and k:
while(k greater than n)k=k-n;
The value of k can be several times greater than n, so by putting this can lead to referencing an index in the array greater than n, which causes segmentation fault. And the output will be correct because performing 5 rotations in an array of 4 elements is same as performing 1 rotation.
astromahi + 2 comments
you could use k=k%n, what if the k is doulble or trible the size of n. subtraction always reduce one rotation.
Consider if k = 3n => k = 3n -n => k = 2n. but in case of modular it reduces to n rotation. k = 3n => k = 3n%n => k = 0.
anuj_pancholi_1 + 1 comment
If K=2n then the loop will run twice and k will be set to 0. If k is a multiple of n then the array after k rotations will be same as initial array. k=k%n could also do the trick.
jonathan_cheung1 + 1 comment
This helped me out thanks
jonathan_cheung1 + 1 comment
You can determine where in the array each element will end up with a simple calculation using the number of elements (n), the number of times you perform the operation (k), and an incrementing variable (use a for loop). Try to figure out the equation and use that answer as the array assignment. You need to use the while loop right before the array assignment because sometimes the calculation you perform will leave you with an array assignment value that is greater (hint) than the amount of array values so you just repeat the same equation until you get a valid value (something < n).
AhmadAliSabir + 7 comments
int n,k,q,temp,m,z; cin>>n>>k>>q; int a[n]; for(int x=0;x<n;x++) { cin>>a[x]; } k=k%n; for(int y=0;y<k;y++) { temp = a[n-1]; for(int q=n-1;q>=1;q--) { a[q]=a[q-1]; } a[0]=temp; } for(int u=0;u<q;u++) { cin>>m; cout<<a[m]<<endl; }
"Terminated due to timed out" on Case 5,12,13,14 ... help please .. Thanks !
rjdp3 + 1 comment
I was using the nested loop algo, was getting timed out. Then i used this (JAVA) Collections.rotate(Arrays.asList(array_name), k);
i am a beginner, can anyone tell me is this a bad way to solve a problem, if yes why?
Stephen26 + 0 comments
The Actual vision of the problem is to solve it by without actually making any rotations.because rotation takes more time and space.thats why you may get timeout errors if you try to solve it by rotations .we gotta have to solve it by doing calculations .Example: write a simple logic to determine what will be the output after k rotations.if K is equal to N then no rotations needed coz the array will be the same as original after rotations.
Dinesh1306 + 1 comment
No need to rotate the array you can do this problem much faster. Take a look here, i have explained it in detail. Circular Array Rotation
Hope it helped you
poonam2992 + 1 comment
hi @Dinesh1306, I tried your solution, but I get this error when I try for large inputs as given in test cases: Sorry, we can't accept your submission. The custom input size should not exceed 50Kb.
Dinesh1306 + 1 comment poonam2992 + 0 comments
hi @Dinesh1306 your solution got accepted. thanks :) but yes when i try the solution with large inputs, it gives me that custom input size error.
ashishgulati21 + 2 comments
Same for me :/ someone please help
int main() { int n,k,q,r=0; cin >>n>>k>>q;
int b[n]; int m[q]; for (int i=0;i<n;i++) { cin >> b[i]; } for (int i=0;i<q;i++) { cin >> m[i]; } for (int i=0;i<k;i++) { r=b[n-1]; for(int j=n-1;j>0;j--) { b[j]=b[j-1]; } b[0]=r; } for(int i=0;i<q;i++) { cout << b[m[i]]<<endl; } return 0;
}
Dinesh1306 + 1 comment chikithapaleti99 + 0 comments
int main() { long int n; long int k; long int q,i,t; scanf("%li %li %li", &n, &k, &q); long int a [n]; for (long int a_i = 0; a_i < n; a_i++) { scanf("%li",&a[a_i]); } long int *m = malloc(sizeof(long int) * q); for (long int m_i = 0; m_i < q; m_i++) { scanf("%li",&m[m_i]); } long int b[n]; for(i=0;i<n;i++){ t=i+k; t=t%n; b[t]=a[i]; } for (long int m_i = 0; m_i < q; m_i++) { printf("%ld\n",b[m[m_i]]); } return 0; }
gozer_goose + 0 comments
You can make another array and use modular arithematic approach to insert values in that array. In this way you will solve it in O(n) and there will be no timeout errors :D
teodora_vasilas + 0 comments
i used this code insted of the second for:
rotate(a.begin(),a.end()-k,a.end());
polmki99 + 7 comments
What's the space and time efficiency for this code?
n, k, m = map(int, input().strip().split()) arr = list(map(int, input().strip().split())) k %= n arr = arr[-k:] + arr[:-k] for i in range(m): print(arr[int(input().strip())])
nstoddar + 1 comment
Thanks polmki99! Very clever implementation.
Here it is in Javascript for those interested.
function processData(input) { //Enter your code here var inputList = input.split('\n'); var details = inputList[0].split(" "); var arrayIdx = inputList.slice(2); var targetArray = inputList[1].split(" "); var k = details[1]%details[0]; targetArray = [].concat(targetArray.slice(-k),targetArray.slice(0,-k)); arrayIdx.map(function(elem){return console.log(targetArray[elem]);}); }
murnun + 1 comment
Nice one liner using the built in functions! Unfortunately, this still fails when
kis higher than
n(test case #4). You seem to be getting around it by overwriting
kwith:
var k = details[1]%details[0];
I'm curious to see if there are other, more algorithmic solutions to this in Javascript.
matt_heisig + 1 comment
This works in Javascript for all test cases:
function main() { var n_temp = readLine().split(' '); var n = parseInt(n_temp[0]); var k = parseInt(n_temp[1]); var q = parseInt(n_temp[2]); a = readLine().split(' '); a = a.map(Number); for (var i=0; i<q; i++) { var m = readLine(); console.log(a.slice(m-k%n)[0]); } }
mehul_sachdeva7 + 1 comment
when i ran my code only sample test cleared and rest failed
function circularArrayRotation(a, k, queries) { var res = []; var resind = 0; while (k > 0) { rotateArr(); k--; } function rotateArr() { var i = 0; var lastel = a[a.length - 1]; var temp1 = a[0]; var temp2; while (i < a.length-1) { temp2 = a[i + 1]; a[i + 1] = temp1; temp1 = temp2; i++; } a[0] = lastel; } for(var j in queries) { res[resind] = a[j]; resind++; } return res; }
I think some issue is in how i am returning the values, please help identify the error
mehul_sachdeva7 + 1 comment
found out what the problem was, new JS code:
function circularArrayRotation(a, k, queries) { var rarr = []; var aind = []; var res = []; for (var i = 0; i < a.length; i++){ if ((i + k) < a.length) { aind[i] = i + k; } else { aind[i] = (i + k) % a.length; } } for (var j = 0; j < a.length; j++) { rarr[aind[j]] = a[j]; } for (var k = 0; k < queries.length; k++){ res[k] = rarr[queries[k]]; } return (res); }
mail2karthk07 + 0 comments
how about my solution is it good..
n,k,q = list(map(int, input().strip().split())) a = list(map(int, input().strip().split())) a = a[n-(k%n):]+a[0:n-(k%n)] [print(a[int(input())]) for _ in range(q)]
pooja1989mehta + 0 comments
Wow! Lovely Solution.. But what if the direction of rotation is left?
I have given it a try, please let me know if I am wrong:
if(idx+rot <= arr.length) System.out.println(arr[idx+rot]); else System.out.println(arr[arr.length-(idx+rot)]);
Stephen26 + 1 comment
no Rotation but still does the job :v CHECK THIS out c++ lovers
im noob in coding so i wrote it so big :(
int b,n,k,q,ans=0; cin>>n>>k>>q; int a[n]; for(int i=0;i<n;i++) cin>>a[i]; if(k==n){ for(int i=0;i<q;i++) { cin>>b; cout<<a[b]<<endl; } } else if(k>n) { while(k>n) k=k-n; for(int i=0;i<q;i++) { ans=n-k; cin>>b; ans+=b; if(ans>n-1) ans-=n; cout<<a[ans]<<endl; } } else if(k<n) { for(int i=0;i<q;i++) { ans=n-k; cin>>b; ans+=b; if(ans> n-1) ans-=n; cout<<a[ans]<<endl; } } return 0;
}
rajatchauhan + 0 comments
smart one more way u could have actually stored rotated version of array while taking array input.
Sultan_of_Bits + 1 comment
Very elegant solution!
(\(\ (=':') (,(")(")
Dinesh1306 + 1 comment
You can further reduce the time. No need to rotate the array just apply simple logic and you can do it in O(1) complexity per query. Check this solution Circular Array Rotation
bulicicnenad + 0 comments
Query will be O(1) time complexity regardless of the implementation or correctness since you are accessing an indexed array. Your solution did not make accessing more efficient anymore than it is, but since you are inserting into an array in O(n) time and correctly accessing it passes all tests.
aashirshukla + 1 comment
How is that O(1) space? You're using an n element array.
Dinesh1306 + 1 comment
It is O(1) per query. This problem can be solved without actually rotating the array.
Here is my solution Circular Array Rotation
aashirshukla + 1 comment
Yeah, my bad. It's O(1) per query, but an O(n) space complexity algorithm in the end.
Dinesh1306 + 1 comment
No,the algorithmn(which is the main part)is of O(1) time complexity you cannot optimize it any further. Space complexity is also optimized you cannot avoid using an array as the problem is based on it.
aashirshukla + 1 comment
You cannot say that. And you cannot call it O(1),
For example if a problem asks us to sum up all numbers in an array, we can simply take an input and add it to a variable sum, or we can take all inputs in an array. What you just said implies that we cannot distinguish between these 2 algorithms, but we can(One is O(1), other is O(n).
Of course it isn't sensible to assume that we can solve this problem without the array in this particular case, but I still think that O(1) complexity is wrong.
Dinesh1306 + 1 comment
Again you misunderstood what i am saying. I am saying that the algorithmn part(i.e. the if and else statment) is O(1) not the whole code, since in this problem we cannot run away from array. And i am claming that my solution is the most optimized one.
And BTW the example that you gave has O(n)time complexity not O(1) since for ever new element you are doing a computational opperation(i.e. adding the sum and the new input)
aashirshukla + 1 comment
I'm talking about space complexity. The space complexity is not O(1), and the space complexities in my 2 examples were different(one is O(n) as we are using a WHOLE n element array, while in the other we're using one variable).
You on the other hand, are talking about the time complexity. It is O(1) per query and that's correct. But how is space in any way O(1) ? There exists an n element array.
Dinesh1306 + 1 comment
Seems like we missunderstood each other. I was talking about time complexity the whole time. As for the space complexity yes it is O(n), O(1) is not possible in this problem since we have to store the elements.
vasukapil2196 + 1 comment
hi bro,i am a beginner.How to understand these concepts of time complexity and space complexity.
Dinesh1306 + 1 comment poonam2992 + 1 comment
thanku :)
ZEUS_DEVELOPER + 1 comment
n,k,q=raw_input().split() n,k,q=[int(n),int(k),int(q)] s=raw_input() s=map(int,s.split()) while(q!=0): m=input() print s[(m-k)%n] q=q-1
bitch plz python rules
morettocarlo + 1 comment
I really like this one Zeus! I tried to make it more compact:
n, k, q = map(int, input().strip().split()) a = [int(i) for i in input().strip().split(' ')] for i in range(q): m = int(input()) print(a[(m-k)%n])
beckwithdylan + 0 comments
and if you wanted to make it really compact..
n,k,q = list(map(int, input().split())) a = list(map(int, input().split())) print(*(a[(int(input())-k)%n] for _ in range(q)),sep='\n')
ZEUS_DEVELOPER + 1 comment
n,k,q=raw_input().split() n,k,q=[int(n),int(k),int(q)] s=raw_input() s=map(int,s.split()) while(q!=0): m=input() print s[(m-k)%n] q=q-1
bitch plz python rules
davinci_coder75 + 1 comment
@piyush121 I would like to understand what you did with the modulo operation and how does rot help us in solving thiis code. I am beginner so please be patient and help me.
tenzin392 + 0 comments
I was faithfully doing a for loop for 1 rotation inside a dummy for loop to have k rotations. Then another for loop to output the m'th position. It worked, but O(n) became n*k; too long for 6 test cases. This comment inspired me to use modulus by n and only print the mth position numbers. I tried to understand it in my own way and shorten your code to:
System.out.println(a[(m-k+n)%n]);
harptheshark + 0 comments
Hi piyush can you explain the math behind your solution that would be great
octy40 + 1 comment
This is pretty much more elegant than my solution of using a queue, but a question that I'm having is why use k%n? Can someone please explain this logic?
Tx
john_manuel_men1 + 0 comments
Because when the number of rotations(k) equals the size of the array(n), the array returns to it's original setup. For example, after 3 rotations, this array is back to original:
[1,2,3] 0 original [3,1,2] 1 rotation [2,3,1] 2 rotations [1,2,3] 3 rotations //Back to original setup [3,1,2] 4 rotations // same as 1 rotation
Therefore if you have 4 rotations, then 4 % 3 is 1, so you will get the same result as if you only had 1 rotation to begin with.
thieugiatri4492 + 1 comment
Can you guys explain to me this line
if(idx-rot>=0) System.out.println(arr[idx-rot]); else System.out.println(arr[idx-rot+arr.length]);
I don't understand why it can refer to the roation element of array?
JamesTre + 0 comments
It's just because in the '(rot-1)th' elements of the array you will have negative indexes, causing errors or, even, segmentation error (as in test case 4). So, doing this, you add N to negative indexes to retrive the correct values from the former array, without rotating it at all.
simran1707 + 1 comment
Can anyone explain this easily from start ?
JamesTre + 2 comments
Well, let's try explain using an example case:
If I have an array(a) with, let's say, N=5 elements: a0=0, a1=1, a2=2, a3=3 and a4=4.
Let's consider that I have to rotate this array K=6 times.
So:
a0={0, 1, 2, 3, 4} (initial array)
When I right-rotate 6 times this array, following the rules estipulated in this problem, I'll get this:
a1={4, 0, 1, 2, 3} (1st rotation)
a2={3, 4, 0, 1, 2} (2nd rotation)
a3={2, 3, 4, 0, 1} (3th rotation)
a4={1, 2, 3, 4, 0} (4th rotation)
a5={0, 1, 2, 3, 4} (5th rotation)
a6={4, 0, 1, 2, 3} (6th rotation)
Observe that at the 5th rotation, we got the original array again (a5 == a0). In fact, at every N rotations we'll get the original array. So, we don't need to rotate the array K times if K is greater than N. We can just rotate it (K%N) times (rest of integer division, or just 'modulo operation'). In my example 6%5 = 1.
And, even in this case, I don't need to actually rotate the array, but just retrieve an element from the array whose position is 1 lower than it was in the original array.
WHAT???
Yes, let's consider the initial array and after 1 or 6 rotations, as follows:
a0={0, 1, 2, 3, 4}
a1={4, 0, 1, 2, 3} (1st and 6th rotation)
If I try to retrieve the:
- 4th element, I'll get 3;
- 3rd element, I'll get 2, and so on.
Thus, if I subtract the number of rotations(K) from the index of the element, I'll get its value after K rotations, correct?
YES and NO!!!
a1[4] == a0[3]
a1[3] == a0[2]
Hummm... aK[m] == a0[m-(k%N)]! Great. No rotation is necessary, but just a simple calculation...
Let's continue...
a1[2] == a0[1]
a1[1] == a0[0]
a1[0] == a0[-1] OPS! There's no a[-1]!!!
Here we get an error type "Index out of bounds". It's because we are trying to acces memory positions that are out of the array's boundaries. Probably, it is being used by another variable. A "Segmentation error" message may appear when K is much bigger than N (something like k=100000 and n=515, as in test case 4). It's because we are trying to acces memory positions that are out of the program boundaries and, probably, are being used by another application.
To correct these errors, we can use the same solution we used to simulate the K>N rotations: modulo operation!
The answer would be like this:
aK[m] == a0[(m+N-(K%N))%N].
We add N to the index to assure it will never be negative.
Tricky, fast and easy.
Try implementing this in your application.
I hope be helpful ;-)
thieugiatri4492 + 1 comment
Sorry, but in this line
aK[m] == a0[(m+N-(K%N))%N].
you %N again after +N for what purpose? If i think correctly, it's for rotate again the m index?
shubamdadhwal + 0 comments
That extra %N was added so that if our ans turns out to be greater than the number of elements in the array , the indexing should again start from index '0'. eg. for m=1, our the later formula without using %N will result to 5. but our ans should be equal to 0 and not 5 thats why we do modulus. so that it could become 0
Taylor40 + 0 comments
Wow, its crazy to see how yours is so similar yet so different from my code. I instead placed the integer at the index it was going to be at from the very beginning
public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); int cycles = in.nextInt(); int queries = in.nextInt(); int[] a = new int[n]; for(int a_i=0; a_i < n; a_i++){ a[(a_i+cycles) % n] = in.nextInt(); } for(int a0 = 0; a0 < queries; a0++){ int m = in.nextInt(); System.out.println(a[m]); } }
gauravkochar_gk + 0 comments
if (idx - rot >= 0) System.out.println(arr[idx - rot]); else System.out.println(arr[idx - rot + arr.length]); } will u plz make me understand ur logic?? how did it come to ur mind?
vishprajapati191 + 0 comments
Just wanted to ask how do you came up with the one line solution to the problem.
System.out.println(a[(n - (k % n)+ m) % n]);
rohillakriti21 + 0 comments
why do we have to use (k%n) instead of just k i.e. (n-k+m)? All TCs except for TC#4 succeed with k.
Nathan3Fantom + 0 comments
shouldnt you circulate the whole array instead of manipulating the indexes to find the answer.
rahulrajpl + 0 comments
O(m) time and O(1) space.
def circularArrayRotation(a, k, queries): return [a[((len(a) - (k % len(a)))+q)%len(a)] for q in queries]
With Love, Python
deeputhakurgbn + 0 comments
u did amazing job...what i do...i cirular shift the whole array...can you suggest me the way...when you see a problem like this.
kn_neelalohitha + 0 comments
Nice one!
Here is my code in C.
#include <stdio.h> #include <stdlib.h> int main(){ int n,k,q; scanf("%d %d %d",&n,&k,&q); int a[100000],m,i = 0; while(i < n) scanf("%d",&a[(i++ + k) % n]); while(q--){ scanf("%d",&m); printf("%d\n",a[m]); } return 0; }
Position resulting because of rotation is computed first using
((i++ + k) % n).Then the array elements are stored in these positions in the read ing stage itself using
scanf(). Suggestions are welcome!
nsh777 + 0 comments
Why am I getting this error when I have not even touched main() ?
Here's my function code
int i,j,arr[queries_count]; *result_count=queries_count; k = k % a_count; for(i=1;i<=queries_count;i++) { j=queries[i]-k; if(j<0) arr[i]=a[a_count+j]; else arr[i]=a[j]; } return arr;
}
Segmentation Fault Error (stderr)
GDB trace: Reading symbols from solution...done. [New LWP 2592] Core was generated by `solution'. Program terminated with signal SIGSEGV, Segmentation fault.
0 fprintf (__fmt=0x400cc4 "%d", __stream=0x155d010)
at /usr/include/x86_64-linux-gnu/bits/stdio2.h:97
97 return __fprintf_chk (__stream, __USE_FORTIFY_LEVEL - 1, __fmt,
0 fprintf (__fmt=0x400cc4 "%d", __stream=0x155d010)
at /usr/include/x86_64-linux-gnu/bits/stdio2.h:97
1 main () at solution.c:97
1634810124_coe3b + 1 comment
sir can you please help me.Whats wrong with my code ??
import java.io.; import java.math.; import java.security.; import java.text.; import java.util.; import java.util.concurrent.; import java.util.regex.*;
public class Solution {
// Complete the circularArrayRotation function below. static int[] circularArrayRotation(int[] a, int k, int[] queries) { int last=a[a.length-1]; for(int i=a.length-2;i>=0;i--) { a[i+1]=a[i]; } a[0]=a[last]; for(int i=0;i<queries.length;i++) { int c=queries[i]; int m=a[c]; return m; break; } } private static final Scanner scanner = new Scanner(System.in); public static void main(String[] args) throws IOException { BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(System.getenv("OUTPUT_PATH"))); String[] nkq = scanner.nextLine().split(" "); int n = Integer.parseInt(nkq[0]); int k = Integer.parseInt(nkq[1]); int q = Integer.parseInt(nkq[2]); int[] a = new int[n]; String[] aItems = scanner.nextLine().split(" "); scanner.skip("(\r\n|[\n\r\u2028\u2029\u0085])?"); for (int i = 0; i < n; i++) { int aItem = Integer.parseInt(aItems[i]); a[i] = aItem; } int[] queries = new int[q]; for (int i = 0; i < q; i++) { int queriesItem = scanner.nextInt(); scanner.skip("(\r\n|[\n\r\u2028\u2029\u0085])?"); queries[i] = queriesItem; } int[] result = circularArrayRotation(a, k, queries); for (int i = 0; i < result.length; i++) { bufferedWriter.write(String.valueOf(result[i])); if (i != result.length - 1) { bufferedWriter.write("\n"); } } bufferedWriter.newLine(); bufferedWriter.close(); scanner.close(); }
}
logical_beast + 0 comments
hi @1634810124_coe3b you have not used the k variable and also you have wrote a[0]=a[last]; do not assign like this ..because you are using a[0] in the loop afterwards. Tip-->Assign the values in a[0] using temporary variable. since you are using 2 for loops and probably you need one while loop or nested for loop for this kind of approach ...so you may face timeout in some testcases. good luck
toygartanyel + 0 comments
It's for C (''Godspeed!")
#include <stdio.h> int main() { int n, k, m, mx; scanf("%d %d %d", &n, &k, &m); int nums[n]; for (int i = 0; i < n; i++) scanf("%d", &nums[i]); while (m--) { scanf("%d", &mx); printf("%d\n", nums[(n + mx - (k % n)) % n]); } return 0; }
michaelteter + 0 comments
Wow. The core of the solution is two lines in Ruby (or one line if you don't mind calling a.length three times).
len = a.length queries.collect { |i| a[((len - k) % len + i) % len] }
It may be possible to simplify my expression, but I wasn't able to see how to reduce it.
GizemN + 20 comments
Here is another Java solution
public static void main(String[] args) { Scanner in = new Scanner(System.in); int n, k ,q; n = in.nextInt(); k = in.nextInt(); q = in.nextInt(); int[] arr = new int[n]; for(int i=0; i<n; i++) { arr[(i+k)%n] = in.nextInt(); } for(int i=0; i<q; i++) { System.out.println(arr[in.nextInt()]); } }
ucm_jwu + 1 comment
can i just say...
your 'arr[(i+k)%n]' is GENIUS!
mad respect. thank you for thisss.
lhademmor + 1 comment
It essentially does the 'rotations' before even assigning the array values in the first place, then uses modular arithmetics:
1) First, (i+k) ensures that each array value is 'moved' k positions to the right. Obviously, this means that towards the end of the array, values will exceed the array size, i+k > n-1, which we cannot allow to happen.
2) Then, by using %n ('mod n'), we ensure that whenever (i+k) exceeds the max array position (n-1), we loop back to position a[0] and continue from there. Essentially this works in the same was as a clock 'starts over' when the hands of the clock reach 12.
Modulo is a tricky mathematical concept, but once you understand how it's used here I think you will agree that this solution is absolutely BRILLIANT. If you don't know what modulo is, there's a pretty good introduction here:
amateur_rb + 1 comment[deleted] shauntrick + 0 comments
Very creative solution ! I did it with deque but this is awesome ,It helped me to learn a lot !
AshishSinha5 + 0 comments
one of most creative solution i have ever seen for this type of problem......a complete genius
shaikhriyaz434 + 1 comment
you are a genius brother....
georgecartridge + 1 comment
Could you possibly explain, or point to another resource, where I can better understand what is going on with
(i+k)%n
namit2saxena + 0 comments
That's a genius solution. wow! How are you guys able to think this far? What books do you guys follow?
JetFault + 1 comment
JS Math solution
function processData(input) { var lines = input.split('\n'); var arr = lines[1].split(' '), rot = lines[0].split(' ')[1], n = arr.length; for (var i = 2; i < lines.length; i++) { var m = lines[i], oldM = (n + (m - (rot % n))) % n console.log(arr[oldM]); } }
aseefm25 + 2 comments
can anyone please tell that what is teminated due to timeout
#include <math.h> #include <stdio.h> #include <string.h> #include <stdlib.h> #include <assert.h> #include <limits.h> #include <stdbool.h> int main(){ int n,k,q,temp,i,j,a[100000],b[500]; scanf("%d %d %d",&n,&k,&q); for(i=0;i<n;i++){ scanf("%d",&a[i]); } for(j=0;j<k;j++) { temp=a[n-1]; for(i=n-1;i>0;i--) a[i]=a[i-1]; a[0]=temp; } for(i=0;i<q;i++) scanf("%d",&b[i]); for(i=0;i<q;i++) printf("%d\n",a[b[i]]); return 0; }
rupakpanigrahy88 + 0 comments
Looks cool but its not optimized.
m is the iterator loop index, hence k%n will be calculated everytime (which is supposed to be a constant)!! Everything else is fine.
Though its O(n) logic, but not optimized.
hvogue + 3 comments
We don't need to actually rotate the array. We can just use basic math to pull the entry from the original array:
int shift = n - (k % n); int index = (shift + m ) % n;
D4RIO + 1 comment vasilij_kolomie1 + 1 comment
really better Python:
def circularArrayRotation(a, k, queries):
a = a[-k:] + a[:-k]
return [a[i] for i in queries]
nishantmaharishi + 2 comments
static int[] circularArrayRotation(int[] a, int k, int[] queries) { int arr[] = new int[a.length]; for(int i=0 ; i<a.length ; i++) arr[(i+k)%a.length] = a[i]; for(int i=0 ; i<queries.length ; i++) queries[i] = arr[queries[i]]; return queries; }
rafal_niedziela1 + 0 comments
(i + k) % a.Length Such ... simple ... I don't know why I don't see it. Short and brilliant. Thank you.
bazzynator + 0 comments
(Python) Perfect timing on this question. I was just reading about deques which comes with the built-in function rotate(). :O)
from collections import deque # Complete the circularArrayRotation function below. def circularArrayRotation(a, k, queries): dq = deque(a, len(a)) dq.rotate(k) return [dq[num] for num in queries]
Sort 1607 Discussions, By:
Please Login in order to post a comment | https://www.hackerrank.com/challenges/circular-array-rotation/forum | CC-MAIN-2019-43 | refinedweb | 6,738 | 65.01 |
It is common practice in O/RMs to delete an entity without actually loading id, just by knowing its id. This saves one SELECT and is great for performance. For example, using Entity Framework Code First:
1: ctx.Entry(new Project { ProjectId = 1 }).State = EntityState.Deleted;
2: ctx.SaveChanges();
This, however will fail if there is a [Required] reference (might be defined in fluent configuration, it doesn’t matter) to another entity (many-to-one, one-to-one)! For example, if our Product entity would have something like this:
1: public class Project
2: {
3: public Int32 ProjectId { get; set; }
4:
5: [Required]
6: public virtual Customer Customer { get; set; }
7:
8: //...
9: }
In this case, the required constraint is treated like a concurrency check and the above query will fail miserably. From conversations with the EF team, I understood this is related with some EF internals, which apparently are difficult to change.
There are three workarounds, though:
The third option would be to have something like:
1: public class Project
2: {
3: public Int32 ProjectId { get; set; }
4:
5: [Required]
6: public virtual Customer Customer { get; set; }
7:
8: [ForeignKey("Customer")]
9: public Int32 CustomerId { get; set; }
10:
11: //...
12: }
It really shouldn’t be required to have this foreign key – that’s what navigation properties are meant to replace – but it is. Let’s hope that some future version of EF will fix this. | http://weblogs.asp.net/ricardoperes/entity-framework-pitfalls-deleting-detached-entities-with-required-references | CC-MAIN-2015-32 | refinedweb | 236 | 61.16 |
Unfortunately I did not find a way (and I do not know if there is one) to really publish the posts without the user having to interact, now he has to log in in the browser and copy some code.
From a high level view the procedure in this version is as follows:
Via the Google Developer Console a new project has to be created and the Blogger API has to be enabled. Information about this step and the general usage of the API can be found here.
Then, to be able to use the API, we need to authenticate via OAuth 2.0. For this, we have to send a request to Google. We can do so by calling a specific URL in the browser, as parameters we specify amongst others the ID of our project, the scope for which we want to use the access etc. The user then logs in in the browser and a authorization code is presented. This code the program then sends via an HTTP request to Google and we now get back a token. With this we can eventually call the API and thus publish posts on Blogger. This procedure is decribed here.
After this high level overview, let us come to concrete implementation: First we create a new project in the Google Developer Console. Then we look for the API Blogger API v3 in the menu APIs and enable it.
Now, in our C# program, we first have to call an URL to do our initial request. The needed URL is. One parameter we pass over is scope, which describes for which application we want to authenticate, for this we send. The next parameter is the redirect URL (redirect_uri), which determines to where the answer is send. When setting this to urn:ietf:wg:oauth:2.0:oob the answer is shown in the opened browser. Via response_type=code we determine to get a code back. As the last parameter we set client_id to the ID of our created project in the Google Developer Console. This is NOT the Project ID which can be found on the mainpage, but to get the client id one has to navigate to APIs & auth - Credentials and then click on (if not already done) Add credentials - OAuth 2.0 client ID. Then we select Other (because we are designing a native application) and click on Create - then we get the client id.
All in all the URL to be called should look like this:
We simply use Process.Start on the URL to start the default browser with it. In this, the user is presented with a login and consent screen. If he clicks accept, a success code is presented. We copy this.
With the code, we can get an access token for using the Blogger API. For this we now have to do a HTTP Post Request. The URL to be called is, as parameters we have to send the obtained code (code), the client id (client_id), the redirect url (redirect_uri, same as before), the grant type (grant_type=authorization_code - describes how we want to authenticate) and the client secret (client_secret). The latter we can see when clicking on the client id under Credentials.
With the method HTTPPost() presented in the linked post this looks as follows:
string Code = "code=" + Code1 + "&";
string ID = "client_id=id&";
string uri = "redirect_uri=urn:ietf:wg:oauth:2.0:oob&";
string grant = "grant_type=authorization_code&";
string secret = "client_secret=secret";
string Code2 = HTTPPost("", Code + ID + uri + grant + secret);
We read out the answer of the server since this contains (if successful) the access token. The answer is given in the JSON format, we use the library Newtonsoft.Json to interpret this. Maybe I will write a post about the library, for now I just refer to this post where it is also used.
Thus we obtain the access token via the following code, where AccessToken is a custom class with the desired attribute:
AccessToken JsonAccToken = (AccessToken)JsonConvert.DeserializeObject(Code2, typeof(AccessToken));With this token we can now use the API to publish posts on Blogger. We use a WebRequest to send the correct POST request to the Blogger server. As the target adress we select.
string StrAccToken = JsonAccToken.access_token;
First we set the correct content type, select the authentication header etc:
var http = (HttpWebRequest)WebRequest.Create(new Uri("" + sid + "/posts/"));
http.Accept = "application/json";
http.ContentType = "application/json";
http.Method = "POST";
http.Headers.Add("Authorization", "Bearer " + token);
sid is the ID of the blog to which we want to publish. Next we describe the post in JSON format, for that we first write it as a string and then convert it.
var vm = new { kind = "blogger#post", blog = new { id = sid }, title = stitle, content = scontent };
var dataString = JsonConvert.SerializeObject(vm);
string parsedContent = dataString;
In the code stitle denotes the title of the post, scontent the content (which is expected in the HTML format).
Eventually we upload this via the WebRequest.
The complete code looks as follows:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Net;
using System.IO;
using Newtonsoft.Json;
namespace Blogger
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
System.Diagnostics.Process.Start("");
}
private string HTTPPost(string url, string postparams)
{
string responseString = "";
// performs the desired http post request for the url and parameters";
request.ContentLength = data.Length;
using (var stream = request.GetRequestStream())
{
stream.Write(data, 0, data.Length);
}
var response = (HttpWebResponse)request.GetResponse();
responseString = new StreamReader(response.GetResponseStream()).ReadToEnd();
return responseString;
}
private void Form1_Click(object sender, EventArgs e)
{
string Code = "code=" + textBox1.Text + "&";
string ID = "client_id=client-id&";
string uri = "redirect_uri=urn:ietf:wg:oauth:2.0:oob&";
string grant = "grant_type=authorization_code&";
string secret = "client_secret=secret";
string Code2 = HTTPPost("", Code + ID + uri + grant + secret);
AccessToken JsonAccToken = (AccessToken)JsonConvert.DeserializeObject(Code2, typeof(AccessToken));
string StrAccToken = JsonAccToken.access_token;
JSONPublish(BlogID, "Testpost", "This is a <b>Test</b>.", StrAccToken);
}
private void JSONPublish(string sid, string stitle, string scontent, string token)
{
var http = (HttpWebRequest)WebRequest.Create(new Uri("" + sid + "/posts/"));
http.Accept = "application/json";
http.ContentType = "application/json";
http.Method = "POST";
http.Headers.Add("Authorization", "Bearer " + token);
var vm = new { kind = "blogger#post", blog = new { id = sid }, title = stitle, content = scontent };
var dataString = JsonConvert.SerializeObject(vm);
string parsedContent = dataString;
Byte[] bytes = Encoding.UTF8.GetBytes(parsedContent);
Stream newStream = http.GetRequestStream();
newStream.Write(bytes, 0, bytes.Length);
newStream.Close();
var response = http.GetResponse();
var stream = response.GetResponseStream();
var sr = new StreamReader(stream);
var content = sr.ReadToEnd();
}
public class AccessToken
{
[JsonProperty(PropertyName = "access_token")]
public string access_token { get; set; }
}
}
} | http://csharp-tricks-en.blogspot.de/2015/08/ | CC-MAIN-2018-09 | refinedweb | 1,110 | 51.44 |
Created on 2003-07-09 18:36 by mdoudoroff, last changed 2005-01-15 20:48 by facundobatista. This issue is now closed.
Running the following code under Linux will result in a
crash on the 508th thread started. The error is
OSError: [Errno 24] Too many open files
The nature of the bug seems to be that Python isn't
closing filedescriptors cleanly when running a thread.
---------------------------------------
import os
from threading import Thread
class Crash(Thread):
def run(self):
a = os.popen4('ls')
b = a[1].read()
# uncommenting these lines fixes the problem
# but this isn't really documented as far as
# we can tell...
# a[0].close()
# a[1].close()
for i in range(1000):
t = Crash()
t.start()
while t.isAlive():
pass
print i
---------------------------------------
The same code without threads (Crash as a plain class)
doesn't crash, so the descriptor must be getting taken
care of when the run() method is exited.
import os
class Crash:
def run(self):
a = os.popen4('ls')
b = a[1].read()
for i in range(1000):
t = Crash()
t.run()
print i
Logged In: YES
user_id=33168
I can't duplicate this on Redhat 9. What OS, what version
of glibc and what kernel are you using? Does it always
crash on the 508th iteration?
I tested with both 2.2.3 and 2.3b2 from CVS without
problems. I even used ulimit to set my open files to 10.
Can you try the patch in bug #761888 to see if that helps?
Logged In: YES
user_id=139865
Duplicated with Python 2.3 on Red Hat 7.3 using
glibc-2.2.5-43. Popen3.{poll,wait} are written under the
incorrect assumption that waitpid can monitor any process in
the same process group, when it only works for immediate
children. _active.remove is never called, so Popen3 objects
are never destroyed and the associated file descriptors are
not returned to the operating system.
A general solution for Popen[34] is not obvious to me. With
patch #816059, popen2.popen[234] plugs the _active leak,
which in turn returns the file descriptors to the operating
system when the file objects that popen2.popen[234] return
go out of scope.
Logged
Logged In: YES
user_id=752496
Works fine to me:
Python 2.3.4 (#1, Oct 26 2004, 16:42:40)
[GCC 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)] on linux2
with glibc-2.3.4-2 | http://bugs.python.org/issue768649 | crawl-003 | refinedweb | 409 | 86.6 |
)v4.
Stateid: A stateid is a 128-bit quantity returned by a server that
uniquely identifies.; |
| | |
| | Describes LOCK lengths. |
| | |
|. |
| | |
| nfs_lease4 | typedef uint32_t nfs_lease4; |
| | |
| | Duration of a lease in seconds. |
| | |
|. open_to_lock_owner4.. Section 7.
4.1.. if the file system in
whole has been destroyed, or if the file system has simply been
removed from the server's namespace the operation mechanism to construct a set of operations
like:
RENAME A B
LOOKUP B
GETFH
Note that the COMPOUND procedure does not provide atomicity. This
example only reduces the overhead of recovering from an expired
filehandle.
5..0.0 Section 11 for further
discussion.
Named attributes are accessed by theers; however,. either should be an accurate time or should not be
supported by the server. At times this will be difficult for
clients, but a client is better positioned to decide whether and how
to fabricate or construct an attribute or whether to do without the
attribute.
5.3. might be the target of delegations. However, since
granting of delegations is at the server's discretion, a server need
not support delegations on named attributes..0,:.-file system attributes are:
supported_attrs,, and time_delta
o, and mounted_on_fileid
For quota_avail_hard, quota_avail_soft, and quota_used, see their
definitions below for the appropriate classification.
5. such as readdir() [readdir_api], the return results are directory
entries, each with a component name and a fileid. The fileid of the
mount point's directory entry will be different from the fileid that
the stat() .0 nothing mounted on top...
5.8.2.25.
may exist server-side rules as to which other files or directories.
5.8.2.26.".
5.8.2.27., [read_api], [readdir_api],
[write_api].. either
ownership or acls.. ASCII-
encoded.0.0;
All four bit types a system |
| | | ALARM (system |
| | | dependent) when any |
| | | access attempt is |
| | | made to a file or |
| | | directory for the |
| | | access methods |
| | | specified in |
| | | acemask4. |
+------------------------------+--------------+---------------------+
The "Abbreviation" column denotes how the types will be referred to
throughout the rest of this section.
6.2.1.2..
6.2.1.3... attributes
Discussion:
Permission to execute a file.
Servers SHOULD allow a user the ability to read the data of the
file when only the ACE4_EXECUTE access mask bit is set.
OPEN
REMOVE
RENAME
LINK
CREATE
Discussion:
Permission to traverse/search a directory.
ACE4_DELETE_CHILD
Operation(s) affected:
REMOVE
RENAME
Discussion:
Permission to delete a file or directory within a directory.
See Section 6.2.1.3.2 for information on how, and_DELETE
Operation(s) affected:
REMOVE
Discussion:
Permission to delete the file or directory. See
Section 6.2.1.3.2 for information on ACE4_DELETE and
ACE4_DELETE_CHILD interact.
ACE4_READ_ACL
Operation(s) affected:
GETATTR of acl.
6.2.1.3.2. ACE4_DELETE versus the.
6.2.1.4..
6.2.1.4.1. Discussion of Flag Bits
ACE4_FILE_INHERIT_ACE
Any non-directory file in any subdirectory
notes
Section 6.2.1.5.
6.2.1.5.
understand. |
+---------------+---------------------------------------------------+
Table 5: Special Identifiers.. types AUDIT and
ALARM. As such, it is desirable to leave these ACEs unmodified when
modifying the ACL attributes.
Also note that the requirement may be met by discarding the acl Section 6.4.3).
6.4.1.2. Setting ACL and Not mode
When setting the acl and not setting the mode attribute, Section 6.3
Section 6.3.2 to the ACL attribute.
6.4.3..., i. This makes it simpler to modify the
effective permissions on the directory without modifying the ACE that
is to be inherited to the new directory's children.
7. NFS Server Namespace
7 sends a
string that identifies an object in the exported namespace, and the
server returns the root filehandle for it. The MOUNT protocol
supports an EXPORTS procedure that will enumerate the server's
exports.
7.2. Browsing Exports
The NFSv42 and NFSv3
protocols. The client expects all LOOKUP operations to remain within
a single-server file system. For example, the device attribute will
not change. This prevents a client from taking namespace paths that
span exports..:
/ (placeholder.
7.8. Security Policy and Namespace Presentation
Because NFSv4 clients possess the ability to change the security
mechanisms used, after determining what is allowed, by using SECINFO:
/ (placeholder alternative locations, will result in an error,
NFS4ERR_MOVED. Note that if the server ever returns the error
NFS4ERR_MOVED, it MUST support the fs_locations attribute. simply never existed.. fs_locations attribute, the following
attributes SHOULD be available on absent file systems. In the case
of RECOMMENDED attributes, they should be available at least to the
same degree that they are available on present file systems.mask.:
o If the attribute set requested includes fs_locations, then the
fetching of attributes proceeds normally, and no NFS4ERR_MOVED
indication is returned even when the rdattr_error attribute is
requested.
o If the attribute set requested does not include fs_locations, then
if the rdattr_error attribute is requested, each directory entry
for the root of an absent file system will report NFS4ERR_MOVED as
the value of the rdattr_error attribute.
o If the attribute set requested does not include either of the
attributes fs_locations or rdattr_error, then the occurrence of
the root of an absent file system within the directory will result
in the READDIR failing with an NFS4ERR_MOVED error.
o...
8.6. issuing lookup caching. Clients should
periodically purge this data for referral points in order to detect
changes in location information..
The examples given in the sections below are somewhat artificial in
that an actual client will not typically do a multi-component lookup
but will have cached information regarding the upper levels of the
name hierarchy. However, these example are chosen to make the
required behavior clear and easy to put within the scope of a small
number of requests, without getting unduly into details of how
specific clients might choose to cache things..
o PUTROOTFH
o LOOKUP "this"
o LOOKUP "is"
o LOOKUP "the"
o LOOKUP "path"
o GETFH
o GETATTR(fsid, fileid, size, time_modify)
Under the given circumstances, the following will be the result:
o PUTROOTFH --> NFS_OK. The current fh is now LOOKUP "path" --> NFS_OK. The current fh is for /this/is/the/path
and is within a new, absent file system, but ... the client will
never see the value of that fh.
o processing of the COMPOUND.
Given the failure of the GETFH, the client has the job of determining
the root of the absent file system and where to find that file
system, i.e., the server and path relative to that server's root fh.
Note here that in this example, the client did not obtain filehandles
and attribute information (e.g., fsid) for the intermediate
directories, so"). The fs_locations attribute also
gives the client the actual location of the absent file system so
that the referral can proceed. The server gives the client the bare
minimum of information about the absent file system so that there
will be very little scope for problems of conflict between
information sent by the referring server and information of the file
system's home. No filehandles and very few attributes are present on
the referring server, and the client can treat those it receives as:
o PUTROOTFH
o LOOKUP "this"
o LOOKUP "is":
o PUTROOTFH
o LOOKUP "this"
o LOOKUP "is"):
o PUTROOTFH
o LOOKUP "this"
o LOOKUP "is":
o rdattr_error (value: NFS_OK)
o fs_locations
o mounted_on_fileid (value: unique fileid within referring file
system)
o.
To support Win32 share reservations, it is necessary to atomically
OPEN or CREATE files and apply the appropriate locks in the same
operation.. and an opaque owner
string. For each client, the set of distinct owner values used with
that client constitutes the set of forv4 client, it should contain additional
information to distinguish the client from other user-level
clients running on the same host, such as a universally unique
identifier (UUID).
o:
Anonymous State.
READ Bypass Stateid: requests..
9.1.4.4..
Stateids associated with byte-range locks are an exception. They
remain valid even if a LOCKU frees all remaining locks, so long as
the open file with which they are associated remains open.:.
o.
o..
themselves, such as open modes and byte ranges.
9.1.4.5. Stateid Use for I/O Operations
Clients performing Input/Output ..
9.1.4.6. Stateid Use for SETATTR Operations.
9.1.5. Lock-Owner
When requesting a lock, the client must present to the server the
client ID and an identifier for the owner of the requested lock.
These two fields comprise the lock-owner and are defined as follows:
o A client ID.
byte-range lock or share reservation -- the anonymous stateid is
used. Regardless of whether an anonymous stateid. Section 9.6.2 below, the server may employ certain
optimizations during recovery that work effectively only when the
client's behavior during lock recovery is similar to the client's
locking behavior prior to server failure. requests. However, clients are not
required to perform this courtesy, and servers must not depend on
them doing so. Also, clients must be prepared for the possibility
that this final locking request will be accepted. ID (NFS4ERR_STALE_CLIENTID error) will not be
valid, hence preventing spurious renewals.
This approach allows for low-overhead lease renewal, which scales
well. In the typical case, no extra RPCs.., that are associated with the old client ID that was
derived from the old verifier.
Note that the verifier must have the same uniqueness properties of
the verifier for the COMMIT operation. ID invalidated by reboot or restart. When either of these is
received, the client must establish a new client ID (see
Section 9.1.1) either CLAIM_PREVIOUS or CLAIM_DELEGATE_PREV)..
9.6.3. Network Partitions and Recovery
If the duration of a network partition is greater than the lease
period provided by the server, the server will have not received a
lease renewal from the client. If this occurs, the server may cancel
the lease and.:
o In the case of a courtesy lock that is not a delegation, it MUST
free the courtesy lock and grant the new request.
o In the case of a lock or an I/O request that conflicts with a
delegation that is being held as a courtesy lock, the server MAY
delay resolution of the request but MUST NOT reject the request
and MUST free the delegation and grant the new request eventually.
o:
o Client's responsibility: A client MUST NOT attempt to reclaim any
locks that it did not hold at the end of its most recent
successfully established client lease.
o:
1. Client A acquires a lock.
2. Client A and the server experience mutual network partition, such
that client A is unable to renew its lease.
3. Client A's lease expires, so the server releases the lock.
4. Client B acquires a lock that would have conflicted with that of
client A.
5. Client B releases the lock.
6. The server reboots.
7. The network partition between client A and the server heals.
8. Client A issues a RENEW operation and gets back..
10.:
o the client's id string.
o a boolean that indicates if the client's lease expired or if there
was administrative intervention (see Section 9.8) to revoke a
byte-range lock, share reservation, or delegation.
o or
NFS4ERR_RECLAIM_BAD. or NFS4ERR_RECLAIM_BAD.
Regardless of the level and approach to record keeping, the server
MUST implement one of the following strategies (which apply to
reclaims of share reservations, byte-range locks, and delegations):
1. Reject all reclaims with NFS4ERR_NO_GRACE. This is extremely
harsh but is necessary if the server does not want to record lock
state in stable storage.
2..
10. The server reboots a second time.
11. The network partition between client A and the server heals.
12. Client A issues a RENEW operation and gets back an
NFS4ERR_STALE_CLIENTID.
13..,; if there is no relevant I/O pending, a
zero-length read specifying the stateid associated with the lock in
question can be synthesized to trigger the renewal. were revoked by the server, and the
client must notify the owner.
9.9. Share Reservations
A share reservation is a mechanism to control access to a file. It
is a separate and independent mechanism from byte-range locking.
When a client opens a file, it issues;), issuing a CLOSE.. of open-owner, that is
not a retransmission.
o The time that an open-owner that the CLOSE is indeed a
retransmission and avoid error logging in most cases.
9.11. .v4 protocol.
10. Client-Side Caching
Client-side caching of data, file attributes, and filenames). be acted on until the
recall is complete. The recall is considered complete when the
client returns the delegation or the server times out protocol:
o Upon reclaim, a client claiming.. file system. However, they may not work with
the NFSv4..
10.3.4. protocol, there is now the possibility of having.
o If GETATTR directed to the two filehandles returns different
values for the fileid attribute, then they are distinct objects.
o Otherwise, they are the same object. read access to others. It MUST, however, continue
to send all requests to open a file for writing to the server.) open Section 9.9... to the server data. One way that this may be
accomplished is by tracking the expiration time of credentials and
flushing data well in advance of their expiration.
10.4.2. Open Delegation and File Locks
When a client holds an OPEN_DELEGATE_WRITE.
10.4.3.:
o The value of the change attribute will be obtained from the server
and cached. Let this value be represented by c.
o The client will create a value greater than c that will be used
for communicating that would make sure that
breaks down.
It should be noted that the server is under no obligation to use
CB_GETATTR; therefore, the server MAY simply recall the delegation to
avoid its use.
10.4.4. Recall of Open Delegation
The following events necessitate byte-range cannot rely on operations,
except for RENEW, that take a stateid, to renew delegation leases
across callback path failures. The client that wants to keep
delegations in force across callback path failures must use RENEW
to do so.
10.4 Section 10.5.1 for additional
details. on which an application depends may be
violated. Depending on how errors are typically treated for the
client operating environment, further levels of notification,
including logging, console messages, and GUI pop-ups, may be
appropriate.
10.5.1. other cases,.
The.rency. a local file or is being accessed an OPEN_DELEGATE.
Note that in this section, the key words "MUST", "SHOULD", and "MAY"
retain their normal meanings. However, in deriving this
specification from implementation patterns, we document [RFC3530], resulting in a situation in which
those involved in the implementation may no longer be involved in.
12.2. Limitations on Internationalization-Related Processing in the
NFSv4 Context
There are a number of noteworthy circumstances that limit the degree
to which internationalization-related processing may may even vary between processes on the same client
system.] 12.5)..
12.5. Normalization
The client and server operating environments may differ in their
policies and operational methods with respect to character
normalization (see [UNICODE] filenames filename because it does not
conform to a particular normalization form, as this may deny access
to clients that use a different normalization form.
12.6..
These are as follows:
o Server names as they appear in the fs_locations attribute. Note
that for most purposes, such server names will only be sent by the
server to the client. The exception is the use of the
fs_locations attribute
Section 5.9.
12.7. Universal Multiple-Octet Coded
Character Set (UCS) character.
Requirements for server handling of component names that are not
valid UTF-8, when a server does not return NFS4ERR_INVAL in response
to receiving them, are described in Section 12.8.. filename in an unexpected fashion,
rendering the file inaccessible to the application or client that
created or renamed the file and to others expecting the original
filename.
filename, normalization may corrupt the filename with respect to
that character set, rendering the file inaccessible to the
application that created it and others expecting the original
filename. , |
| | |
| | |
| LOOKUP | |
| | |
| LOOKUPP | NFS4ERR_ACCESS, NFS4ERR_BADHANDLE, |
| | NFS4ERR_DELAY,, NFS4ERR.
16.15.5..16. Operation 18: OPEN - Open a Regular File
16.16.1. SYNOPSIS
(cfh), seqid, share_access, share_deny, owner, openhow, claim ->
(cfh), stateid, cinfo, rflags, attrset, delegation
16.16.2. ARGUMENT
/*
*;
};
16.16.3.;
};
16.16.4. Warning to Client Implementers.
16.16.5.4, GUARDED4, or EXCLUSIVE4.44. file systems that do not
provide a mechanism for the storage of arbitrary file attributes, the
server may use one or more elements of the object metadata file system environment, the expected storage
location for the verifier on creation is the metadata (timestamps) of
the object. For this reason, an exclusive object create may not
include initial attributes because the server would have nowhere to
store the verifier.. occurs for the
open_owner4 within the lease period. In this case, the OPEN state is
canceled and disposal of the open_owner4 can occur..;
};
16.24.4. DESCRIPTION
The READDIR operation retrieves a variable number of entries from a
file system directory and for each entry returns attributes that were
requested. file system. the SETCLIENTID call, then the server returns.
o The client's reuse discussions of record processingv4.
One suggested way to use<>;
};
17.2.4. Section 15.2..
17.2.5..
David Noveck (editor)
Dell
300 Innovative Way
Nashua, NH 03062
United States
Phone: +1 781 572 8038
Previous: RFC 7529 - Non-Gregorian Recurrence Rules in the Internet Calendaring and Scheduling Core Object Specification (iCalendar)
Next: RFC 7531 - Network File System (NFS) Version 4 External Data Representation Standard (XDR) Description | http://www.faqs.org/rfcs/rfc7530.html | CC-MAIN-2017-17 | refinedweb | 2,945 | 55.24 |
Instructions for Form 1040NR
- Emmeline Heath
- 2 years ago
- Views:
Transcription
1 2010 Instructions for Form 1040NR U.S. Nonresident Alien Income Tax Return Department of the Treasury Internal Revenue Service Section references are to the Internal $13,170. The credit is now refundable Divorced or separated parents. A Revenue Code unless otherwise noted. and is claimed on line 66. See Form custodial parent who has revoked his or her previous release of a claim to a General Instructions Alternative minimum tax (AMT) child s exemption must include a copy exemption amount increased. The of the revocation with his or her return. What s New for 2010 AMT exemption amount has increased See page 9. to $47,450 ($72,450 if a qualifying Due date of return. If you generally Domestic production activities widow(er); $36,225 if married filing must file Form 1040NR by April 15, the income. The percentage rate for 2010 separately). due date for your 2010 Form 1040NR is increases to 9%. However, the April 18, The due date is April 18, Repayment of first-time homebuyer deduction is reduced if you have instead of April 15, because of the credit. If you claimed the first-time oil-related qualified production activities Emancipation Day holiday in the District homebuyer credit for a home you income. See page 20. of Columbia even if you do not live in bought in 2008, you generally must begin repaying it on your 2010 return. Decedents who died in For the District of Columbia. See When To In addition, you generally must repay special rules that may apply to File on page 5. any credit you claimed for 2008 or 2009 decedents who died in 2010, including Limits on personal exemptions and if you sold your home in 2010 or the rules for property acquired from a overall itemized deductions ended. home stopped being your main home in decedent who died in 2010, see new For 2010, you will no longer lose part of See the instructions for line 58 on Pub your deduction for personal exemptions page 27. and itemized deductions, regardless of Expired tax benefits. The following Roth IRAs and designated Roth the amount of your adjusted gross tax benefits have expired and are not accounts. Half of any income that income (AGI). available for results from a rollover or conversion to The exclusion from income of up to Self-employment tax. You must pay a Roth IRA from another retirement $2,400 in unemployment self-employment tax on your plan in 2010 is included in income in compensation. All unemployment self-employment income if an 2011, and the other half in 2012, unless compensation you received in 2010 international social security agreement you elect to include all of it in The generally is taxable. in effect between your country of tax same rule applies to a rollover after residence and the United States September 27, 2010, to a designated Government retiree credit. provides that you are covered under Roth account in the same plan. See Alternative motor vehicle credit for the U.S. social security system. Enter Form qualified hybrid motor vehicles bought the tax on line 54. Deduct one-half of after 2009, except cars and light trucks You now can make a qualified your self-employment tax on line 27. with a gross vehicle weight rating of rollover contribution to a Roth IRA Attach Schedule SE (Form 1040). See 8,500 pounds or less. regardless of the amount of your the Instructions for Schedule SE (Form modified AGI. Extra $3,000 IRA deduction for 1040) for additional information. These employees of bankrupt companies. instructions and lines in the form were Standard mileage rates. The 2010 rate for business use of your vehicle is Credit to holders of clean renewable added to clarify how nonresident aliens reduced to 50 cents a mile. The 2010 energy bonds issued after should report and pay their rate for use of your vehicle to move is Decreased estimated tax payments self-employment tax. reduced to 16 1 /2 cents a mile. The 2010 for certain small businesses. Dividend equivalent payments. All rate for use of your vehicle to do dividend equivalent payments received Disclosure of information by paid volunteer work for certain charitable after September 13, 2010, are U.S. preparers. If you use a paid preparer organizations is still 14 cents a mile. source dividends. to file your return, the preparer is Personal casualty and theft loss limit allowed, in some cases, to disclose Self-employed health insurance reduced. Each personal casualty or certain information from your return, deduction. Effective March 30, 2010, theft loss is limited to the excess of the such as your name and address, to if you were self-employed and paid for loss over $100 (instead of the $500 certain other parties, such as the health insurance, you may be able to limit that applied for 2009). See Form preparer s professional liability include in your deduction on line 29 any insurance company or the publisher of premiums you paid to cover your child Corrosive drywall losses. If you paid a tax newsletter. For details, see who was under age 27 at the end of for repairs to your personal residence Revenue Rulings and , even if the child was not your or household appliances because of You can find Revenue Ruling dependent. For 2010, the line 29 corrosive drywall that was installed on page 309 of Internal Revenue deduction is also allowed on Schedule between 2001 and 2008, you may be Bulletin at SE (Form 1040). See the instructions able to deduct those amounts paid on _IRB/ar08.html. You can find for line 29 on page 17. line 8 of Schedule A of Form 1040NR. Revenue Ruling on page 312 of Adoption credit. The maximum See Form 4684 and its instructions for Internal Revenue Bulletin at adoption credit has increased to details. Cat. No V
2 Items to Note Service. You can download them at 3. You have proof that the letter IRS.gov. Also see Quick and Easy was received by the USCIS. Form 1040NR-EZ. You may be able Access to Tax Help and Tax Products to use Form 1040NR-EZ if your only on page 43 for other ways to get them Keep a copy of the letter and the income from U.S. sources is wages, (as well as information on receiving IRS proof that the letter was received. salaries, tips, refunds of state and local assistance in completing the forms). income taxes, and scholarship or Until you have proof your letter fellowship grants. For more details, see Resident Alien or! was received, you remain a Form 1040NR-EZ and its instructions. resident for tax purposes even if Special rules for former U.S. citizens Nonresident Alien the USCIS would not recognize the and former U.S. long-term residents. If you are not a citizen of the United validity of your green card because it is If you renounced your U.S. citizenship States, specific rules apply to determine more than ten years old or because you or terminated your long-term resident if you are a resident alien or a have been absent from the United status, you may be subject to special nonresident alien for tax purposes. States for a period of time. rules. Different rules apply based on Generally, you are considered a the date you renounced your citizenship resident alien if you meet either the For more details, including special or terminated your long-term residency green card test or the substantial rules that apply if you give up your green card after holding it in at least 8 of the prior 15 years, see Pub in the United States. See Special Rules for Former U.S. Citizens and Former presence test for (These tests are explained on this page and on page U.S. Long-Term Residents 3.) Even if you do not meet either of (Expatriates) on page 7. these tests, you may be able to choose Substantial Presence Test Social security or Medicare taxes You are considered a U.S. resident if withheld in error. If you are a foreign of See First-Year Choice in you meet the substantial presence test student on an F1, J1, M, or Q visa, and chapter 1 of Pub. 519 for details. for You meet this test if you were social security or Medicare taxes were Generally, you are considered a physically present in the United States withheld on your wages in error, you nonresident alien for the year if you are for at least: to be treated as a U.S. resident for part may want to file Form 843, Claim for not a U.S. resident under either of days during 2010, and Refund and Request for Abatement, to these tests. However, even if you are a days during the period 2010, request a refund of these taxes. For U.S. resident under one of these tests, 2009, and 2008, using the following more information, see Refund of Taxes you still may be considered a chart. Withheld in Error in chapter 8 of Pub. nonresident alien if you qualify as a 519, U.S. Tax Guide for Aliens. resident of a treaty country within the (a) (b) (c) (d) Other reporting requirements. You meaning of an income tax treaty Year Days of Multiplier Testing also may have to file other forms, between the United States and that physical days including the following: country. You can download the presence (multiply Form 8833, Treaty-Based Return complete text of most U.S. tax treaties (b) times Position Disclosure Under Section 6114 at IRS.gov. Enter tax treaties in the (c)) or 7701(b). search box at the top of the page. Form 8840, Closer Connection Technical explanations for many of Exception Statement for Aliens. those treaties are also available at that Form 8843, Statement for Exempt site. Individuals and Individuals With a For more details on resident and Medical Condition. nonresident status, the tests for Total testing days (add For more information, and to see if you residence, and the exceptions to them, column (d))... must file one of these forms, see see Pub chapter 1 of Pub Green Card Test Generally, you are treated as Additional Information You are a resident for tax purposes if present in the United States on any day you were a lawful permanent resident that you are physically present in the If you need more information, our free (immigrant) of the United States at any country at any time during the day. publications may help you. Pub. 519 time during 2010 and you took no steps However, there are exceptions to this will be the most important, but the to be treated as a resident of a foreign rule. In general, do not count the following publications also may help. country under an income tax treaty. following as days of presence in the (However, see Dual-Status Taxpayers United States for the substantial Pub. 501 Exemptions, Standard on page 5.) In most cases you are a presence test. Deduction, and Filing lawful permanent resident if the U.S. Days you commute to work in the Information Citizenship and Immigration Services United States from a residence in Pub. 525 Taxable and Nontaxable (USCIS) (or its predecessor Canada or Mexico if you regularly Income organization, INS) has issued you an commute from Canada or Mexico. Pub. 529 Miscellaneous Deductions alien registration card, also known as a Days you are in the United States for Pub. 552 Recordkeeping for Individuals green card. less than 24 hours when you are in Pub. 597 Information on the United If you surrender your green card, transit between two places outside the States Canada Income Tax your status as a resident for tax United States. Treaty Pub. 901 U.S. Tax Treaties purposes will change as of the date you Days you are in the United States as Pub. 910 IRS Guide to Free Tax surrender your green card if all of the a crew member of a foreign vessel. Services (includes a list of all following are true. Days you intend, but are unable, to publications) 1. You mail a letter stating your leave the United States because of a intent to surrender your green card. medical condition that arose while you These free publications and the 2. You send this letter by certified were in the United States. forms and schedules you will need are mail, return receipt requested (or the Days you are an exempt individual available from the Internal Revenue foreign equivalent). (defined on page 3). -2- Instructions for Form 1040NR (2010)
3 You may need to file Form You have substantially complied Write-in taxes or recapture taxes,! to exclude days of presence in with your visa requirements. including uncollected social security the United States for the and Medicare or RRTA tax on tips you substantial presence test. For more You must file a fully completed Form reported to your employer or on information on the requirements, see 8843 with the IRS to claim the closer group-term life insurance and additional Form 8843 in chapter 1 of Pub connection exception. See Form 8843 taxes on health savings accounts. See in chapter 1 of Pub Exempt individual. For these the instructions for line 59 on page 27. purposes, an exempt individual is You cannot use the closer You had net earnings from generally an individual who is a:! connection exception to remain self-employment of at least $400 and Foreign government-related a nonresident alien indefinitely. you are a resident of a country with individual; You must have in mind an estimated whom the United States has an Teacher or trainee who is temporarily departure date from the United States international social security agreement. present under a J or Q visa; in the near future. See the instructions for line 54 on page Student who is temporarily present 26. under an F, J, M, or Q visa; or Who Must File Exceptions. You do not need to Professional athlete who is File Form 1040NR if any of the file Form 1040NR if: temporarily in the United States to following four conditions applies to you. 1. Your only U.S. trade or business compete in a charitable sports event. 1. You were a nonresident alien was the performance of personal Note. Alien individuals with Q visas engaged in a trade or business in the services; and are treated as either students, teachers, United States during You must a. Your wages were less than or trainees and, as such, are exempt file even if: $3,650; and individuals for purposes of the a. You have no income from a trade b. You have no other need to file a substantial presence test if they or business conducted in the United return to claim a refund of overwithheld otherwise qualify. Q visas are issued States, taxes, to satisfy additional withholding to aliens participating in certain b. You have no U.S. source income, at source, or to claim income exempt or international cultural exchange or partly exempt by treaty; or programs. c. Your income is exempt from U.S. 2. You were a nonresident alien See Pub. 519 for more details tax under a tax treaty or any section of student, teacher, or trainee who was regarding days of presence in the the Internal Revenue Code. temporarily present in the United States United States for the substantial However, if you have no gross under an F, J, M, or Q visa, and presence test. income for 2010, do not complete the you have no income that is subject to schedules for Form 1040NR. Instead, tax under section 871 (that is, the Closer Connection to Foreign attach a list of the kinds of exclusions income items listed on page 1 of Form Country you claim and the amount of each. 1040NR, lines 8 through 21, and on Even though you otherwise would meet 2. You were a nonresident alien not page 4, Schedule NEC, lines 1 through the substantial presence test, you can engaged in a trade or business in the 12). be treated as a nonresident alien if you: United States during 2010 and: 3. You were a partner in a U.S. Were present in the United States for a. You received income from U.S. partnership that was not engaged in a fewer than 183 days during 2010, sources that is reportable on Schedule trade or business in the United States Establish that during 2010 you had a NEC, lines 1 through 12, and during 2010 and your Schedule K-1 tax home in a foreign country, and b. Not all of the U.S. tax that you (Form 1065) includes only income from Establish that during 2010 you had a owe was withheld from that income. U.S. sources that is reportable on closer connection to one foreign Schedule NEC, lines 1 through You represent a deceased country in which you had a tax home person who would have had to file than to the United States unless you Form 1040NR. If the partnership withholds had a closer connection to two foreign 4. You represent an estate or trust! taxes on this income in 2011 countries. that has to file Form 1040NR. and the tax withheld and See Pub. 519 for more information. reported on line 9 of Form 1042-S is Other situations when you must file. more or less than the tax due on the Closer connection exception for You must file a return for 2010 if you income, you will need to file Form foreign students. If you are a foreign owe any special taxes, including any of 1040NR for 2011 to pay the student in the United States, and you the following. underwithheld tax or claim a refund of have met the substantial presence test, Alternative minimum tax. the overwithheld tax. you still may be able to claim you are a Additional tax on a qualified plan, nonresident alien. You must meet both including an individual retirement Even if you do not otherwise of the following requirements. TIP arrangement (IRA), or other tax-favored have to file a return, you should 1. You establish that you do not account. But if you are filing a return file one to get a refund of any intend to reside permanently in the only because you owe this tax, you can federal income tax withheld. You also United States. The facts and file Form 5329 by itself. should file if you are engaged in a U.S. circumstances of your situation are Household employment taxes. But if trade or business and are eligible for considered to determine if you do not you are filing a return only because you any of the following credits. intend to reside permanently in the owe this tax, you can file Schedule H Additional child tax credit. United States. The facts and by itself. Credit for federal tax on fuels. circumstances include the following. Social security and Medicare tax on Adoption credit. a. Whether you have taken any tips you did not report to your employer Refundable credit for prior year steps to change your U.S. immigration or on wages you received from an minimum tax. status to lawful permanent resident. employer who did not withhold these Health coverage tax credit. b. During your stay in the United taxes. Exception for certain children under States, whether you have maintained a Recapture of first-time homebuyer age 19 or full-time students. If your closer connection with a foreign country credit. See the instructions for line 58 child was under age 19 at the end of than with the United States. on page or was a full-time student under Instructions for Form 1040NR (2010) -3-
4 age 24 at the end of 2010, had income Page 1. Enter your name, identifying United States. See Pub. 901 for more only from interest and dividends that number (defined on page 8), and all information on tax treaties. are effectively connected with a U.S. address information requested at the Page 2 lines 53 and 59. Enter on trade or business, and that income top of page 1. If your income is not line 53 the tax on income not effectively totaled less than $9,500, you may be exempt from tax by treaty, leave the connected with a U.S. trade or able to elect to report your child s rest of page 1 blank. If your income is business from page 4, Schedule NEC, income on your return. To do so, use exempt from tax by treaty, enter the line 15. Enter your total income tax Form If you make this election, exempt income on line 22 and leave liability on line 59. your child does not have to file a return. the rest of page 1 blank. For details, including the conditions for Page 4 Schedule NEC, lines 1a Line 60a. Enter the total amount of children under age 24, see Form through 12. Enter the amounts of U.S. tax withheld from Form(s) A child born on January 1, 1987, is gross income you received from considered to be age 24 at the end of dividends, interest, royalties, pensions, Line 60d. Enter the total amount of Do not use Form 8814 for such a annuities, and other income. If any U.S. tax withheld on income not child. income you received was subject to effectively connected with a U.S. trade Filing a deceased person s return. backup withholding or withholding at or business from Form(s) 1042-S. The personal representative must file source, you must include all gross the return for a deceased person who income of that type that you received. Line 68. Add lines 60a through 67. This was required to file a return for A The amount of each type of income is the total tax you have paid. personal representative can be an should be shown in the column under executor, administrator, or anyone who the appropriate U.S. tax rate, if any, Lines 69 and 70a. Enter the difference is in charge of the deceased person s that applies to that type of income in between line 59 and line 68. This is property. your particular circumstances. your total refund. Filing for an estate or trust. If you If you are entitled to a reduced rate You can have the refund deposited are filing Form 1040NR for a of, or exemption from, withholding on into more than one account. See Lines nonresident alien estate or trust, the income pursuant to a tax treaty, the 70a through 70e Amount refunded to change the form to reflect the appropriate rate of U.S. tax is the same you beginning on page 28 for more provisions of Subchapter J, Chapter 1, as the treaty rate. Use column (d) if the details. of the Internal Revenue Code. You may appropriate tax rate is other than 30%, Line 70e. You may be able to have find it helpful to refer to Form 1041 and 15%, or 10%, including 0%. your refund check mailed to an address its instructions. Example. Mary is a nonresident that is not shown on page 1. See Line If you are filing Form 1040NR alien individual. The only U.S. source 70e on page 29.! for a foreign trust, you may have income she received during the year Signature. You must sign and date to file Form 3520-A, Annual was as follows. your tax return. See Sign Your Return Information Return of Foreign Trust 4 dividend payments. on page 39. With a U.S. Owner, on or before March 12 interest payments. Documentation. You must attach 15, For more information, see the All payments were reported to Mary acceptable proof of the withholding for Instructions for Form 3520-A. on Form(s) 1042-S. On one of the which you are claiming a refund. If you dividend payments, the withholding are claiming a refund of backup Simplified Procedure for agent incorrectly withheld at a rate of withholding tax based on your status as Claiming Certain Refunds 30% (instead of 15%). There were no a nonresident alien, you must attach a You can use this procedure only if you other withholding discrepancies. Mary copy of the Form 1099 that shows the meet all of the following conditions for must report all four dividend payments. income and the amount of backup the tax year. She is not required to report any of the withholding. If you are claiming a refund You were a nonresident alien. interest payments. of U.S. tax withheld at source, you must You were not engaged in a trade or Note. Payments of gross proceeds attach a copy of the Form 1042-S that business in the United States at any from the sale of securities or regulated shows the income and the amount of time. futures contracts are generally exempt U.S. tax withheld. Attach the forms to You had no income that was from U.S. tax. If you received such the left margin of page 1. effectively connected with the conduct payments and they were subjected to Additional Information of a U.S. trade or business. backup withholding, specify the type of Your U.S. income tax liability was payment on line 12 and show the Portfolio interest. If you are claiming fully satisfied through withholding of tax amount in column (d). a refund of U.S. tax withheld from at source. portfolio interest, include a description You are filing Form 1040NR solely to Lines 13 through 15. Complete of the relevant debt obligation, including claim a refund of U.S. tax withheld at these lines as instructed on the form. the name of the issuer, CUSIP number source. Page 5 Schedule OI. You must (if any), interest rate, and the date the Example. John is a nonresident answer all questions. For item L, debt was issued. alien individual. The only U.S. source identify the country, tax treaty article(s) Withholding on distributions. If you income he received during the year was under which you are applying for a are claiming an exemption from dividend income from U.S. stocks. The refund of tax, and the amount of withholding on a distribution from a U.S. dividend income was reported to him on exempt income in the current year. Also corporation with respect to its stock Form(s) 1042-S. On one of the dividend attach Form 8833 if required. because the corporation had insufficient payments, the withholding agent Note. If you are claiming a reduced earnings and profits to support dividend incorrectly withheld at a rate of 30% rate of, or exemption from, tax based treatment, you must attach a statement (instead of 15%). John is eligible to use on a tax treaty, you generally must be a that identifies the distributing the simplified procedure. resident of the particular treaty country corporation and provides the basis for If you meet all of the conditions listed within the meaning of the treaty and the claim. earlier for the tax year, complete Form you cannot have a permanent If you are claiming an exemption 1040NR as follows. establishment or fixed base in the from withholding on a distribution from -4- Instructions for Form 1040NR (2010)
5 a mutual fund or real estate investment trust (REIT) with respect to its stock because the distribution was designated as long-term capital gain or a nondividend distribution, you must attach a statement that identifies the mutual fund or REIT and provides the basis for the claim. If you are claiming an exemption from withholding on a distribution from a U.S. corporation with respect to its stock because, in your particular circumstances, the transaction qualifies as a redemption of stock under section Where To File Individuals. Mail Form 1040NR to: Department of the Treasury Internal Revenue Service Center Austin, TX U.S.A. Estates and trusts. Mail Form 1040NR to: Department of the Treasury Internal Revenue Service Center Cincinnati, OH U.S.A. If you make this election, you! may forfeit the right to claim benefits otherwise available under a U.S. tax treaty. For more information about the benefits that otherwise might be available, see the specific treaty. Dual-Status Taxpayers Note. If you elect to be taxed as a resident alien (discussed on this page), the special instructions and restrictions discussed here do not apply. Private Delivery Services 302, you must attach a statement that Dual-Status Tax Year You can use certain private delivery describes the transaction and presents services designated by the IRS to meet A dual-status year is one in which you the facts necessary to establish that the the timely mailing as timely filing/ change status between nonresident payment was a complete redemption, a paying rule for tax returns and and resident alien. Different U.S. disproportionate redemption, or not payments. These private delivery income tax rules apply to each status. essentially equivalent to a dividend. services include only the following. Most dual-status years are the years DHL Express (DHL): DHL Same Day of arrival or departure. Before you When To File Service. arrive in the United States, you are a Federal Express (FedEx): FedEx Individuals. If you were an employee nonresident alien. After you arrive, you Priority Overnight, FedEx Standard and received wages subject to U.S. may or may not be a resident, Overnight, FedEx 2Day, FedEx income tax withholding, file Form depending on the circumstances. International Priority, and FedEx 1040NR by the 15th day of the 4th International First. If you become a U.S. resident, you month after your tax year ends. A United Parcel Service (UPS): UPS stay a resident until you leave the return for the 2010 calendar year is due Next Day Air, UPS Next Day Air Saver, United States. You may become a by April 18, (The due date is April UPS 2nd Day Air, UPS 2nd Day Air nonresident alien when you leave if you 18, instead of April 15, because of the A.M., UPS Worldwide Express Plus, meet both of the following conditions. Emancipation Day holiday in the District and UPS Worldwide Express. After leaving (or after your last day of of Columbia even if you do not live in The private delivery service can tell you lawful permanent residency if you met the District of Columbia.) If you file how to get written proof of the mailing the green card test) and for the after this date, you may have to pay date. remainder of the calendar year of your interest and penalties. See page 42. departure, you have a closer Private delivery services cannot If you did not receive wages as an connection to a foreign country than to! deliver items to P.O. boxes. You employee subject to U.S. income tax the United States. must use the U.S. Postal withholding, file Form 1040NR by the During the next calendar year you Service to mail any item to an IRS P.O. 15th day of the 6th month after your tax are not a U.S. resident under either the box address. year ends. A return for the 2010 green card test or the substantial calendar year is due by June 15, presence test. Election To Be Taxed as Estates and trusts. If you file for a See Pub. 519 for more information. nonresident alien estate or trust that a Resident Alien has an office in the United States, file You can elect to be taxed as a U.S. What and Where to File for a the return by the 15th day of the 4th resident for the whole year if all of the Dual-Status Year month after the tax year ends. If you file following apply. If you were a U.S. resident on the last for a nonresident alien estate or trust You were married. day of the tax year, file Form that does not have an office in the Your spouse was a U.S. citizen or Enter Dual-Status Return across the United States, file the return by the 15th resident alien on the last day of the tax top and attach a statement showing day of the 6th month after the tax year year. your income for the part of the year you ends. You file a joint return for the year of were a nonresident. You can use Form the election using Form 1040, 1040A, Note. If the due date for filing falls on 1040NR as the statement; enter or 1040EZ. a Saturday, Sunday, or legal holiday, Dual-Status Statement across the top. To make this election, you must attach file by the next business day. Do not sign Form 1040NR. Mail your the statement described in Pub. 519 to return and statement to: Extension of time to file. If you your return. Do not use Form 1040NR. Department of the Treasury cannot file your return by the due date, Your worldwide income for the whole Internal Revenue Service Center you should file Form 4868 to get an year must be included and will be taxed Austin, TX U.S.A. automatic 6-month extension of time to under U.S. tax laws. You must agree to file. You must file Form 4868 by the keep the records, books, and other If you were a nonresident on the last regular due date of the return. information needed to figure the tax. If day of the tax year, file Form 1040NR. you made the election in an earlier Enter Dual-Status Return across the An automatic 6-month extension year, you can file a joint return or top and attach a statement showing! of time to file does not extend separate return for If you file a your income for the part of the year you the time to pay your tax. If you separate return, use Form 1040 or were a U.S. resident. You can use do not pay your tax by the original due Form 1040A. Your worldwide income Form 1040 as the statement; enter date of your return, you will owe for the whole year must be included Dual-Status Statement across the top. interest on the unpaid tax and may owe whether you file a joint or separate Do not sign Form Mail your penalties. See Form return. return and statement to: Instructions for Form 1040NR (2010) -5-
6 Department of the Treasury cannot use the Single Tax Table of line 60 enter Tax from Form Internal Revenue Service Center column or Section A of the Tax 1040NR and the amount. Austin, TX U.S.A. Computation Worksheet. If you are filing Form 1040NR, enter Statements. Any statement you file Deduction for exemptions. As a the tax from the Tax Table, Tax with your return must show your name, dual-status taxpayer, you usually will be Computation Worksheet, Qualified address, and identifying number entitled to your own personal Dividends and Capital Gain Tax (defined on page 8). exemption. Subject to the general rules Worksheet, Schedule D Tax Former U.S. long-term residents are for qualification, you are allowed Worksheet, Schedule J (Form 1040), or required to file Form 8854, Initial and exemptions for your spouse and Form 8615 on Form 1040NR, line 42, Annual Expatriation Statement, with dependents in figuring taxable income and the tax on the noneffectively their dual-status return for the last year for the part of the year you were a connected income on line 53. of U.S. residency. To determine if you resident alien. The amount you can are a former U.S. long-term resident, claim for these exemptions is limited to Credit for taxes paid. You are see Expatriation Tax in chapter 4 of your taxable income (determined allowed a credit against your U.S. Pub without regard to exemptions) for the income tax liability for certain taxes you part of the year you were a resident paid or are considered to have paid or Income Subject to Tax for alien. You cannot use exemptions that were withheld from your income. Dual-Status Year (other than your own) to reduce taxable These include: As a dual-status taxpayer not filing a income to below zero for that period. 1. Tax withheld from wages earned in joint return, you are taxed on income Special rules apply for exemptions the United States and taxes withheld from all sources for the part of the year for the part of the year a dual-status at the source from various items of you were a resident alien. Generally, taxpayer is a nonresident alien if the income from U.S. sources other than you are taxed on income only from U.S. taxpayer is a resident of Canada, wages. This includes U.S. tax sources for the part of the year you Mexico, or South Korea; a U.S. withheld on dispositions of U.S. real were a nonresident alien. However, all national; or a student or business property interests. income effectively connected with the apprentice from India. conduct of a trade or business in the See Pub. 519 for more information. When filing Form 1040, show the United States is taxable. total tax withheld on line 61. Enter Tax credits. You cannot take the Income you received as a amounts from the attached statement earned income credit, the credit for the dual-status taxpayer from sources elderly or disabled, or any education (Form 1040NR, line 60a through 60d) outside the United States while a credit unless you elect to be taxed as a in the column to the right of line 61 resident alien is taxable even if you resident alien (see Election To Be and identify and include them in the became a nonresident alien after Taxed as a Resident Alien on page 5) amount on line 61. receiving it and before the close of the instead of a dual-status taxpayer. When filing Form 1040NR, show the tax year. Conversely, income you See chapter 6 of Pub. 519 for total tax withheld on lines 60a received from sources outside the information on other credits. through 60d. Enter the amount from United States while a nonresident alien the attached statement (Form 1040, is not taxable in most cases even if you How To Figure Tax for became a resident alien after receiving line 61) in the column to the right of Dual-Status Year it and before the close of the tax year. line 60a, and identify and include it in Income from U.S. sources is taxable When you figure your U.S. tax for a the amount on line 60a. whether you received it while a dual-status year, you are subject to 2. Estimated tax paid with Form nonresident alien or a resident alien. different rules for the part of the year 1040-ES or Form 1040-ES (NR). you were a resident and the part of the year you were a nonresident. 3. Tax paid with Form 1040-C at the All income for the period of time of departure from the United residence and all income that is States. When filing Form 1040, Standard deduction. You cannot take effectively connected with a trade or include the tax paid with Form the standard deduction even for the business in the United States for the 1040-C with the total payments on part of the year you were a resident period of nonresidence, after allowable line 72. Identify the payment in the alien. deductions, is combined and taxed at area to the left of the entry. Head of household. You cannot use the same rates that apply to U.S. the Head of household Tax Table citizens and residents. For the period of column or Section D of the Tax residence, allowable deductions include How To Report Income Computation Worksheet. all deductions on Schedule A of Form Joint return. You cannot file a joint 1040, including medical expenses, real on Form 1040NR return unless you elect to be taxed as a property taxes, and certain interest. resident alien (see Election To Be Community Income Taxed as a Resident Alien on page 5) See the Instructions for Schedule A (Form 1040). If either you or your spouse (or both instead of a dual-status taxpayer. you and your spouse) were nonresident Tax rates. If you were married and a Income that is not effectively aliens at any time during the tax year nonresident of the United States for all connected with a trade or business in and you had community income during or part of the tax year and you do not the United States for the period of the year, treat the community income make the election to be taxed as a nonresidence is subject to the flat 30% according to the applicable community resident alien as discussed on page 5, rate or lower treaty rate. No deductions property laws except as follows. you must use the Married filing are allowed against this income. Earned income of a spouse, other separately column in the Tax Table or If you were a resident alien on the than trade or business income or Section C of the Tax Computation last day of the tax year and you are partnership distributive share income. Worksheet to figure your tax on income filing Form 1040, include the tax on the The spouse whose services produced effectively connected with a U.S. trade noneffectively connected income in the the income must report it on his or her or business. If you were married, you total on Form 1040, line 60. To the left separate return. Restrictions for Dual-Status Taxpayers -6- Instructions for Form 1040NR (2010)
7 Trade or business income, other than Income You Can Elect To Special Rules for Former partnership distributive share income. Treat as Effectively U.S. Citizens and Former Treat this income as received by the spouse carrying on the trade or Connected With a U.S. Trade U.S. Long-Term Residents business and report it on that spouse s or Business (Expatriates) return. You can elect to treat some items of The expatriation tax provisions apply to Partnership distributive share income income as effectively connected with a certain U.S. citizens who have lost their (or loss). Treat this income (or loss) as U.S. trade or business. The election citizenship and long-term residents who received by the spouse who is the applies to all income from real property have ended their residency. You are a partner and report it on that spouse s located in the United States and held former U.S. long-term resident if you return. for the production of income and to all were a lawful permanent resident of the Income derived from the separate income from any interest in such United States (green-card holder) in at property of one spouse that is not property. This includes: least 8 of the last 15 tax years ending earned income, trade or business Gains from the sale or exchange of with the year your residency ends. income, or partnership distributive such property or an interest therein, Different expatriation tax rules apply share income. The spouse with the Gains on the disposal of timber, coal, to individuals based on the date of separate property must report this or iron ore with a retained economic expatriation. The dates are: income on his or her separate return. interest, Before June 4, 2004; After June 3, 2004, and before June See Pub. 555, Community Property, Rents from real estate, or 17, 2008; and for more details. Rents and royalties from mines, oil or After June 16, gas wells, or other natural resources. For more information on the Kinds of Income expatriation tax provisions, see The election does not apply to You must divide your income for the tax Expatriation Tax in chapter 4 of Pub. dispositions of U.S. real property 519; the Instructions for Form 8854; year into the following three categories. interests, discussed earlier. and Notice (for expatriation 1. Income effectively connected with after June 16, 2008), I.R.B. a U.S. trade or business. This income To make the election, attach a 598, available at is taxed at the same rates that apply to statement to your return for the year of _IRB/ar10.html. U.S. citizens and residents. Report this the election. Include the following items income on page 1 of Form 1040NR. in your statement. Pub. 519 describes this income in 1. That you are making the election. greater detail. Line Instructions for 2. U.S. income not effectively 2. A complete list of all of your real property, or any interest in real Form 1040NR connected with a U.S. trade or property, located in the United States business. This income is taxed at 30% (including location). Give the legal unless a treaty between your country identification of U.S. timber, coal, or Name and Address and the United States has set a lower iron ore in which you have an interest. rate that applies to you. Report this Individuals. Enter your name, street income on Schedule NEC on page 4 of 3. The extent of your ownership in address, city or town, and country on Form 1040NR. Pub. 519 describes this the real property. the appropriate lines. Include an income in greater detail. 4. A description of any substantial apartment number after the street improvements to the property. address, if applicable. Check the box Note. Use line 57 to report the 4% tax 5. Your income from the property. for Individual. on U.S. source gross transportation 6. The dates you owned the Estates and trusts. Enter the name income. property. of the estate or trust and check the box 3. Income exempt from U.S. tax. If for Estate or Trust. You must include the income is exempt from tax by 7. Whether the election is under different information for estates and treaty, complete item L of Schedule OI section 871(d) or a tax treaty. trusts that are engaged in a trade or on page 5 of Form 1040NR and line Details of any previous elections business in the United States. on page 1. and revocations of the real property Not engaged in a trade or election. business. Attach a statement to Form 1040NR with your name, title, address, Dispositions of U.S. Real and the names and addresses of any Property Interests Foreign Income Taxed by the U.S. grantors and beneficiaries. Gain or loss on the disposition of a U.S. United States Engaged in a trade or business in real property interest (see Pub. 519 for You may be required to report some the United States. Attach a statement definition) is taxed as if the gain or loss income from foreign sources on your to Form 1040NR with your name, title, were effectively connected with the U.S. return if it is effectively connected address, and the names and addresses conduct of a U.S. trade or business. with a U.S. trade or business. For this of all beneficiaries. foreign income to be treated as P.O. box. Enter your box number only Report gains and losses on the effectively connected with a U.S. trade if your post office does not deliver mail disposition of U.S. real property or business, you must have an office or to your home. interests on Schedule D (Form 1040) other fixed place of business in the Foreign address. Enter the and Form 1040NR, line 14. Also, net United States to which the income can information in the following order: City, gains may be subject to the alternative be attributed. For more information, province or state, and country. Follow minimum tax. See the instructions for including a list of the types of foreign the country s practice for entering the line 43 on page 22. See Pub. 519, source income that must be treated as postal code. In some countries the chapter 4, Real Property Gain or Loss, effectively connected with a U.S. trade postal code may come before the city for more information. or business, see Pub or town name. Instructions for Form 1040NR (2010) -7-
8 Country. Enter the full name of the Note. An ITIN is for tax use only. It child and who do not live with their country in uppercase letters in English. does not entitle you to social security spouse can file as single. If you meet Address change. If you plan to move benefits or change your employment or all five of the following tests and you after filing your return, use Form 8822, immigration status under U.S. law. are a married resident of Canada or Change of Address, to notify the IRS of Mexico, or you are a married U.S. If, after reading these your new address. national, check the box on line 1. If you instructions and our free meet the tests below and you are a Name change. If you changed your publications, you are not sure married resident of South Korea, check name because of marriage, divorce, how to complete the applications or the box on line 2. etc., and your identifying number is a have additional questions, see Calling the IRS on page You file a separate return from social security number, be sure to your spouse. report the change to the Social Security Employer identification number 2. You paid over half the cost of Administration (SSA) before filing your (EIN). If you are filing Form 1040NR keeping up your home for return. This prevents delays in for an estate or trust, enter the EIN of 3. You lived apart from your spouse processing your return and issuing the estate or trust. If the entity does not for the last 6 months of refunds. It also safeguards your future have an EIN, you must apply for one by Temporary absences for special social security benefits. See Social filing Form SS-4, Application for circumstances, such as for business, security number (SSN) below for how to Employer Identification Number. For medical care, school, or military contact the SSA. details on how to get an EIN, see Form service, count as time lived in the SS-4 and its instructions. Form SS-4 is home. available at IRS.gov. Click on Forms 4. Your home was the main home of Identifying Number and Publications. your child, stepchild, or foster child for An incorrect or missing identifying more than half of Temporary number can increase your tax, reduce absences by you or the child for special your refund, or delay your refund. Filing Status circumstances, such as school, The amount of your tax depends on vacation, business, or medical care, Social security number (SSN). If you your filing status. Before you decide count as time the child lived in the are an individual, in most cases you are which box to check, read the following home. If the child was born or died in required to enter your SSN. If you do explanations. 2010, you still can file as single as long not have an SSN but are eligible to get Were you single or married? as the home was that child s main one, you should apply for it. Get Form home for the part of the year he or she SS-5, Application for a Social Security Single. You can check the box on was alive in Card, online at line 1 or line 2 if any of the following 5. You can claim a dependency from your local Social Security was true on December 31, exemption for the child or the child s Administration (SSA) office, or by You were never married. other parent claims him or her as a calling the SSA at You were legally separated under a dependent under the rules for children decree of divorce or separate Fill in Form SS-5 and bring it to your of divorced or separated parents. See maintenance. But if, at the end of 2010, local SSA office in person, along with Form 8332, Release/Revocation of your divorce was not final, you are original documentation showing your Release of Claim to Exemption for considered married and cannot check age, identity, immigration status, and Child by Custodial Parent. the box on line 1 or line 2. authority to work in the United States. If You were widowed before January 1, you are an F-1 or M-1 student, you also 2010, and did not remarry before the Adopted child. An adopted child is must show your Form I-20. If you are a end of But if you have a always treated as your own child. An J-1 exchange visitor, you also must dependent child, you may be able to adopted child includes a child lawfully show your Form DS use the qualifying widow(er) filing placed with you for legal adoption. It usually takes about 2 weeks to get status. See the instructions for line 6 on an SSN once the SSA has all the page 9. Foster child. A foster child is any evidence and information it needs. You meet the tests described under child placed with you by an authorized Check that your SSN on your Forms Married persons who live apart below. placement agency or by judgment, W-2 and 1099 agrees with your social decree, or other order of any court of Married. If you were married on security card. If not, see page 40 for competent jurisdiction. December 31, 2010, consider yourself more details. married for the whole year. Line 3 or 4 Married resident. If you IRS individual taxpayer identification If your spouse died in 2010, consider checked box 3 or 4, you must enter number (ITIN). If you do not have and yourself married to that spouse for the your spouse s first and last name and are not eligible to get an SSN, you must whole year, unless you remarried identifying number in the space enter your ITIN whenever an SSN is before the end of provided. requested on your tax return. If you are For federal tax purposes, a marriage required to include another person s You cannot check box 3 or 4 if your means only a legal union between a SSN on your return and that person spouse does not have an SSN or an man and a woman as husband and does not have and cannot get an SSN, ITIN. If your spouse is not eligible to wife. enter that person s ITIN. apply for an SSN, he or she must apply U.S. national. A U.S. national is an for an ITIN. For details on how to apply for an individual who, although not a U.S. ITIN, see Form W-7, Application for IRS citizen, owes his or her allegiance to If your spouse is a nonresident Individual Taxpayer Identification the United States. U.S. nationals alien, is not being claimed as an Number, and its instructions. Get Form! include American Samoans and exemption, and does not have W-7 online at IRS.gov. Click on Northern Mariana Islanders who chose an identifying number (SSN or ITIN), Individuals, then Individual Taxpayer to become U.S. nationals instead of enter NRA in the space for Spouse s Identification Number (ITIN). U.S. citizens. identifying number. Do not leave the It takes 6 to 10 weeks to get an Married persons who live apart. space blank. If you have applied for an ITIN. Some married persons who have a SSN or ITIN, enter Applied for. -8- Instructions for Form 1040NR (2010)
9 Line 6 Qualifying widow(er) with exemptions for your children and other For details on how your dependent child. You can check the dependents on the same terms as U.S. TIP dependent can get an box on line 6 if all of the following apply. citizens. If you were a resident of South identifying number, see 1. You were a resident of Canada, Korea, you can claim an exemption for Identifying Number on page 8. Mexico, or South Korea or were a U.S. any of your children who lived with you If your dependent child was born national. in the United States at some time and died in 2010 and you do not have 2. Your spouse died in 2008 or during an identifying number for the child, 2009 and you did not remarry before You can take an exemption for each enter Died in column (2) and attach a the end of of your dependents. If you have more copy of the child s birth certificate, 3. You have a child or stepchild than four dependents, include a death certificate, or hospital records. whom you claim as a dependent. This statement showing the required The document must show the child was does not include a foster child. information. born alive. 4. This child lived in your home for all of Temporary absences by For additional information on the Adoption taxpayer identification you or the child for special! definition of a qualifying child numbers (ATINs). If you have a circumstances, such as school, and whether you can claim an dependent who was placed with you for vacation, business, or medical care, exemption for a dependent, see legal adoption and you do not know his count as time lived in the home. Exemptions for Dependents in Pub. or her SSN, you must get an ATIN for 501. the dependent from the IRS. See Form A child is considered to have lived W-7A, Application for Taxpayer with you for all of 2010 if the child was Children who did not live with you Identification Number for Pending U.S. born or died in 2010 and your home due to divorce or separation. If you Adoptions, for details. If the dependent was the child s main home for the entire checked filing status box 1 or 3 and are is not a U.S. citizen or resident alien, time he or she was alive. claiming as a dependent a child who apply for an ITIN instead, using Form 5. You paid over half the cost of did not live with you under the rules for W-7. See page 8. keeping up your home. To find out what children of divorced or separated is included in the cost of keeping up a parents, include with your return Form Line 7c, column (4). Check the home, see Pub or a substantially similar box in this column if your dependent is 6. You were a resident alien or U.S. statement signed by the custodial a qualifying child for the child tax credit citizen the year your spouse died. This parent and whose only purpose is to (defined below). If you have at least refers to your actual status, not the release a claim to an exemption for a one qualifying child, you may be able to election that some nonresident aliens child. The form or statement must take the child tax credit on line 48 and can make to be taxed as U.S. release the custodial parent s claim to the additional child tax credit on line 62. residents. the child without any conditions. For Qualifying child for child tax 7. You could have filed a joint return example, the release must not depend credit. A qualifying child for purposes with your spouse the year he or she on the noncustodial parent paying of the child tax credit is a child who died, even if you did not actually do so. support. meets the following requirements. If the divorce decree or separation The child was under age 17 at the Adopted child. An adopted child is agreement went into effect after 1984 end of 2010 and younger than you or always treated as your own child. An and before 2009, the noncustodial any age and permanently or totally adopted child includes a child lawfully parent may be able to include certain disabled. placed with you for legal adoption. pages from the decree or agreement The child is your son, daughter, instead of Form See Form 8332 stepchild, foster child, brother, sister, for details. stepbrother, stepsister, half brother, half Exemptions sister, or a descendant of any of them Exemptions for estates and trusts are You must include the required (for example, your grandchild, niece, or described in the instructions for line 40! information even if you filed it nephew). on page 22. with your return in an earlier The child is not filing a joint return for Note. Residents of India who were year (or is filing a joint return for 2010 students or business apprentices may only as a claim for refund of withheld Release of exemption revoked. A be able to claim exemptions for their income tax or estimated tax paid). custodial parent who has revoked his or spouse and dependents. The child is a U.S. citizen, a U.S. her previous release of a claim to national, or a U.S. resident alien. See Pub. 519 for more details. exemption for a child must include a The child did not provide over half of copy of the revocation with his or her Line 7b Spouse. If you checked his or her own support for return. For details, see Form filing status box 3 or 4, you can take an The child lived with you for more than exemption for your spouse only if your Other dependent children. half of Temporary absences by spouse had no gross income for U.S. Include the total number of children you or the child for special tax purposes and cannot be claimed as who did not live with you for reasons circumstances, such as school, a dependent on another U.S. other than divorce or separation on the vacation, business, or medical care, taxpayer s return. (You can do this line labeled Dependents on 7c not count as time the child lived with you. A even if your spouse died in 2010.) If entered above. child is considered to have lived with you checked filing status box 4, do not Line 7c, column (2). You must you for all of 2010 if the child was born check line 7b if your spouse did not live enter each dependent s identifying or died in 2010 and your home was the with you in the United States at any number (SSN, ITIN, or adoption child s home for the entire time he or time during taxpayer identification number (ATIN)). she was alive. Line 7c Dependents. Only U.S. Otherwise, at the time we process your You can and do claim an exemption nationals and residents of Canada, return we may disallow the exemption for the child. Mexico, and South Korea can claim claimed for the dependent and reduce In addition, if a parent can claim the exemptions for their dependents. If you or disallow any other tax benefits (such child as a qualifying child, but no parent were a U.S. national or a resident of as the child tax credit) based on the does so claim the child, you cannot Canada or Mexico, you can claim dependent. claim the child as a qualifying child Instructions for Form 1040NR (2010) -9-
10 unless your AGI is higher than the compensation between U.S. and can prove that you received less. highest AGI of any parent of the child. non-u.s. sources. Allocated tips should be shown in box 8 An adopted child is always treated Compensation (other than certain of your Form(s) W-2. They are not as your own child. An adopted child fringe benefits) generally is sourced on included as income in box 1. See Pub. includes a child lawfully placed with you a time basis. To figure your U.S. source 531, Reporting Tip Income, for more for legal adoption. income, divide the number of days you details. performed labor or personal services You may owe social security within the United States by the total Rounding Off to Whole and Medicare tax on unreported number of days you performed labor or! or allocated tips. See the personal services within and without the Dollars instructions for line 55 on page 26. United States. Multiply the result by You can round off cents to whole your total compensation (other than dollars on your return and schedules. If certain fringe benefits). Dependent care benefits, which you do round to whole dollars, you should be shown in box 10 of your must round all amounts. To round, drop Fringe benefits. Certain fringe Form(s) W-2. But first complete Form amounts under 50 cents and increase benefits (such as housing and 2441 to see if you can exclude part or amounts from 50 to 99 cents to the next educational expenses) are sourced on all of the benefits. dollar. For example, $1.39 becomes $1 a geographic basis. The source of the Employer-provided adoption benefits, and $2.50 becomes $3. fringe benefit compensation generally is which should be shown in box 12 of your principal place of work. The If you have to add two or more your Form(s) W-2 with code T. But see amount of the fringe benefit amounts to figure the amount to enter the Instructions for Form 8839 to find compensation must be reasonable and on a line, include cents when adding out if you can exclude part or all of the you must keep records that are the amounts and round off only the benefits. You also may be able to adequate to support the fringe benefit total. exclude amounts if you adopted a child compensation. with special needs and the adoption You may be able to use an became final in Income Effectively TIP alternative method to determine Excess salary deferrals. The amount the source of your deferred should be shown in box 12 of Connected With U.S. compensation and/or fringe benefits if your Form W-2, and the Retirement the alternative method more properly plan box in box 13 should be checked. Trade or Business determines the source of the If the total amount you deferred for Pub. 519 explains how income is compensation under all plans was more than classified and what income you should $16,500 (excluding catch-up report here. The instructions for this For 2010, if your total compensation contributions as explained below), section assume you have decided that (including fringe benefits) is $250,000 include the excess on line 8. This limit the income involved is effectively or more and you allocate your compensation using an alternative is (a) $11,500 if you only have SIMPLE connected with a U.S. trade or method, check the Yes boxes in item plans, or (b) $19,500 for section 403(b) business in which you were engaged. K of Schedule OI on page 5. Also plans if you qualify for the 15-year rule But your decision may not be easy. attach to Form 1040NR a statement in Pub Although designated Roth Interest, for example, may be effectively that contains the following information. contributions are subject to this limit, do connected with a U.S. trade or not include the excess attributable to business, it may not be, or it may be 1. The specific compensation or the such contributions on line 8. They tax-exempt. The tax status of income specific fringe benefit for which an already are included as income in box 1 also depends on its source. Under alternative method is used. of your Form W-2. some circumstances, items of income 2. For each such item, the from foreign sources are treated as alternative method used to allocate the A higher limit may apply to effectively connected with a U.S. trade source of the compensation. participants in section 457(b) deferred or business. Other items are reportable 3. For each such item, a compensation plans for the 3 years as effectively connected or not computation showing how the before retirement age. Contact your effectively connected with a U.S. trade alternative allocation was computed. plan administrator for more information. or business, depending on how you 4. A comparison of the dollar If you were age 50 or older at the elect to treat them. amount of the compensation sourced end of 2010, your employer may have Line 8 Wages, salaries, tips, etc. within and without the United States allowed an additional deferral (catch-up Enter the total of your effectively under both the alternative method and contributions) of up to $5,500 ($2,500 connected wages, salaries, tips, etc. the time or geographical method for for section 401(k)(11) and SIMPLE Only U.S. source income is included on determining the source. plans). This additional deferral amount line 8 as effectively connected wages. is not subject to the overall limit on You must keep documentation showing For most people, the amount to enter elective deferrals. why the alternative method more on this line should be shown in box 1 of properly determines the source of the their Form(s) W-2. You cannot deduct the amount compensation.! deferred. It is not included as Do not include on line 8 Also include on line 8. income in box 1 of your Form! amounts exempted under a tax Wages received as a household W-2. treaty. Instead, include these employee for which you did not receive Disability pensions shown on Form amounts on line 22 and complete item a Form W-2 because your employer 1042-S or Form 1099-R if you have not L of Schedule OI on page 5 of Form paid you less than $1,700 in reached the minimum retirement age 1040NR. Also, enter HSH and the amount not set by your employer. Disability Services performed partly within reported on a Form W-2 on the dotted pensions received after you reach and partly without the United States. line next to line 8. minimum retirement age and other If you performed services as an Tip income you did not report to your payments shown on Form 1042-S or employee both inside and outside the employer. Also include allocated tips Form 1099-R (other than payments United States, you must allocate your shown on your Form(s) W-2 unless you from an IRA*) are reported on lines 17a -10- Instructions for Form 1040NR (2010)
11 and 17b. Payments from an IRA are from a mutual fund or other regulated date. The ex-dividend date is the first reported on lines 16a and 16b. investment company. Do not include date following the declaration of a Corrective distributions from a interest earned on your IRA, health dividend on which the purchaser of a retirement plan shown on Form 1042-S savings account, Archer or Medicare stock is not entitled to receive the next or Form 1099-R of excess salary Advantage MSA, or Coverdell dividend payment. When counting the deferrals and excess contributions (plus education savings account. Also, do not number of days you held the stock, earnings). But do not include include interest from a U.S. bank, include the day you disposed of the distributions from an IRA* on line 8. savings and loan association, credit stock but not the day you acquired it. Instead, report distributions from an IRA union, or similar institution (or from See the examples on this page and on lines 16a and 16b. certain deposits with U.S. insurance page 12. Also, when counting the Wages from Form 8919, line 6. companies) that is exempt from tax number of days you held the stock, you *This includes a Roth, SEP, or under a tax treaty or under section cannot count certain days during which SIMPLE IRA. 871(i) because the interest is not your risk of loss was diminished. See effectively connected with a U.S. trade Pub. 550 for more details, Missing or incorrect Form W-2. or business. Your employer is required to provide or Dividends attributable to periods send Form W-2 to you no later than Line 10a Ordinary dividends. Each totaling more than 366 days that you January 31, If you do not receive payer should send you a Form received on any share of preferred it by early February, ask your employer 1099-DIV. Enter your total ordinary stock held for less than 91 days during for it. Even if you do not get a Form dividends from assets effectively the 181-day period that began 90 days W-2, you still must report your earnings connected with a U.S. trade or before the ex-dividend date. When on line 8. If you lose your Form W-2 or business on line 10a. This amount counting the number of days you held it is incorrect, ask your employer for a should be shown in box 1a of Form(s) the stock, you cannot count certain new one DIV. days during which your risk of loss was Capital gain distributions. If you diminished. See Pub. 550 for more Line 9a Taxable interest. Report received any capital gain distributions, details. Preferred dividends attributable on line 9a all of your taxable interest see the instructions for line 14 on page to periods totaling less than 367 days income from assets effectively 12. are subject to the 61-day holding period connected with a U.S. trade or Nondividend distributions. Some rule above. business. distributions are a return of your cost Dividends on any share of stock to If you received interest not (or other basis). They will not be taxed the extent that you are under an effectively connected with a U.S. trade until you recover your cost (or other obligation (including a short sale) to or business, report it on Schedule NEC, basis). You must reduce your cost (or make related payments with respect to page 4, unless it is tax exempt under a other basis) by these distributions. After positions in substantially similar or treaty and the withholding agent did not you get back all of your cost (or other related property, and withhold tax on the payment. If the basis), you must report these Payments in lieu of dividends, but interest is tax exempt under a treaty, distributions as capital gains on only if you know or have reason to include the tax exempt amount on line Schedule D (Form 1040). know that the payments are not 22 and complete item L of Schedule OI See Pub. 550 for more details. qualified dividends. on page 5. If the interest is tax exempt under a Dividends on insurance policies Example 1. You bought 5,000 treaty but the withholding agent TIP are a partial return of the shares of XYZ Corp. common stock on withheld tax, report the interest on premiums you paid. Do not July 8, XYZ Corp. paid a cash Schedule NEC, line 2. Use column d report them as dividends. Include them dividend of 10 cents per share. The and show 0% for the appropriate rate of in income on line 21 only if they exceed ex-dividend date was July 16, tax. the total of all net premiums you paid Your Form 1099-DIV from XYZ Corp. for the contract. shows $500 in box 1a (ordinary See Pub. 901 for a quick reference Line 10b Qualified dividends. dividends) and in box 1b (qualified guide to the provisions of U.S. tax Enter your total qualified dividends on dividends). However, you sold the treaties. line 10b. Qualified dividends also are 5,000 shares on August 11, You Interest from a U.S. bank, savings included in the ordinary dividend total held your shares of XYZ Corp. for only and loan association, credit union, or required to be shown on line 10a. 34 days of the 121-day period (from similar institution, and from certain Qualified dividends are eligible for a July 9, 2010, through August 11, 2010). deposits with U.S. insurance lower tax rate than other ordinary The 121-day period began on May 17, companies, is tax exempt to a income. Generally, these dividends are 2010 (60 days before the ex-dividend nonresident alien if it is not effectively shown in box 1b of your Form(s) date), and ended on September 14, connected with a U.S. trade or 1099-DIV You have no qualified dividends business. from XYZ Corp. because you held the See Pub. 550 for the definition of Interest credited in 2010 on deposits XYZ stock for less than 61 days. qualified dividends if you received that you could not withdraw because of dividends not reported on Form Example 2. Assume the same facts the bankruptcy or insolvency of the 1099-DIV. as in Example 1 except that you bought financial institution may not have to be Exception. Some dividends may the stock on July 15, 2010 (the day included in your 2010 income. be reported as qualified dividends in before the ex-dividend date), and you See Pub. 550 for more details. box 1b of Form 1099-DIV but are not sold the stock on September 16, Line 9b Tax-exempt interest. qualified dividends. These dividends You held the stock for 63 days (from Certain types of interest income from include: July 16, 2010, through September 16, investments in state and municipal Dividends you received as a 2010). The $500 of qualified dividends bonds and similar instruments are not nominee. See chapter 1 in Pub. 550, shown in box 1b of Form 1099-DIV are taxed by the United States. If you Dividends you received on any share all qualified dividends because you held received such tax-exempt interest of stock that you held for less than 61 the stock for 61 days of the 121-day income, report the amount on line 9b. days during the 121-day period that period (from July 16, 2010, through Include any exempt-interest dividends began 60 days before the ex-dividend September 14, 2010). Instructions for Form 1040NR (2010) -11-
12 Example 3. You bought 10,000 If the grant was reported on Form(s) the tax treaty, all of your scholarship shares of ABC Mutual Fund common 1042-S, you generally must include the income is exempt from tax because stock on July 8, ABC Mutual amount shown in box 2 of Form(s) ABC University is a nonprofit Fund paid a cash dividend of 10 cents 1042-S on line 12. However, if any or educational organization. a share. The ex-dividend date was July all of that amount is exempt by treaty, Note. Many tax treaties do not permit 16, The ABC Mutual Fund do not include the treaty-exempt an exemption from tax on scholarship advises you that the portion of the amount on line 12. Instead, include the or fellowship grant income unless the dividend eligible to be treated as treaty-exempt amount on line 22 and income is from sources outside the qualified dividends equals 2 cents per complete item L of Schedule OI on United States. If you are a resident of a share. Your Form 1099-DIV from ABC page 5 of Form 1040NR. treaty country, you must know the Mutual Fund shows total ordinary Attach any Form(s) 1042-S you terms of the tax treaty between the dividends of $1,000 and qualified received from the college or institution. United States and the treaty country to dividends of $200. However, you sold If you did not receive a Form 1042-S, claim treaty benefits on Form 1040NR. the 10,000 shares on August 11, attach a statement from the college or See the instructions for item L, You have no qualified dividends from institution (on their letterhead) showing Schedule OI, beginning on page 37 for ABC Mutual Fund because you held the details of the grant. details. the ABC Mutual Fund stock for less than 61 days. For more information about When completing Form 1040NR: scholarships and fellowships in general, Enter $0 on line 12. The $9,000 Be sure you use the Qualified see Pub reported to you in box 2 of Form TIP Dividends and Capital Gain Tax Example 1. You are a citizen of a 1042-S is reported on line 22 (not line Worksheet or the Schedule D country that does not have an income 12). Tax Worksheet, whichever applies, to tax treaty in force with the United Enter $9,000 on line 22. figure your tax. See the instructions for States. You are a candidate for a Enter $0 on line 31. Because none of line 42 on page 22 for details. degree at ABC University (located in the $9,000 reported to you in box 2 of Line 11 Taxable refunds, credits, the United States). You are receiving a Form 1042-S is included in your or offsets of state and local income full scholarship from ABC University. income, you cannot exclude it on taxes. If you received a refund, credit, The total amounts you received from line 31. or offset of state or local income taxes ABC University during 2010 are as Include on line 60d any withholding in 2010, you may receive a Form follows: shown in box 9 of Form 1042-S G. If you chose to apply part or all Provide all the required information in of the refund to your 2010 estimated Tuition and fees $25,000 item L, Schedule OI, on page 5 of Form state or local income tax, the amount Books, supplies, 1040NR. applied is treated as received in and equipment 1,000 Line 13 Business income or (loss). Room and If you operated a business or practiced None of your refund is taxable if, board 9,000 your profession as a sole proprietor, TIP in the year you paid the tax, you $35,000 report your effectively connected did not itemize deductions on income and expenses on Schedule C Schedule A. If you were a student or The Form 1042-S you received from or Schedule C-EZ (Form 1040). business apprentice from India in 2009 ABC University for 2010 shows $9,000 and you claimed the standard in box 2 and $1,260 (14% of $9,000) in Include any income you received as deduction on your 2009 tax return, box 9. a dealer in stocks, securities, and none of your refund is taxable. See commodities through your U.S. office. If Note. Box 2 shows only $9,000 Students and business apprentices you dealt in these items through an because withholding agents (such as from India in chapter 5 of Pub If independent agent, such as a U.S. ABC University) are not required to none of your refund is taxable, leave broker, custodian, or commissioned report section 117 amounts (tuition, line 11 blank. agent, your income may not be fees, books, supplies, and equipment) considered effectively connected with a For details on how to figure the on Form 1042-S. U.S. business. amount you must report as income, see When completing Form 1040NR: Recoveries in Pub Note. For more information on tax Enter on line 12 the $9,000 shown in provisions that apply to a small Line 12 Scholarship and fellowship box 2 of Form 1042-S. business, see Pub. 334, Tax Guide for grants. If you received a scholarship Enter $0 on line 31. Because Small Business (For Individuals Who or fellowship, part or all of it may be section 117 amounts (tuition, fees, Use Schedule C or C-EZ). taxable. books, supplies, and equipment) were not included in box 2 of your Form Line 14 Capital gain or (loss). If If you were a degree candidate, the 1042-S (and are not included on line 12 you had effectively connected capital amounts you used for expenses other of Form 1040NR), you cannot exclude gains or losses, including any than tuition and course-related any of the section 117 amounts on line effectively connected capital gain expenses (fees, books, supplies, and 31. distributions or a capital loss carryover equipment) are generally taxable. For Include on line 60d the $1,260 shown from 2009, you must complete and example, amounts used for room, in box 9 of Form 1042-S. attach Schedule D (Form 1040). But board, and travel are generally taxable. Example 2. The facts are the same see the Exception on page 13. Enter If you were not a degree candidate, as in Example 1 except that you are a the effectively connected gain or (loss) the full amount of the scholarship or citizen of a country that has an income from Schedule D (Form 1040) on line fellowship is generally taxable. Also, tax treaty in force with the United 14. amounts received in the form of a States that includes a provision that Gains and losses from disposing of scholarship or fellowship that are exempts scholarship income and you U.S. real property interests are reported payment for teaching, research, or were a resident of that country on Schedule D (Form 1040) and other services are generally taxable as immediately before leaving for the included on line 14 of Form 1040NR. wages even if the services were United States to attend ABC University. See Dispositions of U.S. Real Property required to get the grant. Also, assume that, under the terms of Interests on page Instructions for Form 1040NR (2010)
13 Exception. You do not have to file the part not rolled over on line 16b otherwise be included in your income. If Schedule D (Form 1040) if both of the unless Exception 2 applies to the part your IRA includes nondeductible following apply. not rolled over. Generally, a qualified contributions, the distribution is first The only amounts you have to report rollover must be made within 60 days considered to be paid out of otherwise on Schedule D (Form 1040) are after the day you received the taxable income. See Pub. 590 for effectively connected capital gain distribution. For more details on details. distributions from box 2a of Form(s) rollovers, see Pub. 590, Individual 1099-DIV or substitute statements. Retirement Arrangements (IRAs). You cannot claim a charitable None of the Form(s) 1099-DIV or! contribution deduction for any If you rolled over the distribution into substitute statements have an amount QCD not included in your a qualified plan other than an IRA or in box 2b (unrecaptured section 1250 income. you made the rollover in 2011, include gain), box 2c (section 1202 gain), or a statement explaining what you did. If a QCD is made in January box 2d (collectibles (28%) gain). TIP Exception 2. If any of the following 2011, you can elect to treat it as If both of the above apply, enter your apply, enter the total distribution on line made in See Pub total effectively connected capital gain 16a and see Form 8606 and its Exception 4. If the distribution is a distributions (from box 2a of Form(s) instructions to figure the amount to qualified health savings account (HSA) 1099-DIV) on line 14 and check the box enter on line 16b. funding distribution (HFD), enter the on that line. If you received capital gain 1. You received a distribution from total distribution on line 16a. If the total distributions as a nominee (that is, they an IRA (other than a Roth IRA) and you amount distributed is an HFD and you were paid to you but actually belong to made nondeductible contributions to elect to exclude it from income, enter someone else), report on line 14 only any of your traditional or SEP IRAs for -0- on line 16b. If only part of the the amount that belongs to you. Include 2010 or an earlier year. If you made distribution is an HFD and you elect to a statement showing the full amount nondeductible contributions to these exclude that part from income, enter you received and the amount you IRAs for 2010, also see Pub the part that is not an HFD on line 16b received as a nominee. 2. You received a distribution from a unless Exception 2 applies to that part. See chapter 1 of Pub. 550 for filing Roth IRA. But if either (a) or (b) below Enter HFD next to line 16b. requirements for Forms 1099-DIV and applies, enter -0- on line 16b; you do An HFD is a distribution made not have to see Form 8606 or its directly by the trustee of your IRA If you do not have to file instructions. (other than an ongoing SEP or SIMPLE TIP Schedule D (Form 1040), use a. Distribution code T is shown in IRA) to your HSA. If eligible, you the Qualified Dividends and box 7 of Form 1099-R and you made a generally can elect to exclude an HFD Capital Gain Tax Worksheet on page contribution (including a conversion) to from your income once in your lifetime. 21 to figure your tax. a Roth IRA for 2005 or an earlier year. You cannot exclude more than the limit b. Distribution code Q is shown in on HSA contributions or more than the Line 15 Other gains or (losses). If box 7 of Form 1099-R. amount that otherwise would be you sold or exchanged assets used in a 3. You converted part or all of a included in your income. If your IRA U.S. trade or business, see the traditional, SEP, or SIMPLE IRA to a includes nondeductible contributions, Instructions for Form Roth IRA in the HFD is first considered to be paid Lines 16a and 16b IRA 4. You had a 2009 or 2010 IRA out of otherwise taxable income. distributions. You should receive a contribution returned to you, with the See Pub. 969 for more details. Form 1099-R showing the total amount related earnings or less any loss, by the of any distribution from your individual due date (including extensions) of your The amount of an HFD reduces retirement arrangement (IRA) before tax return for that year.! the amount you can contribute income tax or other deductions were 5. You made excess contributions to your HSA for the year. If you withheld. This amount should be shown to your IRA for an earlier year and had fail to maintain eligibility for an HSA for in box 1 of Form 1099-R. Unless them returned to you in the 12 months following the month of otherwise noted in the line 16a and 16b 6. You recharacterized part or all of the HFD, you may have to report the instructions, an IRA includes a a contribution to a Roth IRA as a HFD as income and pay an additional traditional IRA, Roth IRA, simplified traditional IRA contribution, or vice tax. See Form 8889, Part III. employee pension (SEP) IRA, and a versa. See Pub. 590 for details. savings incentive match plan for employees (SIMPLE) IRA. Except as More than one exception applies. Exception 3. If the distribution is a provided in the following exceptions, If more than one exception applies, qualified charitable distribution (QCD), leave line 16a blank and enter the total include a statement showing the enter the total distribution on line 16a. If distribution (from Form 1099-R, box 1) amount of each exception, instead of the total amount distributed is a QCD, on line 16b. making an entry next to line 16b. For enter -0- on line 16b. If only part of the example: Line 16b $1,000 Rollover Exception 1. Enter the total distribution is a QCD, enter the part that and $500 HFD. distribution on line 16a if you rolled over is not a QCD on line 16b unless part or all of the distribution from one: Exception 2 applies to that part. Enter More than one distribution. If you IRA to another IRA of the same type QCD next to line 16b. received more than one distribution, (for example, from one traditional IRA figure the taxable amount of each A QCD is a distribution made directly to another traditional IRA), distribution and enter the total of the by the trustee of your IRA (other than SEP or SIMPLE IRA to a traditional taxable amounts on line 16b. Enter the an ongoing SEP or SIMPLE IRA) to an IRA, or total amount of those distributions on organization eligible to receive IRA to a qualified plan other than an line 16a. tax-deductible contributions (with IRA. certain exceptions). You must have You may have to pay an Also, enter Rollover next to line been at least age 70 1 /2 when the! additional tax if: (a) you received 16b. If the total distribution was rolled distribution was made. Your total QCDs an early distribution from your over in a qualified rollover, enter -0- on for the year cannot be more than IRA and the total was not rolled over, or line 16b. If the total distribution was not $100,000. The amount of the QCD is (b) you were born before July 1, 1939, rolled over in a qualified rollover, enter limited to the amount that would and received less than the minimum Instructions for Form 1040NR (2010) -13-
14 required distribution from your were withheld. This amount should be distribution that is not effectively traditional, SEP, and SIMPLE IRAs. shown in box 1 of Form 1099-R or in connected income is subject to 30% See the instructions for line 56 on page box 2 of Form 1042-S. Pension and withholding (unless reduced or 26 for details. annuity payments include distributions eliminated by treaty). Report this Lines 17a and 17b Pensions and from 401(k), 403(b), and governmental income on Schedule NEC, line 7. annuities. Use line 17a to report 457(b) plans. For details on rollovers Do not include the following certain pension distributions. Use line and lump-sum distributions, see page payments on lines 17a and 17b. 17b to report the taxable portion of 15. Instead, report them on line 8. those pension distributions. Report the part of any distribution Disability pensions received before You should receive a Form 1042-S that is effectively connected with the you reach the minimum retirement age or 1099-R showing the total amount of conduct of a trade or business in the set by your employer. your pension and annuity payments United States on lines 17a and 17b. In Corrective distributions (including any before income tax or other deductions general, the gross amount of any earnings) of excess salary deferrals or Simplified Method Worksheet Lines 17a and 17b Keep for Your Records Before you begin: If you are the beneficiary of a deceased employee or former employee who died before August 21, 1996, include any death benefit exclusion that you are entitled to (up to $5,000) in the amount entered on line 2 below. Note. If you had more than one partially taxable pension or annuity, figure the taxable part of each separately. Enter the total of the taxable parts on Form 1040NR, line 17b. Enter the total pension or annuity payments received in 2010 on Form 1040NR, line 17a. 1. Enter the total pension or annuity payments received in Also, enter this amount on Form 1040NR, line 17a Enter the appropriate number from Table 1 below. But if your annuity starting date was after 1997 and the payments are for your life and that of your beneficiary, enter the appropriate number from Table 2 below Divide line 2 by the number on line Multiply line 4 by the number of months for which this year s payments were made. If your annuity starting date was before 1987, skip lines 6 and 7 and enter this amount on line 8. Otherwise, go to line Enter the amount, if any, recovered tax free in years after If you completed this worksheet last year, enter the amount from line 10 of last year s worksheet Subtract line 6 from line Enter the smaller of line 5 or line Taxable amount. Subtract line 8 from line 1. Enter the result, but not less than zero. Also, enter this amount on Form 1040NR, line 17b. If your Form 1042-S or Form 1099-R shows a larger amount, use the amount on this line instead of the amount from Form 1042-S or Form 1099-R Was your annuity starting date before 1987? Yes. No. STOP Leave line 10 blank. Add lines 6 and 8. This is the amount you have recovered tax free through You will need this number when you fill out this worksheet next year IF the age at annuity starting date (see page 15) was Table 1 for Line 3 Above AND your annuity starting date was before November 19, 1996, after November 18, 1996, enter on line 3... enter on line or under or older Table 2 for Line 3 Above IF the combined ages at annuity starting date (see page 15) were... THEN enter on line or under or older Instructions for Form 1040NR (2010)
15 If you received a Form 1042-S the General Rule explained in Pub. 939 Rollovers. Generally, a qualified TIP or 1099-R that shows federal to figure the taxable part to enter on rollover is a tax-free distribution of cash income tax withheld, attach it to line 17b. But if your annuity starting or other assets from one retirement Form 1040NR. date (defined below) was after July 1, plan that is contributed to another plan Effectively connected pension 1986, see Simplified method below to within 60 days of receiving the distributions. If you performed find out if you must use that method to distribution. However, a qualified services in the United States while you figure the taxable part. rollover to a Roth IRA or a designated were a nonresident alien, your income Roth account generally is not a tax-free You can ask the IRS to figure the generally is effectively connected with a distribution. Use lines 17a and 17b to taxable part for you for a $1,000 fee. U.S. trade or business. (See section report a qualified rollover, including a For details, see Pub for details and exceptions.) If you direct rollover, from one qualified worked in the United States after If your Form 1099-R shows a taxable employer s plan to another or to an IRA December 31, 1986, the part of each amount, you can report that amount on or SEP. pension distribution that is attributable line 17b. But you may be able to report Enter on line 17a the distribution to the services you performed after a lower taxable amount by using the from box 1 of Form 1099-R or box 2 of 1986 is income that is effectively General Rule or the Simplified Method. Form 1042-S. From this amount, connected with a U.S. trade or If you received Form 1042-S, you must subtract any contributions (usually business. figure the taxable part by using the shown in box 5 of Form 1099-R or Example. You worked in the United General Rule or the Simplified Method. figured by you if you received Form States from January 1, 1980, through Simplified method. You must use 1042-S) that were taxable to you when December 31, 1989 (10 years). You the Simplified Method if (a) your annuity made. From that result, subtract the now receive monthly pension payments starting date (defined below) was after amount of the qualified rollover. Enter from your former U.S. employer s July 1, 1986, and you used this method the remaining amount on line 17b. If the pension plan. 70% of each payment is last year to figure the taxable part, or remaining amount is zero and you have attributable to services you performed (b) your annuity starting date was after no other distribution to report on line during 1980 through 1986 (7 years) and November 18, 1996, and both of the 17b, enter zero on line 17b. Also, enter 30% of each payment is attributable to following apply. Rollover next to line 17b. services you performed during 1987 The payments are from a qualified through 1989 (3 years). Include 30% of employee plan, a qualified employee See Pub. 575 for more details on each pension payment in the total annuity, or a tax-sheltered annuity. rollovers, including special rules that amount that you report on line 17a. On your annuity starting date, either apply to rollovers from designated Roth Include 70% of each payment in the you were under age 75 or the number accounts, partial rollovers of property, total amount that you report in the of years of guaranteed payments was and distributions under qualified appropriate column on Schedule NEC, fewer than five. See Pub. 575 for the domestic relations orders. line 7. definition of guaranteed payments. Rollovers to a Roth IRA or a If you must use the Simplified Method, In most cases, the effectively designated Roth account ( other complete the worksheet on page 14 to connected pension distribution will be than from a designated Roth figure the taxable part of your pension fully taxable in the United States, so account). Enter on line 17a the or annuity. you must enter it on line 17b. However, distribution from box 1 of Form 1099-R in some situations, you can report a See Pub. 575 for more details on or box 2 of Form 1042-S. See Form lower amount on line 17b. The most the Simplified Method and its instructions to figure the common situations are where: Annuity starting date. Your amount to enter on line 17b. All or a portion of your pension annuity starting date is the later of the Lump-sum distributions. If you payment is exempt from U.S. tax, first day of the first period for which you received a lump-sum distribution from a A portion of your pension payment is received a payment or the date the profit-sharing or retirement plan, your attributable to after-tax contributions to plan s obligations became fixed. Form 1099-R should have the Total the pension plan, or Age (or combined ages) at annuity distribution box in box 2b checked. The payment is rolled over to another starting date. If you are the retiree, You need to figure this on your own if retirement plan. use your age on the annuity starting you received Form 1042-S. You may See chapter 3 of Pub. 519; Pub. date. If you are the survivor of a retiree, owe an additional tax if you received an 575, Pension and Annuity Income; or use the retiree s age on his or her early distribution from a qualified Pub. 939, General Rule for Pensions annuity starting date. But if your annuity retirement plan and the total amount and Annuities, for more information. starting date was after 1997 and the was not rolled over in a qualified Fully taxable pensions and payments are for your life and that of rollover. For details, see the instructions annuities. Your payments are fully your beneficiary, use your combined for line 56 on page 26. taxable if (a) you did not contribute to ages on the annuity starting date. Enter the total distribution on line the cost (defined on this page) of your If you are the beneficiary of an 17a and the taxable part on line 17b. pension or annuity, or (b) you got your employee who died, see Pub If For details, see Pub entire cost back tax free before If there is more than one beneficiary, see your pension or annuity is fully taxable, Pub. 575 to figure each beneficiary s You may be able to pay less tax enter the total pension or annuity taxable amount. TIP on the distribution if you were payments on line 17b; do not make an Cost. Your cost is generally your born before January 2, 1936, or entry on line 17a. net investment in the plan as of the you are the beneficiary of a deceased If you received a Form RRB-1099-R, annuity starting date. It does not employee who was born before see Pub. 575 to find out how to report include pre-tax contributions. Your net January 2, For details, see Form your benefits. investment should be shown in box 9b Instructions for Form 1040NR (2010) -15-
16 Line 18 Rental real estate, return or other schedules. List the type or Form 1042-S. However, part or all of royalties, partnerships, trusts, etc. and amount of income. If necessary, your income from the cancellation of Report income or loss from rental real include a statement showing the debt may be nontaxable. See Pub. estate, royalties, partnerships, estates, required information. For more details, 4681 or go to IRS.gov and enter trusts, and residual interests in real see Miscellaneous Income in Pub canceled debt or foreclosure in the estate mortgage investment conduits Examples of income to report on line 21 search box. (REMICs) on line 18. Use Schedule E include the following. Income that is not effectively (Form 1040) to figure the amount to Taxable distributions from a connected. Report other income on enter on line 18 and attach Schedule E Coverdell education savings account Schedule NEC if it is not effectively (Form 1040) to your return. For more (ESA) or a qualified tuition program connected with a U.S. trade or detailed instructions for completing (QTP). Distributions from these business. Schedule E, see the Instructions for accounts may be taxable if (a) they are Schedule E (Form 1040). Line 22 Treaty-exempt income. more than the qualified higher Report on line 22 the total of all your If you are electing to treat education expenses of the designated income that is exempt from tax by an TIP income from real property beneficiary in 2010, and (b) they were income tax treaty, including both located in the United States as not included in a qualified rollover. See effectively connected income and not effectively connected with a U.S. trade Pub effectively connected income. Do not or business, see Income You Can Elect Nontaxable distributions from these include this exempt income on line 23. To Treat as Effectively Connected With accounts, including rollovers, do not You must complete item L of Schedule a U.S. Trade or Business on page 7 for have to be reported on Form 1040NR. OI on page 5 of Form 1040NR to report more details on the election statement income that is exempt from U.S. tax. You may have to pay an you must attach. If you do not make the additional tax if you received a election, report rental income on! taxable distribution from a Schedule NEC, line 6. See Income from Coverdell ESA or a QTP. See the Adjusted Gross Income Real Property in chapter 4 of Pub. 519 Instructions for Form for more details. Line 24 Educator expenses. If you Taxable distributions from a were an eligible educator in 2010, you Line 19 Farm income or (loss). health savings account (HSA) or an can deduct on line 24 up to $250 of Report farm income and expenses on Archer MSA. Distributions from these qualified expenses you paid in line 19. Use Schedule F (Form 1040) to accounts may be taxable if (a) they are You may be able to deduct expenses figure the amount to enter on line 19 more than the unreimbursed qualified that are more than the $250 limit on and attach Schedule F (Form 1040) to medical expenses of the account Schedule A (Form 1040NR), line 9. An your return. For more detailed beneficiary or account holder in 2010, eligible educator is a kindergarten instructions for completing Schedule F, and (b) they were not included in a through grade 12 teacher, instructor, see the Instructions for Schedule F qualified rollover. See Pub counselor, principal, or aide who (Form 1040). Also see Pub. 225, worked in a school for at least 900 Farmer s Tax Guide, for samples of You may have to pay an hours during a school year. filled-in forms and schedules and a list! additional tax if you received a of important dates that apply to taxable distribution from an HSA Qualified expenses include ordinary farmers. or an Archer MSA. See the Instructions and necessary expenses paid in for Form 8889 for HSAs or the connection with books, supplies, Line 20 Unemployment Instructions for Form 8853 for Archer equipment (including computer compensation. You should receive a MSAs. equipment, software, and services), Form 1099-G showing in box 1 the total Amounts deemed to be income and other materials used in the unemployment compensation paid to from an HSA because you did not classroom. An ordinary expense is one you in Report this amount on line remain an eligible individual during that is common and accepted in your 20. However, if you made contributions the testing period. See Form 8889, educational field. A necessary expense to a governmental unemployment Part III. is one that is helpful and appropriate for compensation program and you are not your profession as an educator. An itemizing deductions, reduce the Alternative trade adjustment expense does not have to be required amount you report on line 20 by those assistance (ATAA) or reemployment to be considered necessary. contributions. trade adjustment assistance (RTAA) payments. These payments should Qualified expenses do not include If you received an overpayment of be shown in box 5 of Form 1099-G. expenses for home schooling or for unemployment compensation in 2010 nonathletic supplies for courses in and you repaid any of it in 2010, Recapture of a charitable health or physical education. subtract the amount you repaid from contribution deduction relating to the total amount you received. Enter the contribution of a fractional You must reduce your qualified the result on line 20. Also, enter interest in tangible personal expenses by the following amounts. Repaid and the amount you repaid on property. See Fractional Interest in Excludable U.S. series EE and I the dotted line next to line 20. If, in Tangible Personal Property in Pub. savings bond interest from Form , you repaid unemployment 526, Charitable Contributions. Interest Nontaxable qualified tuition program compensation that you included in and an additional 10% tax apply to the earnings or distributions. gross income in an earlier year, you amount of the recapture. See the Any nontaxable distribution of can deduct the amount repaid on instructions for line 59 on page 27. Coverdell education savings account Schedule A (Form 1040NR), line 11. Recapture of a charitable earnings. But if you repaid more than $3,000, see contribution deduction if the Any reimbursements you received for Repayments in Pub. 525 for details on charitable organization disposes of these expenses that were not reported how to report the repayment. the donated property within 3 years to you in box 1 of your Form W-2. Line 21 Other income. Use line 21 of the contribution. See Recapture if For more details, see Pub to report any other income effectively no exempt use in Pub Line 25 Health savings account connected with your U.S. business that Canceled debts. These amounts (HSA) deduction. You may be able to is not reported elsewhere on your may be shown in box 2 of Form 1099-C take this deduction if contributions -16- Instructions for Form 1040NR (2010)
17 (other than employer contributions, Line 27 One-half of 2010, even if the child was not your rollovers, and qualified HSA funding self-employment tax. If you were dependent. distributions from an IRA) were made to self-employed and owe One of the following statements your HSA for See Form self-employment tax, fill in Schedule SE must be true. Line 26 Moving expenses. (Form 1040) to figure the amount of You were self-employed and had a Employees and self-employed persons your deduction. See the instructions for net profit for the year. (including partners) can deduct certain Schedule SE (Form 1040) for more You used one of the optional moving expenses. The move must be in information. methods to figure your net earnings connection with employment that from self-employment on Schedule SE Line 28 Self-employed SEP, generates effectively connected (Form 1040). SIMPLE, and qualified plans. If you income. were self-employed or a partner, you A child includes your son, daughter, If you moved in connection with your may be able to take this deduction. See stepchild, adopted child, or foster child job or business or started a new job, Pub. 560, Retirement Plans for Small (defined on page 8). you may be able to take this deduction. Business; or, if you were a minister, The insurance plan must be But your new workplace must be at Pub. 517, Social Security and Other established under your business. Your least 50 miles farther from your old Information for Members of the Clergy personal services must have been a home than your old home was from and Religious Workers. material income-producing factor in the your old workplace. If you had no business. former workplace, your new workplace Line 29 Self-employed health must be at least 50 miles from your old insurance deduction. You may be But if you were also eligible to home. The deduction generally is able to deduct the amount you paid for participate in any subsidized health limited to moves to or within the United health insurance for yourself, your plan maintained by your or your States or its possessions. If you meet spouse, and your dependents. Effective spouse s employer for any month or these requirements, see Pub Use March 30, 2010, the insurance also can part of a month in 2010, amounts paid Form 3903 to figure the amount to cover your child (defined on this page) for health insurance coverage for that enter on this line. who was under age 27 at the end of month cannot be used to figure the deduction. In addition, effective March 30, 2010, if you were eligible for any month or part of a month, to participate Self-Employed Health Insurance in any subsidized health plan Deduction Worksheet Line 29 Keep for Your Records maintained by the employer of either your dependent or your child who was under age 27 at the end of 2010, do not Before you begin: If, during 2010, you were an eligible trade adjustment use amounts paid for coverage for that assistance (TAA) recipient, alternative TAA (ATAA) month to figure the deduction. recipient, reemployment trade adjustment assistance (RTAA) recipient, or Pension Benefit Guaranty Example If you were eligible to Corporation (PBGC) pension recipient, see the Note on participate in a subsidized health plan this page. maintained by your spouse s employer Be sure you have read the Exception on this page to see from September 30 through December if you can use this worksheet instead of Pub. 535, 31, you cannot use amounts paid for Business Expenses, to figure your deduction. health insurance coverage for September through December to figure 1. Enter the total amount paid in 2010 for health insurance coverage your deduction. established under your business for 2010 for you, your spouse, Note. If, during 2010, you were an and your dependents. Effective March 30, 2010, your insurance eligible trade adjustment assistance can also cover your child who was under age 27 at the end of (TAA) recipient, alternative TAA (ATAA) 2010, even if the child was not your dependent. But do not include recipient, reemployment trade amounts for any month you were eligible to participate in an adjustment assistance (RTAA) employer-sponsored health plan (explained on this page) recipient, or Pension Benefit Guaranty 2a. Enter your net profit* and any other earned income** from the Corporation (PBGC) pension recipient, business under which the insurance plan is established (excluding you must complete Form 8885 before the self-employed health insurance deduction), minus any completing the worksheet on this page. deduction on Form 1040NR, line 28. Do not include Conservation When figuring the amount to enter on Reserve Program payments exempt from self-employment tax... 2a. line 1 of the worksheet on this page, do 2b. If you pay self-employment tax, complete Schedule SE (Form not include: 1040) as a worksheet for purposes of this line. When completing Any amounts you included on Form Section A, line 3, or Section B, line 3, of the worksheet Schedule 8885, line 4, SE, treat the amount from Form 1040NR, line 29, as zero. Enter Any qualified health insurance on this line the amount shown on that worksheet Schedule SE, premiums you paid to U.S. Section A, line 6, or Section B, line b. Treasury-HCTC, or 2c. Subtract line 2b from line 2a... 2c. Any health coverage tax credit 3. Self-employed health insurance deduction. Enter the smaller advance payments shown in box 1 of of line 1 or line 2c here and on Form 1040NR, line Form 1099-H. If you qualify to take the deduction, use the worksheet on this page to *If you used either optional method to figure your net earnings from self-employment, figure the amount you can deduct. do not enter your net profit. Instead, enter the amount from Schedule SE (Form Exception. Use Pub. 535 instead 1040), Section B, line 4b. of the worksheet on this page to figure **Earned income includes net earnings and gains from the sale, transfer, or your deduction if either of the following licensing of property you created. However, it does not include capital gain income. applies. Instructions for Form 1040NR (2010) -17-
18 You had more than one source of Line 30 Penalty on early Line 31 Scholarship and fellowship income subject to self-employment tax. withdrawal of savings. The Form grants excluded. If you received a You are using amounts paid for 1099-INT or Form 1099-OID you scholarship or fellowship grant and qualified long-term care insurance to received will show the amount of any were a degree candidate, enter figure the deduction. penalty you were charged. amounts used for tuition and course-related expenses (fees, books, supplies, and equipment), but only to IRA Deduction Worksheet Line 32 Keep for Your Records! If you were age 70 1 /2 or older at the end of 2010, you cannot deduct any contributions made to your traditional IRA or treat them as nondeductible contributions. Do not complete this worksheet for anyone age 70 1 /2 or older at the end of Before you begin: Be sure you have read the list on page 19. You may not be able to use this worksheet. Figure any write-in adjustments to be entered on the dotted line next to line 35 (see the instructions for line 35 on page 21). If you checked filing status box 3, 4, or 5, and you lived apart from your spouse for all of 2010, enter D on the dotted line next to Form 1040NR, line 32. If you do not, you may get a math error notice from the IRS. 1. Were you covered by a retirement plan (see page 20)?... Yes No Next. If you checked No on line 1, skip lines 2 through 6, enter the applicable amount below on line 7, and go to line 8. $5,000, if under age 50 at the end of $6,000, if age 50 or older but under age 70 1 /2 at the end of Otherwise, go to line Enter the amount shown below that applies to you. } Single or you checked filing status box 3, 4, or 5 and you lived apart from your spouse for all of 2010, enter $66,000 Qualifying widow(er), enter $109, You checked filing status box 3, 4, or 5 and you lived with your spouse at any time in 2010, enter $10, Enter the amount from Form 1040NR, line Enter the total of the amounts from Form 1040NR, lines 24 through 31, plus any write-in adjustments you entered on the dotted line next to line Subtract line 4 from line Is the amount on line 5 less than the amount on line 2? No. None of your IRA contributions are deductible. For details on STOP nondeductible IRA contributions, see Form Yes. Subtract line 5 from line 2. Follow the instruction below that applies to you. } If single, or you checked filing status box 3, 4, or 5, and the result is $10,000 or more, enter the applicable amount below on line 7 and go to line 8. i. $5,000, if under age 50 at the end of ii. $6,000, if age 50 or older but under age 70 1 /2 at the end of Otherwise, go to line If qualifying widow(er), and the result is $20,000 or more, enter the applicable amount below on line 7 and go to line 8. i. $5,000, if under age 50 at the end of ii. $6,000 if age 50 or older but under age 70 1 /2 at the end of Otherwise, go to line Instructions for Form 1040NR (2010)
19 IRA Deduction Worksheet Line 32 (continued) 7. Multiply line 6 by the percentage below that applies to you. If the result is not a multiple of $10, increase it to the next multiple of $10 (for example, increase $ to $500). If the result is $200 or more, enter the result. But if it is less than $200, enter $200. Single or you checked filing status box 3, 4, or 5, multiply by 50% (.50) (or by } 60% (.60) if you are age 50 or older at the end of 2010) Qualifying widow(er), multiply by 25% (.25) (or by 30% (.30) if you are age 50 or older at the end of 2010). But if you checked No on line 1, then multiply by 50% (.50) (or by 60% (.60) if age 50 or older at the end of 2010) Enter the total of your wages, salaries, tips, etc. Generally, this is the amount reported in box 1 of Form W-2. See below for exceptions Enter the earned income you received as a self-employed individual or a partner. Generally, this is your net earnings from self-employment if your personal services were a material income-producing factor, minus any deductions on Form 1040NR, lines 27 and 28. If zero or less, enter -0-. For more details, see Pub Add lines 8 and Enter traditional IRA contributions made, or that will made by April 18, 2011, for 2010 to your IRA Enter the smallest of line 7, 10, or 11. This is the most you can deduct. Enter this amount on Form 1040NR, line 32. Or, if you want, you can deduct a smaller amount and treat the rest as a nondeductible contribution (see Form 8606) the extent the amounts are included on contributions made to your traditional 1099-MISC. If it is not, contact your line 12. See the examples in the IRA for 2010 or treat them as employer or the payer for the amount of instructions for line 12 on page 12. nondeductible contributions. the income. Line 32 IRA deduction. If you 2. You cannot deduct contributions 6. You cannot deduct contributions made contributions to a traditional to a Roth IRA. But you may be able to to your spouse s IRA. individual retirement arrangement (IRA) take the retirement savings 7. Do not include qualified rollover for 2010, you may be able to take an contributions credit (saver s credit). See contributions in figuring your deduction. IRA deduction. But you must have had the instructions for line 47 on page 23. Instead, see the instructions for lines earned income to do so. If you were 3. You cannot deduct elective 16a and 16b on page 13. self-employed, earned income is deferrals to a 401(k) plan, 403(b) plan, 8. Do not include trustees fees that generally your net earnings from section 457 plan, SIMPLE plan, or the were billed separately and paid by you self-employment if your personal federal Thrift Savings Plan. These for your IRA. These fees can be services were a material amounts are not included as income in deducted only as an itemized deduction income-producing factor. See Pub. 590 box 1 of your Form W-2. But you may on Schedule A. for more details. be able to take the retirement savings 9. If the total of your IRA deduction A statement should be sent to you contributions credit. See the on line 32 plus any nondeductible by May 31, 2011, that shows all instructions for line 47 on page 23. contribution to your traditional IRAs contributions to your traditional IRA for 4. If you made contributions to your shown on Form 8606 is less than your IRA in 2010 that you deducted for total traditional IRA contributions for 2009, do not include them in the 2010, see Pub. 590 for special rules. If you made any nondeductible worksheet. TIP contributions to a traditional IRA 5. If you received income from a for 2010, you must report them By April 1 of the year after the nonqualified deferred compensation TIP on Form year in which you turn age 70 1 /2, plan or nongovernmental section 457 you must start taking minimum Use the worksheet on page 18 and plan that is included in box 1 of your required distributions from your this page to figure the amount, if any, of Form W-2, or in box 7 of Form traditional IRA. If you do not, you may your IRA deduction. But read the 1099-MISC, do not include that income have to pay a 50% additional tax on the following list before you fill in the on line 8 of the worksheet. The income amount that should have been worksheet. should be shown in (a) box 11 of your distributed. For details, including how to 1. If you were age 70 1 /2 or older at Form W-2, (b) box 12 of your Form W-2 figure the minimum required the end of 2010, you cannot deduct any with code Z, or (c) box 15b of Form distribution, see Pub Instructions for Form 1040NR (2010) -19-
20 Were you covered by a retirement The person for whom the expenses plan? If you were covered by a Line 33 Student loan interest were paid must have been an eligible retirement plan (qualified pension, deduction. You can take this student (see below). However, a loan is profit-sharing (including 401(k)), deduction only if all of the following not a qualified student loan if (a) any of annuity, SEP, SIMPLE, etc.) at work or apply. the proceeds were used for other through self-employment, your IRA You paid interest in 2010 on a purposes, or (b) the loan was from deduction may be reduced or qualified student loan (explained either a related person or a person who eliminated. But you still can make below). borrowed the proceeds under a contributions to an IRA even if you You checked filing status box 1, 2, or qualified employer plan or a contract cannot deduct them. In any case, the 6. purchased under such a plan. To find income earned on your IRA Your modified AGI is less than out who is a related person, see Pub. contributions is not taxed until it is paid $75,000. Use lines 2 through 4 of the 970. to you. worksheet below to figure your modified AGI. Qualified higher education The Retirement plan box in box 13 You are not claimed as a dependent expenses. Qualified higher education of Form W-2 should be checked if you on someone else s (such as your expenses generally include tuition, were covered by a plan at work even if parent s) 2010 tax return. fees, room and board, and related you were not vested in the plan. You expenses such as books and supplies. also are covered by a plan if you were Use the worksheet below to figure The expenses must be for education in self-employed and had a SEP, your student loan interest deduction. a degree, certificate, or similar program SIMPLE, or qualified retirement plan. at an eligible educational institution. An Qualified student loan. A qualified If you were covered by a retirement eligible educational institution includes student loan is any loan you took out to plan and you file Form 8815 or you most colleges, universities, and certain pay the qualified higher education exclude employer-provided adoption vocational schools. You must reduce expenses for any of the following benefits, see Pub. 590 to figure the the expenses by the following benefits. individuals. amount, if any, of your IRA deduction. Employer-provided educational 1. Yourself or your spouse. assistance benefits that are not Special rule for married 2. Any person who was your included in box 1 of Form(s) W-2. individuals. If you checked filing dependent when the loan was taken Excludable U.S. series EE and I status box 3, 4, or 5, and you were not out. savings bond interest from Form covered by a retirement plan but your 3. Any person you could have Any nontaxable distribution of spouse was, you are considered claimed as a dependent for the year the qualified tuition program earnings. covered by a plan unless you lived loan was taken out except that: Any nontaxable distribution of apart from your spouse for all of a. The person filed a joint return, Coverdell education savings account See Pub. 590 for more details. b. The person had gross income earnings. that was equal to or more than the Any scholarship, educational You may be able to take the exemption amount for that year ($3,650 assistance allowance, or other TIP retirement savings contributions for 2010), or payment (but not gifts, inheritances, credit. See the line 47 c. You could be claimed as a etc.) excluded from income. instructions on page 23. dependent on someone else s return. For more details on these expenses, see Pub Eligible student. An eligible student Student Loan Interest Deduction is a person who: Worksheet Line 33 Keep for Your Records Was enrolled in a degree, certificate, or other program (including a program Before you begin: Figure any write-in adjustments to be entered on the of study abroad that was approved for dotted line next to line 35 (see the instructions for line 35 credit by the institution at which the on page 21). student was enrolled) leading to a See the instructions for line 33 on this page. recognized educational credential at an eligible educational institution, and 1. Enter the total interest you paid in 2010 on qualified student loans Carried at least half the normal (see above). Do not enter more than $2, full-time workload for the course of 2. Enter the amount from Form 1040NR, line study he or she was pursuing. 3. Enter the total of the amounts from Form 1040NR, lines 24 through 32, plus any write-in adjustments Line 34 Domestic production you entered on the dotted line next to line activities deduction. You may be able to deduct up to 9% of your 4. Subtract line 3 from line qualified production activities income 5. Is line 4 more than $60,000? from the following activities. No. Skip lines 5 and 6, enter -0- on line 7, and go 1. Construction of real property to line 8. performed in the United States. Yes. Subtract $60,000 from line Engineering or architectural 6. Divide line 5 by $15,000. Enter the result as a decimal (rounded to at services performed in the United States least three places). If the result is or more, enter for construction of real property in the 7. Multiply line 1 by line United States. 8. Student loan interest deduction. Subtract line 7 from line 1. Enter 3. Any lease, rental, license, sale, the result here and on Form 1040NR, line 33. Do not include this exchange, or other disposition of: amount in figuring any other deduction on your return (such as on a. Tangible personal property, Schedule A (Form 1040NR), Schedule C (Form 1040), Schedule E computer software, and sound (Form 1040), etc.) recordings that you manufactured, produced, grew, or extracted in whole -20- Instructions for Form 1040NR (2010)
Instructions for Form 1040NR
2014 Instructions for Form 1040NR U.S. Nonresident Alien Income Tax Return Department of the Treasury Internal Revenue Service Contents Page What's New...1 General Instructions...2 Resident or Nonresident
Instructions for Form 1040NR-EZ
2012 Instructions for Form 1040NR-EZ Department of the Treasury Internal Revenue Service U.S. Income Tax Return for Certain Nonresident Aliens With No Dependents Section references are to the Internal
Instructions for Form 1040NR-EZ
2010 Instructions for Form 1040NR-EZ U.S. Income Tax Return for Certain Nonresident Aliens With No Dependents Department of the Treasury Internal Revenue Service Section references are to the Internal
Instructions for Form 1040NR-EZ U.S. Income Tax Return for Certain Nonresident Aliens With No Dependents
2009 Instructions for Form 1040NR-EZ U.S. Income Tax Return for Certain Nonresident Aliens With No Dependents Department of the Treasury Internal Revenue Service Section references are to the Internal
Instructions for Form 1040NR-EZ
2011 Instructions for Form 1040NR-EZ U.S. Income Tax Return for Certain Nonresident Aliens With No Dependents Department of the Treasury Internal Revenue Service Section references are to the Internal
Exemptions, Standard Deduction, and Filing Information
Department of the Treasury Internal Revenue Service Publication 501 Cat. No. 15000U Exemptions, Standard Deduction, and Filing Information For use in preparing 2013 Returns Contents What's New... 1 Reminders...
U.S. Tax Guide for Aliens
Publication 519 Contents U.S. Tax Guide for Aliens Introduction.................. 1 Cat. No. 15023T Department of the Treasury Internal Revenue Service For use in preparing 2015 Returns What's New... 2
2014 makes doing your taxes faster and easier.
1040A INSTRUCTIONS 2014 makes doing your taxes faster and easier. is the fast, safe, and free way to prepare and e-file your taxes. See. Get a faster refund, reduce errors, and save
U.S. Taxation of J-1 Exchange Visitors
U.S. Taxation of J-1 Exchange Visitors By Paula N. Singer, Esq. 1 ONESOURCE TAX INFORMATION REPORTING The J-1 Exchange Visitor Program has long been used by institutions of higher education, teaching hospitals
Instructions for Form 5329
2014 Instructions for Form 5329 Additional Taxes on Qualified Plans (Including IRAs) and Other Tax-Favored Accounts Department of the Treasury Internal Revenue Service Section references are to the Internal
Individual Retirement Arrangements (IRAs)
Department of the Treasury Internal Revenue Service Publication 590 Cat. No. 15160x Individual Retirement Arrangements (IRAs) (Including Roth IRAs and Education IRAs) For use in preparing 1999 Returns
2013 makes doing your taxes faster and easier.
1040A NOTE: THIS BOOKLET DOES NOT CONTAIN TAX FORMS INSTRUCTIONS 2013 makes doing your taxes faster and easier. is the fast, safe, and free way to prepare and e-file your taxes. See.
Guide for filing a United States Non-Resident Income Tax Return
Guide for filing a United States Non-Resident Income Tax Return Tax Year 2012 Prepared for Re:Sound By Deloitte LLP, Global Employer Services, Toronto September 6, 2013 Deloitte LLP and affiliated entities.
1. Nonresident Alien or Resident Alien?
U..S.. Tax Guiide for Non-Resiidents Table of Contents A. U.S. INCOME TAXES ON NON-RESIDENTS 1. Nonresident Alien or Resident Alien? o Nonresident Aliens o Resident Aliens Green Card Test Substantial Presence
Withholding of Tax on Nonresident Aliens and Foreign Entities
Department of the Treasury Internal Revenue Service Publication 515 Cat. No. 15019L Withholding of Tax on Nonresident Aliens and Foreign Entities For use in 2013 Contents What's New... 1 Reminders... 2
Instructions for Form 1040X (Rev. December 2014)
Instructions for Form X (Rev. December 2014) Amended U.S. Individual Income Tax Return Section references are to the Internal Revenue Code unless otherwise noted. Contents Page Future Developments... 1
Tax Relief for Victims of Terrorist Attacks
Department of the Treasury Internal Revenue Service Publication 3920 (Rev. September 2014) Cat. No. 32806E Tax Relief for Victims of Terrorist Attacks Contents Future Developments... 1 Introduction...
Frequently Asked Questions
Frequently Asked Questions Questions and answers are divided into the following topics BEFORE USING GLACIER Tax Prep ENTERING DATA INTO GLACIER Tax Prep RESULTS FROM GLACIER Tax Prep WHAT TO DO WHEN FINISHED
The Planner s Guide to the 1040 Form!
The Planner s Guide to the 1040 Form! Unlocking the annuity and life insurance planning opportunities that other financial advisors are out of position to pursue. PLAN Noun - \plan\ Definition - A program
1040EZ INSTRUCTIONS IRS NOTE: THIS BOOKLET DOES NOT CONTAIN TAX FORMS
1040EZ NOTE: THIS BOOKLET DOES NOT CONTAIN TAX FORMS INSTRUCTIONS 2012 makes doing your taxes faster and easier. is the fast, safe, and free way to prepare and e-file your taxes. See.
Instructions for Form W-8BEN (Rev. February 2006)
Instructions for Form W-8BEN (Rev. February 2006) Certificate of Foreign Status of Beneficial Owner for United States Tax Withholding Department of the Treasury Internal Revenue Service If applicable,
Application for Automatic Extension of Time To File U.S. Individual Income Tax Return
Form 4868 Department of the Treasury Internal Revenue Service (99) Application for Automatic Extension of Time To File U.S. Individual Income Tax Return Information about Form 4868 and its instructions,
Internal Revenue Service Wage and Investment
Internal Revenue Service Wage and Investment Stakeholder Partnerships, Education and Communication Winter/Spring 2015 Income Tax Workshop for Nonresident Aliens Please Note This workshop is for students
Certain Cash Contributions for Typhoon Haiyan Relief Efforts in the Philippines Can Be Deducted on Your 2013 Tax Return
Certain Cash Contributions for Typhoon Haiyan Relief Efforts in the Philippines Can Be Deducted on Your 2013 Tax Return A new law allows you to choose to deduct certain charitable contributions of money
Instructions for Form 8938
2015 Instructions for Form 8938 Statement of Specified Foreign Financial Assets Department of the Treasury Internal Revenue Service Section references are to the Internal Revenue Code unless otherwise
Federal Income Tax Brochure
P R O F E S S I O N A L N E T W O R K S International Student and Scholar Advising Federal Income Tax Brochure James Tenney, Judy Todd, Robin Catmur February, 2011 The information contained in this resource
INSTRUCTIONS IRS NOTE: THIS BOOKLET DOES NOT CONTAIN TAX FORMS. Including Instructions for Form 8949 and Schedules 8812, A, C, D, E, F, J, R, and SE
1040 NOTE: THIS BOOKLET DOES NOT CONTAIN TAX FORMS INSTRUCTIONS Including Instructions for Form 8949 and Schedules 8812, A, C, D, E, F, J, R, and SE 2012 Get a faster refund, reduce errors, and save paper.
Tax Information for Foreign National Students, Scholars and Staff
Information for Foreign National Students, Scholars and Staff I. Introduction For federal income tax purposes, foreign national students and scholars are categorized in one of two ways: Nonresident alien
Individual Retirement Arrangements (IRAs)
Department of the Treasury Internal Revenue Service Publication 590 Cat. No. 15160x Individual Retirement Arrangements (IRAs) (Including Roth IRAs and Education IRAs) For use in preparing 2000 Returns
Prepared by Mark Brockman, CPA 6435 Lindyann Lane, Houston, TX 77008-3233 (832) 618-1932 (evenings/weekends)
International Students and Scholars Taxation Workshop University of Texas Health Science Center at Houston Office of International Affairs February 28, 2012 Prepared by Mark Brockman, CPA 6435 Lindyann
Frequently Asked Tax Questions 2014 Tax Filing Season
Frequently Asked Tax Questions 2014 Tax Filing Season Q. How do I determine if I am a resident alien for tax purposes? A. F-1, J-1, M or Q students are considered non-residents for tax purposes for their
Welcome to Tax Orientation for International Students & Scholars for the 2013 Filing Season" Marea Clarke" Paul Cullen
Welcome to Tax Orientation for International Students & Scholars for the 2013 Filing Season" Marea Clarke" Paul Cullen Overview of the U.S. Income Tax System" Your employer withholds from your earnings
Instructions for Form 8233 (Rev. June 2011)
Instructions for Form 8233 (Rev. June 2011) Department of the Treasury Internal Revenue Service (Use with the March 2009 revision of Form 8233.) Exemption From Withholding on Compensation for Independent
Instructions for Form 5329
2010 Instructions for Form 5329 Additional Taxes on Qualified Plans (Including IRAs) and Other Tax-Favored Accounts Department of the Treasury Internal Revenue Service Section references are to the Internal
Instructions for Form 8938 (Rev. December 2014)
Instructions for Form 8938 (Rev. December 2014) Statement of Specified Foreign Financial Assets Department of the Treasury Internal Revenue Service Section references are to the Internal Revenue Code unless.
Line-by-Line Instructions for Schedule 1, Additions and Subtractions
and interest and send you a bill. If you annualize your income, you must complete and attach an MI-2210. Enter the penalty and interest amounts on the lines provided. Line 35: Refund. This includes any
Your Federal Income Tax. TAX YEAR 2012 Jan. 1 - Dec. 31, 2012 Download via efile.com. For Individuals. Contents. Department of the Treasury
Department of the Treasury Your Federal Income Tax For Individuals TAX YEAR 2012 Jan. 1 - Dec. 31, 2012 Download via efile.com Internal Revenue Service All material in this publication may be reprinted
U.S. TAX ISSUES FOR CANADIANS
March 2015 CONTENTS Snowbirds Canadians owning U.S. rental properties Summary U.S. TAX ISSUES FOR CANADIANS If you own rental property in the United States or spend extended periods of time there, you
1040 U.S. Individual Income Tax Return 2015
Form Department of the Treasury - Internal Revenue Service (99) 040 U.S. Individual Income Tax Return OMB No. 545-0074 For the year Jan. -Dec. 3,, or other tax year beginning,, ending, 20 Your first name
INTERNATIONAL TAX LAW FREQUENTLY ASKED QUESTIONS
INTERNATIONAL TAX LAW FREQUENTLY ASKED QUESTIONS 1 Table of Contents Page(s) Basic Rules of U.S. Taxation 3 Canadian Questions 4 Capital Gain Income (Nonresidents) 5 Currency Issues 6 Determination of
Instructions for Form 8606
2001 Instructions for Form 8606 Nondeductible IRAs and Coverdell ESAs Section references are to the Internal Revenue Code unless otherwise noted. Department of the Treasury Internal Revenue Service You
POWER FINANCIAL LLC JOHN POWER PO BOX 862 BISMARCK, ND 58502 (0) Organizer Mailing Slip
POWER FINANCIAL LLC JOHN POWER PO BOX 862 BISMARCK, ND 58502, (0) Organizer Mailing Slip TAX ORGANIZER Dear Client, Enclosed is your Tax Organizer for tax year 2012. Your Organizer contains several sections
Preparer Review. Tax Ease, LLC 39270 Paseo Padre Pkwy #624 May, 2011 Fremont, CA 94538. 877-829-2667 Fax: 510-779-5251
Preparer Review Tax Ease, LLC 39270 Paseo Padre Pkwy #624 May, 2011 Fremont, CA 94538 877-829-2667 Fax: 510-779-5251 1 Introduction Introduction We have designed this course with a heavy &
2015 Publication 4011
01 Publication 4011 Foreign Student and Scholar Volunteer Resource Guide For Use in Preparing Tax Year 01 Returns»» Volunteer Income Tax Assistance (VITA)»» Tax Counseling for the Elderly (TCE) For the
Partner's Instructions for Schedule K-1 (Form 1065)
2014 Partner's Instructions for Schedule K-1 (Form 1065) Partner's Share of Income, Deductions, Credits, etc. (For Partner's Use Only) Department of the Treasury Internal Revenue Service Section references
LOCAL 348 ANNUITY FUND 9235 4 TH AVENUE, BROOKLYN, NY 11209
TEL. # 718-745-3487 FAX # 718-745-2976 CLAIM FOR DEATH BENEFIT INSTRUCTIONS: - Please print in ink or type. - Complete all applicable items. - Sign and have this form notarized - Attach a certified copy
Tax Withholding and Estimated Tax
Department of the Treasury Internal Revenue Service Publication 505 Cat. No. 15008E Tax Withholding and Estimated Tax For use in 2013 Get forms and other Information faster and easier by: Internet IRS.gov
Partner's Instructions for Schedule K-1 (Form 1065-B)
2015 Partner's Instructions for Schedule K-1 (Form 1065-B) Partner's Share of Income (Loss) From an Electing Large Partnership (For Partner's Use Only) Department of the Treasury Internal Revenue Service
Income Tax Organizer
Income Tax Organizer This organizer will help you organize your tax information (and make sure that you don't miss important deductions). We hope you find it useful and informative! (This form was prepared
Instructions for Form 8962
2014 Instructions for Form 8962 Premium Tax Credit (PTC) Department of the Treasury Internal Revenue Service Complete Form 8962 only for health insurance! coverage in a qualified health plan (described
Tax Tips for Tax Year 2007 (Issued January 2008)
Topic Tax Tips for Tax Year 2007 (Issued January 2008) Publish Before Tax Tip 1 What s New for 2007? January 31, 2008 Tax Tip 2 Filing Requirements January 31, 2008 Tax Tip 3 Tax Assistance January 31,
GENERAL INSTRUCTIONS FOR COMPLETING YOUR RETURN
GENERAL INSTRUCTIONS FOR COMPLETING YOUR RETURN PITTSBURGH CITY & SCHOOL DISTRICT The City of Pittsburgh Earned Income Tax is levied at the rate of 1% under ACT 511. The Pittsburgh School District Earned
Distributions from Individual Retirement Arrangements (IRAs)
Department of the Treasury Internal Revenue Service Contents What's New for 2014 1 Publication 590-B What's New for 2015 1 Cat No 66303U Reminders 2 Distributions from Individual Retirement Arrangements
IRS Federal Income Tax Publications provided by efile.com
IRS Federal Income Tax Publications provided by efile.com This publication should serve as a relevant source for up to date tax answers to your tax questions. Unlike most tax forms, many tax publications
Part 4. Comprehensive Examples and Forms Example One: Active minister
Part 4. Comprehensive Examples and Forms Example One: Active minister te: This example is based on an illustrated example contained at the end of IRS Publication 517. Rev. John Michaels is the minister
University of Northern Iowa
University of Northern Iowa Direct Deposit of Payroll Authorization Form Name (Please Print) (Last, First, MI) UNI ID# I hereby authorize the University of Northern Iowa to initiate direct deposit credit
Individual Retirement Arrangements (IRAs)
Department of the Treasury Internal Revenue Service Publication 590 Cat. No. 15160X Individual Retirement Arrangements (IRAs) For use in preparing 2013 Returns Contents What's New for 2013... 2 What's
Instructions for Form 8960
2014 Instructions for Form 8960 Net Investment Income Tax Individuals, Estates, and Trusts Department of the Treasury Internal Revenue Service Section references are to the Internal Revenue Code unless
Instructions for Form 8606
2014 Instructions for Form 8606 Nondeductible IRAs Department of the Treasury Internal Revenue Service Section references are to the Internal Revenue Code unless otherwise noted. General Instructions Future
Tax Guide for Seniors
Department of the Treasury Internal Revenue Service Publication 554 Contents What's New 1 Reminders 2 Cat No 15102R Introduction 2 Tax Guide for Seniors Chapter 1 2014 Filing Requirements 4 General Requirements
Instructions for Form 8853
2011 Instructions for Form 8853 Archer MSAs and Long-Term Care Insurance Contracts Department of the Treasury Internal Revenue Service Section references are to the Internal Revenue Code unless otherwise
Special Tax Notice for UC Retirement Plan Distributions
Special Tax Notice for UC Retirement Plan Distributions Special Tax Notice for UC Retirement Plan Distributions This notice explains how you can continue to defer federal income tax on your retirement
Shareholder's Instructions for Schedule K-1 (Form 1120S)
2015 Shareholder's Instructions for Schedule K-1 (Form 1120S) Shareholder's Share of Income, Deductions, Credits, etc. (For Shareholder's Use Only) Department of the Treasury Internal Revenue Service Section
National Electrical Annuity Plan Lump Sum Benefit Application
National Electrical Annuity Plan Lump Sum Benefit Application To avoid delays in the processing and payment of your benefit, please follow these instructions carefully and completely. 1. Print all information
Frequently Asked Questions for Non Resident Alien Taxation
Frequently Asked Questions for Non Resident Alien Taxation A. Income 1. Compensation I worked abroad for the summer. Do I have to include this income on my tax return? 2. Scholarships and Fellowships I
Franklin Templeton Retirement Plan Beneficiary Distribution Request
Franklin Templeton Retirement Plan Beneficiary Distribution Request For assistance, please call your financial advisor or Franklin Templeton Retirement Services at 1-800/527-2020. 1 PARTICIPANT (DECEDENT)
Outgoing Annuity Tax-Qualified Transfer Exchange, Conversion or Direct Rollover from RiverSource Life Insurance Co. of New York i
DOC0107138065 Service address: RiverSource Life Insurance Co. of New York 70500 Ameriprise Financial Center Minneapolis, MN 55474 Outgoing Annuity Tax-Qualified Transfer Exchange, Conversion or Direct
North Carolina. 2016 Income Tax Withholding Tables and Instructions for Employers.. NC - 30 Web 12-15. New for 2016
NC - 30 Web 12-15 North Carolina 2016 Income Tax Withholding Tables and Instructions for Employers You can file your return and pay your tax online at. Click on eservices. New
Selection of Partial Lump Sum Distribution
Selection of Partial Lump Sum Distribution MEMBER INFORMATION TMRS Identification Number (not required) Member s Name (first, middle, last) Social Security Number Mailing Address Daytime Phone Number
Tax Guide for Seniors
Department of the Treasury Internal Revenue Service Publication 554 Cat. No. 15102R Tax Guide for Seniors For use in preparing 2013 Returns Contents What's New... 1 Reminders... 2 Introduction... 2 Chapter
Questions. Please check the appropriate box and include all necessary details and documentation.
Questions Please check the appropriate box and include all necessary details and documentation. Yes No Personal Information Did your marital status change during the year? p p If yes, explain: Did you
2014 Tax Organizer. Thank you for taking the time to complete this Tax Organizer.
2014 Tax Organizer This Tax Organizer is designed to help you collect and report the information needed to prepare your 2014 income tax return. The attached worksheets cover income, deductions, and credits,
Who Must Make Estimated Tax Payments
2014 Form Estimated Income Tax for Estates and Trusts Section references are to the Internal Revenue Code unless otherwise noted. Future Developments For the latest information about developments related
2014 1040 Questionnaire
2014 1040 Questionnaire Please check the appropriate box. Any YES answers require you to attach details and/or documentation! Personal Information YES NO Did your marital status change during the year?...
Instructions for Form 8606 Nondeductible IRAs
2008 Instructions for Form 8606 Nondeductible IRAs Department of the Treasury Internal Revenue Service Section references are to the Internal Revenue Code unless otherwise noted. General Instructions Tax
Instructions for Form 2441
2015 Instructions for Form 2441 Child and Dependent Care Expenses Department of the Treasury Internal Revenue Service Future Developments For the latest information about developments related to Form 2441
ATTENTION: NEW NC-4 WITHHOLDING FORMS ENCLOSED
North Carolina Department of Revenue ATTENTION: NEW NC-4 WITHHOLDING FORMS ENCLOSED IMMEDIATE ACTION REQUIRED North Carolina Department of Revenue TO: IMPORTANT NOTICE: NEW NC-4 REQUIRED FOR PAYMENTS BEGINNING
Shareholder's Instructions for Schedule K-1 (Form 1120S)
2000 Department Shareholder's Instructions for Schedule K-1 (Form 1120S) Shareholder's Share of Income, Credits, Deductions, etc. (For Shareholder's Use Only) Section references are to the Internal Revenue
Instructions for Form 8853
2010 Instructions for Form 8853 Archer MSAs and Long-Term Care Insurance Contracts Department of the Treasury Internal Revenue Service Section references are to the Internal Revenue Code unless otherwise
2014 Client Organizer Questionnaire
2014 Client Organizer Questionnaire NOTE: We cannot complete your 2014 personal income tax returns without these questions being answered and the last page being signed. Please check the appropriate box
DOC010830482. RiverSource Life Account You Are Moving Assets From. Part 2. Account You Are Moving Assets To
DOC010830482 RiverSource Life Insurance Company 70100 Ameriprise Financial Center Minneapolis, MN 55474 Outgoing Annuity Tax-Qualified Transfer, Exchange, Conversion or Direct Rollover from RiverSource
2014 Instructions for Schedule E (Form 1040)
Department of the Treasury Internal Revenue Service 2014 Instructions for Schedule E (Form 1040) Supplemental Income and Loss Use Schedule E (Form 1040) to report income or loss from rental real estate,
1040 U.S. Individual Income Tax Return 2015
Form - (99) 1040 U.S. Individual Tax Return For the year Jan. 1-Dec. 31,, or other tax year beginning,, ending, 20 Your first name and initial Last name If a joint return, spouse's first name and initial
1999 Instructions for Schedule E, Supplemental Income and Loss
1999 Instructions for Schedule E, Supplemental Income Loss Use Schedule E (Form 1040) to report income or loss from rental real estate, royalties, partnerships, S corporations, estates, trusts, residual
1040EZ INSTRUCTIONS IRS NOTE: THIS BOOKLET DOES NOT CONTAIN TAX FORMS
1040EZ NOTE: THIS BOOKLET DOES NOT CONTAIN TAX FORMS INSTRUCTIONS 2011 makes doing your taxes faster and easier. is the fast, safe, and free way to prepare and e-file your taxes. See.
Slide 2. Income Taxes
Slide 1 Taxes Income taxes have been a part of American life since 1909 when the 16 th Amendment to the Constitution was ratified. You can t avoid taxes, so you might as well understand how taxes are structured | http://docplayer.net/14400928-Instructions-for-form-1040nr.html | CC-MAIN-2018-22 | refinedweb | 25,599 | 62.68 |
Eclipse Monkey offers two mechanisms to achieve this. First, you can right click on a script and choose "Copy for publication" to copy the script to the clipboard in a format that you can publish to a variety of destinations (e-mails, blogs, bugzilla entries, etc.). Eclipse Monkey will handle the conversions required by the selected destination, such as converting the script into valid HTML content if you are targeting a web page or blog entry. In the same way, Eclipse Monkey can extract the contents of your clipboard (where you may have copied a web page that contains a script published by another developer) and convert them back into scripts.
Secondly, Eclipse Monkey automatically downloads DOMs from the web if you still don't have them on your local machine. By analyzing information from the DOM metadata within the script, Eclipse Monkey can extract the update site the DOM comes from and fire up the Eclipse Update Manager to download it.
Use Case: Counting Lines of Code (LOCs)
Consider an example: creating a script that extracts some metrics from the projects that are in your Eclipse workspace and then displays them in a nice graphical way. The script will count the lines of code (LOCs) in one of your project's source files and display them in a pie chart grouped by package. (Find the whole example in the source code attached to this article.)
To analyze source files, the script reuses the javacore DOM described previously. To create the charts, it uses a custom DOM that wraps the open source JFreeChart library (other charting libraries should work fine too). The DOM is exposed with the jfreechart script variable and offers methods to create a new dataset for pie charts and show charts on the screen. Here's the complete script where these methods are invoked:
/*
* Menu: SLOC > SLOC by Package
* DOM:
dash/update/org.eclipse.eclipsemonkey.doms
* DOM:
*/
var text = "" ;
var i = 0 ;
function main() {
project = workspace.root.getProject("com.devx.monkey.doms");
srcFolder = project.getFolder("src");
srcroot = javacore.create(project).getPackageFragmentRoot(srcFolder);
p = jfreechart.dataSet.newPie();
// frg iterates over the packages of the project
for each (frg in openIfNeeded(srcroot).getChildren()) {
count = 0;
// cu iterates over the compilation units (source files)
for each( cu in openIfNeeded(frg).getCompilationUnits()){
source = cu.openable.buffer.contents;
sr = new Packages.java.io.StringReader(source);
lr = new Packages.java.io.LineNumberReader(sr);
while ( lr.readLine() != null) {}
count += lr.getLineNumber() ;
lr.close();
}
if (count != 0) {
// store the value into the chart
p.setValue(frg.elementName + "(" + count + ")",count);
}
}
chart = jfreechart.createPieChart3D("SLOC Chart",p);
jfreechart.show(chart);
}
function openIfNeeded(o) {
if (!o.isOpen())
o.open(null);
return o;
}
The calls to the methods jfreechart.dataset.newPie() and jfreechart.createPieChart3D() map to objects exposed through the DOM. The objects ultimately belong to the JFreeChart library, as shown here:
public JFreeChart createPieChart3D(String title, PieDataset dataset) {
JFreeChart chart =
ChartFactory.createPieChart3D(
title, dataset, false, false, false);
return chart;
}
The method jfreechart.show(), invoked at the end of the script, triggers the update() method of a custom Eclipse View that displays the chart. The following listing contains the relevant parts of the View source code:
public class ChartView extends ViewPart implements Observer {
// ChartComposite is a SWT widget to display charts
// provided by JFreechart
private ChartComposite chartComposite ;
private Composite parent;
private Label l;
@Override
public void createPartControl(Composite parent) {
this.parent = parent ;
l = new Label(parent, SWT.NONE);
l.setText("No chart to display");
}
public void update(Observable observable, Object obj) {
final JFreeChart chart = (JFreeChart) obj ;
if (chartComposite == null) {
l.dispose();
chartComposite = new ChartComposite(parent, SWT.NONE);
chartComposite.setLayoutData(
new GridData(GridData.FILL_BOTH));
}
chartComposite.setChart(chart);
chartComposite.redraw();
parent.layout();
}
// other non-relevant parts omitted...
}
Figure 2 shows the final result of the example.
Extend Eclipse with Your Own Scripts
With Eclipse Monkey, you can integrate interpreted scripts into Eclipse and leverage the benefits of an interpreted programming language to contribute components to the platform. By using a try-test-retry approach in developing new tools for your workspace, you avoid the complexities of Eclipse plug-in development as well as long debugging sessions. This is especially useful for quick and dirty tools like the ones shown in the article's examples. In addition, Eclipse Monkey allows you to extend the base Eclipse offering with additional DOMs and distribute your scripts without any effort from other colleagues and developers.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/Java/Article/35173/0/page/3 | CC-MAIN-2017-13 | refinedweb | 760 | 57.67 |
Table Of Content
Undertanding ReactJS Router with a basic example (NodeJS)
1- What is React Router?
-.
- The idea of the router is really helpful because you are working with React, a Javascript library for programming single page applications. To develop a React application, you have to write a lot of components but need only one file to serve users. That is index.html .
- The React Router helps you define dynamic URLs and select a suitable Component to render on the user browser corresponding to each URL.
<BrowserRouter> vs <HashRouter>
- The React Router provides you with the 2 components such as <BrowserRouter> & <HashRouter>. These two components are different in URL type to be created and synchronized by them.
// <BrowserRouter> // <HashRouter>
- <BrowserRouter> is used more commonly, it uses the History API included in HTML5 to monitor your router's history while <HashRouter> uses the hash of the URL ( window.location.hash) to remember everything. If you intend to support old browsers, you should be sealed to the <HashRouter>, or you want to create a React application using the client-side router, <HashRouter> is a reasonable choice.
<Route>
- The <Route> component defines a mapping between an URL and a Component. That means when the user visits by an URL on browser, a corresponding Component shall be renderred on interface.
<BrowserRouter> <Route exact path="/" component={Home}/> <Route path="/about" component={About}/> <Route path="/topics" component={Topics}/> </BrowserRouter> <HashRouter> <Route exact path="/" component={Home}/> <Route path="/about" component={About}/> <Route path="/topics" component={Topics}/> </HashRouter>
- The exact attribute is used in the <Route> to say that this <Route> only operates if the URL on the browser is matches absolutely the value of its path attribute.
<BrowserRouter>
...
<Route path="/about" component={About}/>
...
</BrowserRouter> ==> OK Work! ==> OK Work! ==> OK Work! ==> OK Work!
------------------- ==> Not Work! ==> Not Work! ==> Not Work!
<HashRouter>
...
<Route path="/about" component={About}/>
...
</HashRouter> ==> OK Work! ==> OK Work!
---------------- ==> Not Work! ==> Not Work!
<BrowserRouter>
...
<Route exact path="/about" component={About}/>
...
</BrowserRouter> ==> OK Work! ==> OK Work!
------------- ==> Not Work! ==> Not Work! ==> Not Work! ==> Not Work! ==> Not Work!
<HashRouter>
...
<Route exact path="/about" component={About}/>
...
</HashRouter> ==> OK Work!
---------------- ==> Not Work! ==> Not Work! ==> Not Work!
2- Create a project and install library
- First of all, you needs to install the create-react-app tool and create a React project with the name of react-router-basic-app:
# Install tool: npm install -g create-react-app # Create project named 'react-router-basic-app': create-react-app react-router-basic-app
- Your project is created:
- Next, CD to the project just created and perform the following command to install the react-router-dom library in your project:
# CD to your project cd react-router-basic-app # Install react-router-dom library: npm install --save react-router-dom
- Open your project on an editor which you are familiar to (For example, Atom). Open the package.json file, you shall see that the react-router-dom library has been added to your project.
- Start your application:
# Start App npm start
3- Write code
- Delete all the contents of the two files: App.css & App.js. We will write code for these 2 files.
- App.css
.main-route-place { border: 1px solid #bb8fce; margin:3px; padding: 5px; } .secondary-route-place { border: 1px solid #a2d9ce; margin: 5px; padding: 5px; }
- App.js
import React from "react"; import { BrowserRouter, Route, Link } from "react-router-dom"; import './App.css'; class App extends React.Component { render() { return ( <BrowserRouter> <div> <ul> <li> <Link to="/">Home</Link> </li> <li> <Link to="/about">About</Link> </li> <li> <Link to="/topics">Topics</Link> </li> </ul> <hr /> <div className="main-route-place"> <Route exact <Route path={`${this.props.match.url}/:topicId`} component={Topic} /> <Route exact path={this.props.match.url} render={() => <h3> Please select a topic. </h3> } /> </div> </div> ); } } class Topic extends React.Component { render() { return ( <div> <h3> {this.props.match.params.topicId} </h3> </div> ); } } export default App;
- It is not neccessary to change the two files: index.js & index.html:
- index.js
import React from 'react'; import ReactDOM from 'react-dom'; import './index.css'; import App from './App'; import registerServiceWorker from './registerServiceWorker'; ReactDOM.render(<App />, document.getElementById('root')); registerServiceWorker();
- index.html
<> <noscript> You need to enable JavaScript to run this app. </noscript> <div id="root"></div> </body> </html>
- Run your application and see the results on the browser:
These are online courses outside the o7planning website that we introduced, which may include free or discounted courses.
Complete React JS web developer with ES6 - Build 10 projects
The Complete React Native + Hooks Course [2020 Edition]
Building User Interface Using React And Flux
Create Web Apps with Meteor & React
Perfect React JS Course - Understand Relevant Details
Learn React by Building Real Projects
Building a TodoMVC Application in Vue, React and Angular
React for Beginners: A Complete Guide to Getting Started
JavaScript and React for Developers: Master the Essentials
Build Apps with ReactJS: The Complete Course
Learn React JS from scratch: Create hands on projects
Advanced Design Patterns with React
React, Redux, & Enzyme - Introducing Apps & Tests
React - Mastering Test Driven Development
Learn by Example : ReactJS | https://o7planning.org/en/12139/undertanding-react-router-with-a-basic-example | CC-MAIN-2020-45 | refinedweb | 837 | 54.12 |
Use Expo Web with the AWS Amplify Console
K
Updated on
・7 min read
Cover image by Phil Roeder on Flickr
UPDATE 02.06.2019: Expo v33 pre-view came out! Now Web builds work alongside Android and iOS builds, so I updated this post.
I'm currently working on my next video course called MVPs with AWS. It's about building Minimal Viable Software Products with AWS technology with a focus on serviceful serverless approaches.
I did customer interviews for that course; they revealed that people want to build cross-platform apps right at the start, so the front-end tech I choose was Expo. Expo lets us build Android and iOS apps with the JavaScript and React skills we already have from the web. In the v33 release, they also allow building PWAs with the help of react-native-web. This universal app approach allows having one front-end code-base for three platforms.
Customers also care about continuous delivery and tooling that eases the interaction with AWS, so I choose AWS Amplify with its SDK, CLI, and Console. Amplify offers a JavaScript SDK which integrates nicely with Expo and the CLI helps to build serverless backends without the need to get too deep into CloudFormation right from the start. With the Amplify Console, we can automate the deployment of backend and frontend on every push to our repository.
If you want to get updates to this project just follow me on Twitter
What
We will set up an Expo project configured for Android, iOS and Web builds with an Amplify controlled backend.
We need the following tools:
AWS Cloud9
A cloud-based IDE that can be accessed via a browser. It comes with Node.js, npm and the AWS SDK pre-installed. We could install these tools manually on our own if we want to use another editor.
GitHub Repository
The continuous delivery is triggered by Git commits, so we need a repository the Amplify Console can watch.
Expo
We use the Expo CLI to init a project and build an Android, iOS and Web client from it later.
AWS Amplify
We will use the Amplify CLI and Console. The CLI is used to add Amplify features to our Expo project, and the Console is used to watch our Git repository for commits that will trigger builds and deploys.
How
Prerequisites
An AWS account and a Cloud9 environment. This is pretty straight forward to set up, but people who struggle with it can look at the first five steps of this article.
A GitHub account.
The setup will consist of seven steps:
- Create an Expo Account and Login
- Initialize Expo Project
- Add Script and Dependencies
- Initialize Amplify Project
- Commit Amplify files to Git
- Set up and push changes GitHub Repository
- Set up and run Amplify Console Build
1. Create an Expo Account and Login
First, step is to create an Expo account, so our published Android and iOS projects will be automatically available on the Expo-Client.
Account Creation
An Expo account is totally free and can be created here.
Download and Login with Expo Client
We need to install the Expo client for the mobile platform of our choosing. The Android client can be found in the Play Store
After the install, we need to log in with our freshly created Expo account. This will make projects we published with the Expo-CLI available in the client automatically.
Login with the Expo-CLI
To log in with the Expo-CLI, we just need to run the login command, this will create some files in our
~/.expo that keeps track of our login state.
npx expo-cli login -u <USERNAME> -p <PASSWORD>
2. Initialize Expo Project
Next, we create a new project with the help of the Expo CLI tool.
npx expo-cli init universal-app --template blank@next
We give the project the name "Universal App".
3. Add Scripts and Dependencies
We need to install the
expo-cli package as
devDependency. Installing global dependencies locally in the
package.json makes them available as if they were global inside the npm scripts.
cd universal-app npm i -D expo-cli
At the moment of writing, Expo projects come without a build or publish script; we need to add this to our
package.json.
Adding a
build script to the
package.jsons
scripts section:
{ ... "scripts": { "login": "expo login -u $EXPO_USERNAME -p $EXPO_PASSWORD", "build": "expo build:web", "publish": "expo publish --non-interactive", ... }, ... }
The build script is needed for the web build, so Amplify Console can build and deploy it to AWS.
The publish script is needed for the Android and iOS build, so Expo will build and publish to the Expo clients on these platforms.
Since it uses the Expo infrastructure, it needs username and password.
We use environment variables for this, so the Expo credentials don't show up in the logs.
4. Initialize Amplify Project
Next, we have to add Amplify support to our project. Cloud9 uses a different file-name to store AWS credentials than the Amplify CLI expects, so we have to create a symlink before we can initialize the project.
ln -s ~/.aws/credentials ~/.aws/config npx @aws-amplify/cli init
This will ask us a few questions. We choose these answers:
? Enter a name for the project universal-app
? Enter a name for the environment dev
? Choose your default editor: None
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react-native
? Source Directory Path: /
? Distribution Directory Path: /web-build
? Build Command: npm run-script build
? Start Command: npm run-script start
Using default provider awscloudformation
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use default
This will create a basic Amplify backend infrastructure in the cloud; IAM roles, S3 buckets for the deployment of generated CloudFormation templates, etc.
5. Commit Amplify files to Git
The Amplify Console will build our project when we push any changes to a remote Git repository, so we need to get our changes into Git before we move on.
Then we can add all our new files and do an initial commit:
git add -A git commit -m "Init"
6. Set up and push changes to GitHub Repository
A new GitHub repository can be set up here. Let's call it like our project universal-app. To keep it simple, we create a public repository.
We also need to set up SSH authentication for our Cloud9 machine on Github.
A key-pair can be generated with:
ssh-keygen
We can look it up at our Cloud9 machine with:
tail /home/ec2-user/.ssh/id_rsa.pub
The public key needs to be given a title and pasted here.
After the creation, we can push our local changes:
git remote add origin git@github.com:<GITHUB_USERNAME>/expo-web-app.git git push -u origin master
7. Set up and run Amplify Console Build
Now that we have our project created, configured and up on GitHub, we can set up the Amplify Console to do our build.
To connect we need to go here.
I use
eu-west-1 here, but you can use whatever region you like.
Choose GitHub as a provider and the
universal-app repository we created, with branch
master.
In the following form, we get asked a few things for the setup.
Would you like Amplify Console to deploy changes to these resources with your frontend?
Yes, we want. Let's choose the
dev environment we created with the Amplify CLI. This is necessary if we wanted to add other Amplify features.
Select an existing service role or create a new one so Amplify Console may access your resources.
We need to create a new IAM role for this, but the process is done via a wizard. Only clicking Next until finished. Then we go back to the form, refresh the roles and add our newly created
amplifyconsole-backend-role
The build settings need to be edited. They were created automatically, but need to be tweaked a bit for Expo Web builds.
version: 0.1 backend: phases: build: commands: - amplifyPush --simple frontend: phases: preBuild: commands: - echo fs.inotify.max_user_watches=524288 | tee -a /etc/sysctl.conf && sysctl -p - npm ci - npm run login & - sleep 2s build: commands: - npm run publish --non-interactive - npm run build artifacts: baseDirectory: web-build files: - '**/*' cache: paths: - node_modules/**/* - $(npm root --global)/**/*
- Increase the file watchers for the system, otherwise
expo publishwill fail later.
npm ciruns so all dependencies (including
expo-cli) are installed.
- Run the login script but don't wait for it to finish (&), otherwise the build will freeze.
- Sleep two seconds, so the login script has some time to do its thing.
- Build and publish the Android and iOS versions to the Expo Client
- Build the Web version to Amplify
We also need to set up the Environment Variables for the Expo CLI login. We have to add variables named
EXPO_USERNAME and
EXPO_PASSWORD here filled with the corresponding values we used when we created an Expo account.
After this, we can click on Next and then Save and Deploy.
Bonus: Push a Change to Trigger Build
Now that everything is up and running, we can change a file and push it to our remote repository at GitHub and watch the Amplify Console magic happen.
Let's change the text in our
App.js
import React from 'react'; import { StyleSheet, Text, View } from 'react-native'; export default class App extends React.Component { render() { return ( <View style={styles.container}> <Text>Awesome universal Expo app backed by AWS Amplify!</Text> </View> ); } } const styles = StyleSheet.create({ container: { flex: 1, backgroundColor: '#fff', alignItems: 'center', justifyContent: 'center', }, });
Add the changes, commit, and push it.
git add -A git commit -m "Change text" git push
We can watch the builds here.
Conclusion
When Expo version 33 is released we will be able to leverage Expo on iOS, Android and the Web with one code-base without sacrificing the power of native UI widgets.
Amplify accelerates this development even more by easing the pain of managing AWS services and leveraging the full force of serviceful serverless computing and providing one back-end for Android, iOS and Web clients.
I believe this stack to become a solid foundation to build apps for years to go.
Here is the project code
Dear Developers, What's Your Work/Home Setup Like?
Hey everyone, I want this to be a very casual conversation. I'm looking to set up my personal workspa...
Thank you so much for this. Very helpful.
Would this be very different if we use AWS CodeCommit instead of GitHub?
Probably not.
Great article, very informative! 😁❤️🦄
Thanks! Glad you like it 😳 | https://practicaldev-herokuapp-com.global.ssl.fastly.net/kayis/use-expo-web-with-the-aws-amplify-console-170m | CC-MAIN-2020-10 | refinedweb | 1,793 | 64.51 |
Hello m68k-porters As I don't have access to m68k.debian.org/kullervo I ask you if you could try to compile and run the latest unstable mysql-server.deb with the following modification in the rules file: # disabled others until someone proves the opposide (these architecures # are lacking "thread mutex" support in db3. Problem known to upstream. - ifeq ($(findstring $(ARCH),alpha i386),$(ARCH)) + ifeq ($(findstring $(ARCH),alpha i386 m68k),$(ARCH)) USE_BDB=--with-berkeley-db endif assuming that m68k is what "dpkg-architecture -qDEB_BUILD_ARCH" prints out. If then mysqld ist a) starting and b) creates a .db database with $ echo 'CREATE TABLE t (t int) TYPE=BDB;' | mysql test then it's very good. Please check if the resulting files in /var/lib/mysql/test/t.* are really .db/.frm and not .MYD/.MYI/.frm as mysql tends to ignore this option if the support isn't correctly compiled and gives no warning when using TYPE=BDB. bye, -christian- -- codito ergo sum - I code, therefore I am! | https://lists.debian.org/debian-68k/2001/05/msg00005.html | CC-MAIN-2017-22 | refinedweb | 168 | 66.84 |
:hey, : :I just ran (via a programming mistake on my side) into a mall bug in lwkt= :_alloc_thread: : : if (stack =3D=3D NULL) { :#ifdef _KERNEL : stack =3D (void *)kmem_alloc(&kernel_map, stksize); :#else : stack =3D libcaps_alloc_stack(stksize); :#endif : flags |=3D TDF_ALLOCATED_STACK; : } : :kmem_alloc() however can return NULL if there is no free memory. Arguabl= :y, if there is no free memory to satisfy a thread stack, you're hosed any= :ways, but nevertheless. : :I'm not sure how to fix this. Maybe something like this will be sufficie= :nt? : :while (stack =3D=3D NULL) { : stack =3D (void *)kmem_alloc(&kernel_map, stksize); : if (stack =3D=3D NULL) : tsleep(&kernel_map, 0, "stckalc", hz); :} : :I know that there is nobody waking us up in this case, but one second sho= :uld help the situation. Or we add the possibility of an error return to = :lwkt_alloc_thread(). : :comments? : :cheers : simon I think kmem_alloc() should probably be passed allocation flags just like kmalloc() is, and panic if it would otherwise return NULL and M_NULLOK wasn't passed to it. -Matt | https://www.dragonflybsd.org/mailarchive/kernel/2007-02/msg00128.html | CC-MAIN-2017-22 | refinedweb | 171 | 56.59 |
/1.There is a task to reverse a string using splitting the character dot(.) , Without reversing the word.
2.The string should be given by the user like :
input string : This.is.not.the.proper.way
output reverse string : way.proper.the.not.is.This
3.Please clear ,what is the problem in the code ?/
#include <stdio.h> #include <string.h> char *func(int i, int j, char *str){ char new[50]; int k = 0; int l; for (k=0,l=i;k<=j-i+1,l<=j;k++,l++){ new[k]= str[l]; }; return *new; } int main() { int i,j,x; char str[50]; printf("Give any string "); scanf(" %s",str); x = strlen(str); j = x-1; int k = 0; for (i=x-1;i>=0;i--){ if (str[i] == '.'){ printf("%s", func(i,j,str)); k = k+1; j = i-j; }; }; return 0; }
How exactly can you take a string, split it, reverse it and join it back together again without the brackets, commas, etc. using python?
Having fun in c++ on CodeWars trying to reverse the letters of the words in a string, words delimited by spaces ie. "Hello my friend" --> "olleH ym dneirf", where extra spaces are not lost.
My answer is failing the tests, but when I diff my answer and the suggested answer there is no output. I also tried checking the length of the original and reversed strings and there is a significant difference in their lengths, depending on the string. However, when I compare the outputs they are again identical in terms of length, and there is no trailing whitespace.
int main() { std::string s("The quick brown fox jumps over the lazy dog."); std::cout <<"Old: "<<s<<std::endl; std::cout <<"New: "<<reverse(s)<<std::endl; //lengths are not the same std::cout <<"Length of s: "<<s.length()<<std::endl; std::cout <<"Length of reverse(s): "<<reverse(s).length()<<std::endl; //check for trailing whitespace std::cout <<"Last 5 chars of reverse(s): "<<reverse(s).substr(reverse(s).length() - 6)<<std::endl;}
std::string reverse(std::string str) { std::string word; std::string revWord; std::string result; char space(' '); int cursor = 0; for(int i = 0; i < str.length(); i++){ std::string revWord; if (str[i] == space || i == str.length() - 1){ if (i == str.length() - 1) i++; word = str.substr(cursor, i - cursor); for(int j = word.length(); j >= 0; j--){ revWord.push_back(word[j]); } word = revWord; if(i != str.length() - 1) result.append(word + " "); cursor = i+1; } } return result;}
Console Output:
Old: The quick brown fox jumps over the lazy dog.New: ehT kciuq nworb xof spmuj revo eht yzal .god Length of s: 44Length of reverse(s): 54Last 5 chars of reverse(s): .god
Any ideas?); } }
I have the the string
"re\x{0301}sume\x{0301}" (which prints like this: résumé) and I want to reverse it to
"e\x{0301}muse\x{0301}r" (émusér). I can't use Perl's
reverse because it treats combining characters like
"\x{0301}" as separate characters, so I wind up getting
"\x{0301}emus\x{0301}er" ( ́emuśer). How can I reverse the string, but still respect the combining characters?. | http://convertstring.com/ro/StringFunction/ReverseString | CC-MAIN-2019-43 | refinedweb | 530 | 75.5 |
import "go.chromium.org/luci/config/impl/memory"
Package memory implements in-memory backend for the config client.
May be useful during local development or from unit tests. Do not use in production. It is terribly slow.
New makes an implementation of the config service which takes all configs from provided mapping {config set name -> map of configs}. For unit testing.
SetError artificially pins the error code returned by impl to err. If err is nil, impl will behave normally.
impl must be a memory config isntance created with New, else SetError will panic.
Files is all files comprising some config set.
Represented as a mapping from a file path to a config file body.
Package memory imports 9 packages (graph) and is imported by 4 packages. Updated 2018-08-14. Refresh now. Tools for package owners. | https://godoc.org/go.chromium.org/luci/config/impl/memory | CC-MAIN-2018-34 | refinedweb | 137 | 53.07 |
With all due respect to George and Ira Gershwin, I have a quick question for the readers of this blog. In V1, we have an interesting scenario is talked about frequently, and that's the file extension of our xml form of workflow.
When we debuted at PDC05, there existed an XML representation of the workflow which conformed to a schema that the WF team had built, and it was called XOML. Realizing that WPF was doing the same thing to serialize objects nicely to XML, we moved to that (XAML), but the file extensions had been cast in stone due to VS project setups. So, we had XAML contained in a XOML file.
Is this a problem for you? I could see three possible solutions in the future <insert usual disclaimer, just gathering feedback>:
Is this an issue? Is this something you would like to see changed? Do any of these solutions sound like a good idea, bad idea, etc?
Thanks, we appreciate your feedback :-)
Matt,
My thoughts on this:
XAML (not the WPF one) is the set of conventions for an XML representation of an object graph. It solves several problems in notations of sub-objects and new stuff like attached properties.
XOML (O for Orchestration) is the XAML-based grammar that assumes a certain dictionary of elements and attributes, corresponding to the WF namespaces and their types.
XAML (from WPF) should be renamed to something like XUML (U for User interface). A different grammar, like XOML, which is also based on the XAML syntax.
XAML
|
|--XOML
|--XUML
|--XEML (E for entities (what the Entity Framework should have used))
There you go: change WPF's extensions, not those of WF.
My preference would be to stick with a single .xaml extension for all variants and rely on the namespaces defined within the file to dictate the variant. I could see the extensions running wild as developers/companies decided they wanted to create their own XAML variant. Of course a drawback is that it would be necessary to open the file to determine the variant.
I would give my vote for .xaml. For me just like html has .html, xml has .xml, similarly xaml should be .xaml (xoml just doesnt make sense to me) as at its core xaml is similar to markup languages (has tags, values etc.) though very powerful, versatile and unique (thanks to WPF, WF) but still you should not forget its deep roots.
If you'll change the extension to XAML then it can cause a problem with the explorer file association - the explorer will not know how to open the file if you'll double click it.
I know that you can write a start selector (like the VS does for .sln files) - but who is going to do that?
I'd say - who cares what is the file extension - it's still (in many cases) compiled into the assembly.
IMO, if we don't get out of the 3/4 char file extention soon, we'll be neck deep in confusion.
Some have suggested that we keep the same extention and have the IDE parse them to determine the 'type' of file it is. If you carried that thought about the future, I would hate to think what that would do to our operating systems and webservers. Sure it will work in our IDE although it would slow it down, but we need to think beyond the IDE and realize that other systems nwill eed to look at the file extention and make a decision quickly.
IMHO, it would not be useful to use multiple extensions for files using the same specification. On the other hand using one extension would cause numerous association issues.
Suggestion would be to use alternate datastreams to store this type of information.
XOML is already a mainstay. Let's call the whole file name change off.
One extension is fine, the namespaces tell us what is contained within. | http://blogs.msdn.com/b/mwinkle/archive/2008/01/30/you-say-xaml-i-say-xoml-potayto-potahto-let-s-call-the-whole-thing-off.aspx?Redirected=true | CC-MAIN-2015-40 | refinedweb | 660 | 71.55 |
6.9. Long Short-term Memory (LSTM)¶
@TODO(smolix/astonzhang): the data set was just changed from lyrics to time machine, so descriptions/hyperparameters have to change.
This section describes another commonly used gated recurrent neural network: long short-term memory (LSTM) [1]. Its structure is slightly more complicated than that of a gated recurrent unit.
6.9.1. Long Short-term Memory¶
Three gates are introduced in LSTM: the input gate, the forget gate, and the output gate, as well as memory cells in the same shape as the hidden state (some literature treats memory cells as a special kind of hidden state) used to record additional information.
6.9.1.1. Input Gates, Forget Gates, and Output Gates¶
Like the reset gate and the update gate in the gated recurrent unit, as shown in Figure 6.7, the input of LSTM gates is the current time step input \(\boldsymbol{X}_t\) and the hidden state of the previous time step \(\boldsymbol{H}_{t-1}\). The output is computed by the fully connected layer with a sigmoid function as its activation function. As a result, the three gate elements all have a value range of \([0,1]\).
Fig. 6.9 Calculation of input, forget, and output gates in LSTM.}\). For time step \(t\), the input gate \(\boldsymbol{I}_t \in \mathbb{R}^{n \times h}\), forget gate \(\boldsymbol{F}_t \in \mathbb{R}^{n \times h}\), and output gate \(\boldsymbol{O}_t \in \mathbb{R}^{n \times h}\) are calculated as follows:
Here, \(\boldsymbol{W}_{xi}, \boldsymbol{W}_{xf}, \boldsymbol{W}_{xo} \in \mathbb{R}^{d \times h}\) and \(\boldsymbol{W}_{hi}, \boldsymbol{W}_{hf}, \boldsymbol{W}_{ho} \in \mathbb{R}^{h \times h}\) are weight parameters and \(\boldsymbol{b}_i, \boldsymbol{b}_f, \boldsymbol{b}_o \in \mathbb{R}^{1 \times h}\) is a bias parameter.
6.9.1.2. Candidate Memory Cells¶
Next, LSTM needs to compute the candidate memory cell \(\tilde{\boldsymbol{C}}_t\). Its computation is similar to the three gates described above, but using a tanh function with a value range for $[-1, 1] as activation function, as shown in Figure 6.8.
Fig. 6.10 Computation of candidate memory cells in LSTM.
For time step \(t\), the candidate memory cell \(\tilde{\boldsymbol{C}}_t \in \mathbb{R}^{n \times h}\) is calculated by the following formula:
Here, \(\boldsymbol{W}_{xc} \in \mathbb{R}^{d \times h}\) and \(\boldsymbol{W}_{hc} \in \mathbb{R}^{h \times h}\) are weight parameters and \(\boldsymbol{b}_c \in \mathbb{R}^{1 \times h}\) is a bias parameter.
6.9.1.3. Memory Cells¶
We can control flow of information in the hidden state use input, forget, and output gates with an element value range between \([0, 1]\). This is also generally achieved by using multiplication by element (symbol \(\odot\)). The computation of the current time step memory cell \(\boldsymbol{C}_t \in \mathbb{R}^{n \times h}\) combines the information of the previous time step memory cells and the current time step candidate memory cells, and controls the flow of information through forget gate and input gate:
As shown in Figure 6.9, the forget gate controls whether the information in the memory cell \(\boldsymbol{C}_{t-1}\) of the last time step is passed to the current time step, and the input gate can control how the input of the current time step \(\boldsymbol{X}_t\) flows into the memory cells of the current time step through the candidate memory cell \(\tilde{\boldsymbol{C}}_t\). If the forget gate is always approximately 1 and the input gate is always approximately 0, the past memory cells will be saved over time and passed to the current time step. This design can cope with the vanishing gradient problem in recurrent neural networks and better capture dependencies for time series with large time step distances.
Fig. 6.11 Computation of memory cells in LSTM. Here, the multiplication sign indicates multiplication by element.
6.9.1.4. Hidden States¶
With memory cells, we can also control the flow of information from memory cells to the hidden state \(\boldsymbol{H}_t \in \mathbb{R}^{n \times h}\) through the output gate:
The tanh function here ensures that the hidden state element value is between -1 and 1. It should be noted that when the output gate is approximately 1, the memory cell information will be passed to the hidden state for use by the output layer; and when the output gate is approximately 0, the memory cell information is only retained by itself. Figure 6.10 shows the computation of the hidden state in LSTM.
Fig. 6.12 Computation of hidden state in LSTM. Here, the multiplication sign indicates multiplication by element.
6.9.2. Read the Data Set¶
Below we begin to implement and display LSTM. As with the experiments in the previous sections, we still use the lyrics of the Jay Chou data set to train the model to write lyrics..9.3. Implementation from Scratch¶
First, we will discuss how to implement LSTM from scratch.
6i,
6.9.4. Define the Model¶
In the initialization function, the hidden state of the LSTM needs to return an additional memory cell with a value of 0 and a shape of (batch size, number of hidden units).
In [3]:
def init_lstm_state(batch_size, num_hiddens, ctx): return (nd.zeros(shape=(batch_size, num_hiddens), ctx=ctx), nd.zeros(shape=(batch_size, num_hiddens), ctx=ctx))
Below, we defined the model based on LSTM computing expressions. It should be noted that only the hidden state will be passed into the output layer, and the memory cells do not participate in the computation of the output layer.)
6.9.4.1. Train the Model and Write Lyrics¶
As in the previous section, during model training, we only use adjacent sampling.(lstm, get_params, init_lstm_state, num_hiddens, vocab_size, ctx, corpus_indices, idx_to_char, char_to_idx, False, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes)
epoch 40, perplexity 7.837004, time 0.75 sec - traveller the the the the the the the the the the the the t - time traveller the the the the the the the the the the the the t epoch 80, perplexity 3.919570, time 0.74 sec - traveller have a fourd the time traveller monng and the tim - time traveller monng and the time traveller monng and the time t epoch 120, perplexity 1.936787, time 0.74 sec - traveller for the wersaced mon anouther at thenter free is, - time traveller 'imet's iederamentatien thing thickness, and is epoch 160, perplexity 1.342082, time 0.74 sec - traveller held in his hand was a glittering metallic framew - time traveller held in his hand was a glittering metallic framew
6.9.5. Gluon Implementation¶
In Gluon, we can directly call the
LSTM class in the
rnn module.
In [7]:
lstm_layer = rnn.LSTM(num_hiddens) model = gb.RNNModel(lstm 8.438859, time 0.17 sec - traveller the the the the the the the the the the the the t - time traveller the the the the the the the the the the the the t epoch 80, perplexity 4.352155, time 0.17 sec - traveller the time traveller the time traveller the time tr - time traveller the time traveller the time traveller the time tr epoch 120, perplexity 2.260089, time 0.17 sec - traveller shive barking reacon, and the time traveller smil - time traveller ffree dimensions of space excery very vigat in tr epoch 160, perplexity 1.423886, time 0.17 sec - traveller thate. insiace a sontinied stave. bather the fore - time traveller. 'it wour_ be reard barking thrted the berical m
6.9.6. Summary¶
- The hidden layer output of LSTM includes hidden states and memory cells. Only hidden states are passed into the output layer.
- The input, forget, and output gates in LSTM can control the flow of information.
- LSTM can cope with the gradient attenuation problem in the recurrent neural networks and better capture dependencies for time series with large time step distances.
6.9.7. Problems¶
- Adjust the hyper-parameters and observe and analyze the impact on running time, perplexity, and the written lyrics.
- Under the same conditions, compare the running time of an LSTM, GRU and recurrent neural network without gates.
- Since the candidate memory cells ensure that the value range is between -1 and 1 using the tanh function, why does the hidden state need to use the tanh function again to ensure that the output value range is between -1 and 1?
6.9.8. References¶
[1] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. | http://gluon.ai/chapter_recurrent-neural-networks/lstm.html | CC-MAIN-2019-04 | refinedweb | 1,445 | 52.7 |
SYNOPSIS#include <sys/types.h>
#include <unistd.h>
off_t lseek(int fd, off_t offset, int whence);
DESCRIPTIONThe lseek() function:
- *
- Btrfs (since Linux 3.1)
- *
- OCFS (since Linux 3.2)
- *
- XFS (since Linux 3.5)
- *
- ext4 (since Linux 3.8)
- *
- tmpfs file offset is beyond the end of the file.
CONFORMING TOPOSSee open(2) for a discussion of the relationship between file descriptors, open file descriptions, and files.
The off_t data type is a signed integer data type specified by POSIX.1.
Some devices are incapable of seeking and POSIX does not specify which devices must support lseek().
On Linux, using lseek() on a terminal device returns ESPIPE.
When converting old code, substitute values for whence with the following macros:
Note that file descriptors created by dup(2) or fork(2) refer to the same open file descriptions (and thus file offsets), so seeking on such files may be subject to race conditions.
COLOPHONThis page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | http://manpages.org/lseek/2 | CC-MAIN-2019-04 | refinedweb | 186 | 67.55 |
Ron Adam <rrr at ronadam.com> wrote: ... > class namespace(dict): > def __getattr__(self, name): > return self.__getitem__(name) ... > Any thoughts? Any better way to do this? If any of the keys (which become attributes through this trick) is named 'update', 'keys', 'get' (and so on), you're toast; it really looks like a nasty, hard-to-find bug just waiting to happen. If you're really adamant on going this perilous way, you might try overriding __getattribute__ rather than __getattr__ (the latter is called only when an attribute is not found "in the normal way"). If you think about it, you're asking for incompatible things: by saying that a namespace X IS-A dict, you imply that X.update (&c) is a bound method of X; at the same time, you also want X.update to mean just the same thing as X['update']. Something's gotta give...!-) Alex | https://mail.python.org/pipermail/python-list/2005-October/345296.html | CC-MAIN-2016-50 | refinedweb | 150 | 81.63 |
Discuss using the Moddable SDK
great... how?
I am now confident there is a bug in Timer ... I think it deals with garbage collection and use of host memory, but was hoping to isolate further before sending your way. I can repo the Timer bug in just a few lines, but I need to jump to debugger and back to do it (that does a GC)
Timer fails for me with an
xsGetHostData: invalid in my product. I have it recreated as long as I do a
debugger statement at the end (I tried various
Debug.gc() calls but that didn't seem to exacerbate it), so not sure what else
debugger is doing that also occurs in my product (without using
debugger) but I'm out of ideas. Here is the fragment that causes the failure (run, hit breakpoint, and continue running to see the exception):
import Timer from 'timer'; const myTimer = Timer.set(() => {}, 300); Timer.set(() => Timer.clear(myTimer), 100); debugger;
Do you want me to make GitHub issues for this and/or the Proxy host function issues?
Timer.clearends up executing after the set expires.
undefinedthough I believe that's not necessary w/clear)
Very odd. I'm 99.99% confident I am clean on
Timer usage. I have only two methods that ever call
Timer, they guard it, and I track a unique ID with each usage and trace it. I see exactly what I would expect - several timers created, one expires and when I try to clear a different (not yet expired, long delay) timer it fails with
xsGetHostData: invalid. HOWEVER ... this only happens when I proxy the class that contains the
Timer, not the
Timer itself. I've verified
this looks fine in the debugger prior to the failure (it has the ID as I'd expect, for example). I've tried to reproduce this in a small fragment without success. Does anything pop into your head as to why the
xsGetHostData error might happen when using a proxy to hold the
Timer object?
I'm looking now into adding decorators so I can mark that class as being disallowed from mocking (I hacked that in and it works) ... but it sure feels like a bug - likely in my code or perhaps something to deal with Host usage in Timer and Proxies? Crazy stuff, sorry. If you have no ideas given how abstract this is, no worries - but I'm burning a lot of hours so I thought I'd ask.
Timer.clearthat is not a timer object (a timer object which has been cleared -- which includes a one shot timer that has fired -- is considered "not a timer object"). So... when you hit that break, take a look at the argument to Timer.clear on the stack to see what it is.
gistit, but
gist.github.comappears down. The following reproduces the failure - likely could reduce the proxy logic some but it hopefully will reproduce the failure (and not due to another stupid mistake on my part!)
import Timer from 'timer'; class MyClass { myTimer; } const handler = { construct(target, argArray, newTarget) { return new Proxy(Reflect.construct(target, argArray, newTarget), handler); }, get(target, propertyKey, receiver) { const value = Reflect.get(target, propertyKey, receiver); if (propertyKey == 'prototype') return value; const proxy = new Proxy(value, handler); return proxy; }, }; // choose one of the following sections - the first fails with Proxy, the second works without it const ProxyMyClass = new Proxy(MyClass, handler); const myProxyInstance = new ProxyMyClass(); // const myProxyInstance = new MyClass(); myProxyInstance.myTimer = Timer.set(() => { myProxyInstance.myTimer = undefined; trace('long time expired\n'); }, 300); Timer.set(() => { trace('short timer expired\n'); if (myProxyInstance.myTimer) Timer.clear(myProxyInstance.myTimer); }, 100); Timer.set(() => { trace('done\n'); }, 2000);
Imagine you have a class with many objects inside it. You want to mock it, so you use a generic mocking service to do it:
NewClass = mock(OldClass) (or
newObject = mock(new OldClass()). That class that wraps the initial proxy, and as the handler executes it continues to proxy what it finds (get/construct/access).
No worries - I'll find a way around this. My issue - not Moddable's. Thanks for the help.
Array.prototype.push. Internally to XS there are actually two kinds of host functions – a host object and a host primitive. The host primitive is more compact and less standard. As part of preparing the ROM image, the XS linker converts host objects to host primitives. That step is skipped when strip is disabled (which is why turning off strip allows your
Proxyto work). In most places, XS automatically and invisibly promotes a host primitive to a host object when needed (
fxToInstance). The special case to do that promotion isn't in the Proxy constructor, so it fails. We'll add that. Just FYI, a more focused workaround is to use
Object.assignwhich promotes the host primitive to a host object:
new Proxy(Object.assign([].push), {}).
Thanks! Since I can't tell which objects are host, I'd have to do that on every object, property, etc., which I'd prefer not to do. But it sounds like you found the fix?
I have
@unmockable working now and it's resolved my issue with
Timer - so I'm all good there! Getting close to have preload working on all modules (one big module left to go). Dependency injection and Mocking are working now (major rework to Mock, a bit of refactoring to Dependency Injection). Hopefully a few more days and I'll see what the new slot footprint looks like.
Mind if I ask a couple performance/memory questions?
1) If I have a method that needs to return an object, is there much benefit in slots/memory/fragmentation if I return a native object (
return { prop: value }) vs. use a class to contain the object (
return newClass(value); where the class provides the
prop property)?
2) Is it correct that if I have an object that is not preloaded with six properties, it will consume 7 slots (1 for the object, and 6 for the properties, or a total of 112 bytes)?
3) Likewise, if I have an array of 100 elements, it takes 100 slots? (And if the array has objects, we multiple the above with this ... or in this case, 100 elements * 6 slots = 600 slots = 9.6K?)
4) Would using CDV to encapsulate heavy usage objects (arrays of objects with many properties) drop the slot count to one per CDV buffer?!). | https://gitter.im/embedded-javascript/moddable?at=6282d6d6fa846847c96af475 | CC-MAIN-2022-33 | refinedweb | 1,072 | 63.8 |
Everything posted by tnovelli...
tnovelli commented on JordanBonser's blog entry in Jordan Bonser's Indie Dev JournalInteresting. I started using CMake to go the other way around, porting from Linux to Windows. If you want to get away from Visual Studio you can use CMake with the commandline MSVC compiler, or with MinGW (gcc) or Clang. MinGW as a cross-compiler under Linux is pretty ideal if you don't even want to touch Windows except for testing. I think Clang is a better compiler but it's more work to set up.
tnovelli commented on ferrous's blog entry in ferrous' JournalFood for thought, yeah. I don't use Unity but it's interesting to see how you're grappling with static vs dynamic types & pooling. These same issues come up everywhere.
tnovelli commented on Aardvajk's blog entry in Journal of AardvajkIt's not surprising that you needed a little overlap. Glad you solved it. If you ever get stuck on some other minor graphic glitch, remember: even AAA games have 'em! Always :D
tnovelli commented on Aardvajk's blog entry in Journal of AardvajkGood decision. Fun > realism. Is scripting definitely coming back? I ask because I've made enough games in Javascript (where "real code" and scripts are the same thing) to realize that too much scripting freedom tends to make a mess of things. In my current game I only have "fixed functions" coded in C++, and my scripts are just declarative data in a simple text format.
tnovelli commented on JEJoll's blog entry in Fidelum GamesI kinda doubt anyone will really help.. especially if you just post a github link.. but if you post some interesting code snippets here, you might get some decent comments.
tnovelli commented on roblane09's blog entry in Shinylane's OpenGL JournalYes, uglifyjs sucks! I've used it for a few games but it's... funky. Try using the libs instead. I haven't yet but @EDIgames has good things to say about it.
tnovelli commented on roblane09's blog entry in Shinylane's OpenGL JournalAnd if the thought of installing random JS scripts as root troubles you, you'll soon end up at (just about the only reason anyone visits my site :(). YMMV with OSX. I did a lot of JS gamedev 3-4 years ago. WebGL wasn't usable, as it had serious driver/browser compatibility problems; supposedly it's better now. Requirejs/AMD will be a problem if it still has the timeout (people with slower connections can't load game); AFAIK that won't be solved until all browsers support html5 modules. Anyway, you're better off using a build script to pack all the JS into one file. Tools: - build script example - simple JS sprite/texture packer Good luck, you'll need it! :D
tnovelli replied to Acharis's topic in GDNet Comments, Suggestions, and IdeasProbably the second most common request we get (after fixing the post editor) is a site that works well on mobile. Mobile accounts for such a huge portion of visitors that whatever we choose needs to make at least some improvement in that area; not particularly mobile-friendly just isn't a viable option. Oh ok, I'm only a little surprised. No big deal though. 1. Keep it simple, no frills, clean layout... a few CSS @media blocks 2. No battery-wasting JS bloat, CSS animations, or 3rd party tracking/sharing/ads/iframes 3. Minimize bandwidth usage & http requests... mainly, combine+minify js/css No need for the cheesy "mobile friendly" look of the reviled new Invision. They haven't done a very good job of the above, either.
tnovelli replied to Acharis's topic in GDNet Comments, Suggestions, and IdeasSo this is a slightly(?) customized older version of Invision forum software - no wonder it's been falling apart. (I was guessing Wordpress with way too many plugins, lol) I think GDnet is too big and long-term to entrust to any canned software suite. I would go for a modular architecture with a few custom modules, something like this: - Divide into independent parts: Articles, Books, Forums/Journals, Gallery/Screenshots, Comments, Chat, Store, Stats/analytics... - Throw away all the unused/useless parts (have no mercy .. for example, who needs emoji graphics? use friggin' ascii) - Guideline: KEEP IT SIMPLE, oldschool, semi-fugly, not particularly mobile-friendly - Goal: JSON APIs for all database access, with server-side templates for SEO and NoScript users - Possibility: generate static HTML pages (like NeHe..) for Articles, Book lists, etc (anything not prone to rapid change or obsolesence) - Goal: JS-based auth, comments, chat, stats (helps to minimize backend integration headaches) - Goal: common user account/authentication system for all parts that require login - Goal: use "de facto standard" components (eg. CKeditor?, Markdown?, PLupload, Jinja templates) - Goal: 100% self-hosted open source code (to simplify maintenance & IP issues) - Automate the migration process - Redirect old article/forum/journal/user URLs to new structure - Migrate, test, and repeat until all essential functionality works - Cutover and stand by to debug One thing to watch is the JSON API that Wordpress is rolling out. It's not ideal *but* it'll make it easy for people to write drop-in replacements for parts of WP, which unlike WP could actually be decent. So that could supply the missing pieces, for example, a standalone self-hosted comment system. It might be worth waiting like 6 months to see how all that develops. I don't know about you guys, but I have very low expectations for web stuff. Makes it easier to build it "good enough", go live, and move on.
tnovelli commented on tnovelli's article in Math and PhysicsAllright @TookieToon @lede, here's the first function in C++, with alias vars... untested :D bool lineIntersection( double ax, double ay, double bx, double by, double cx, double cy, double dx, double dy) { double vx,vy, ux,uy, dd,r,s; vx = dx-cx; vy = dy-cy; ux = bx-ax; uy = by-ay; dd = ux*vy - uy*vx; r = ((ay-cy)*vx - (ax-cx)*vy) / dd; s = ((ay-cy)*ux - (ax-cx)*uy) / dd; return (0<=r && r<=1 && 0<=s && s<=1); } Yeah, it figures that my C++ game code is nothing like a direct port of the JS. Sometime I'll do a direct port of that SAT code and upload it to Github. The main difference, the vector functions take a destination first argument instead of returning it. So for example var edge = vsub(v2,v1); becomes double edge; vsub(edge,v2,v1); ...and actually, I've inlined most of that stuff. I've never found a great notation for vector components. There's "vector swizzling" in GLSL (also possible in Lua) where you can declare vec2 v and then refer to v.x, v.y, but I think it's almost as cluttered as v[0], v[1]. I think I'd prefer automatic suffix generation (vx, vy or v0, v1) even if the declaration is a bit magical. You could probably do it with C++ macros, something like this... #define vectorize(v) double &v#x = v[0], &v#y = v[1] #define vector(v) double v[2]; vectorize(v) bool lineIntersection(double a[2], .....){ vectorize(a); vectorize(b); vectorize(c); vectorize(d); vector(u); vector(v); double dd,r,s; vx = dx-cx; vy = dy-cy; .... Meh... maybe I'll try it. | https://www.gamedev.net/profile/216081-tnovelli/content/ | CC-MAIN-2017-30 | refinedweb | 1,237 | 63.29 |
pbs_sigjob man page
pbs_sigjob — send a signal to a pbs batch job
Synopsis
#include <pbs_error.h>
#include <pbs_ifl.h>
int pbs_sigjob(int connect, char *job_id, char *signal, char *extend) int pbs_sigjobasync(int connect, char *job_id, char *signal, char *extend)
Description
Issue a batch request to send a signal to a batch job.
A Signal Job batch request is generated and sent to the server over the connection specified byconnect which is the return value of pbs_connect(). If the batch job is in the running state, the batch server will send the job the signal number corresponding to the signal named in signal. When the asynchronous pbs_sigjobasync() call is used, the server will reply before passing the signal request to the pbs_mom.
The argument,job_id, identifies which job is to be signaled, it is specified in the form:sequence_number.server
If the name of the signal is not a recognized signal name on the execution host, no signal is sent and an error is returned. If the job is not in the running state, no signal is sent and an error is returned.
The parameter, extend, is reserved for implementation defined extensions.
See Also
qsig(1B) and pbs_connect(3B)
Diagnostics
When the batch request generated by the pbs_sigjob() or pbs_sigjobasync() function has been completed successfully by a batch server, the routine will return 0 (zero). Otherwise, a non zero error is returned. The error number is also set in pbs_errno. | https://www.mankier.com/3/pbs_sigjob | CC-MAIN-2017-17 | refinedweb | 239 | 53.51 |
You want to perform a function and return the results to the script that invoked the function.
Use a return statement that specifies the value to return.
The return statement, when used without any parameters, simply terminates a function. Technically, return returns the value undefined to the caller if no value is specified. Likewise, if there is no return statement, the function returns undefined when it terminates. But any value specified after the return keyword is returned to script that invoked the function. Usually, the returned value is stored in a variable for later use:
function average (a, b) { // Return the average of a and b. return (a + b)/2; } var playerScore ; // Call the average( ) function and store the result in a variable. playerScore = average(6, 12); // Use the result in some way. trace("The player's average score is " + playerScore);
You can use the return value of a function, without storing it in a variable, by passing it as a parameter to another function:
trace("The player's average score is " + average(6, 12));
Note, however, that if you do nothing with the return value of the function, the result is effectively lost. For example, this statement has no detectable benefit because the result is never displayed or used in any way:
average(6, 12); | http://etutorials.org/Programming/actionscript/Part+I+Local+Recipes/Chapter+1.+ActionScript+Basics/Recipe+1.11+Obtaining+the+Result+of+a+Function/ | CC-MAIN-2018-51 | refinedweb | 216 | 52.29 |
On Sat, 5 Feb 2005, Fruhwirth Clemens wrote:> Fixed formating and white-spaces. The biggest change is, that I stripped> a redundant check from scatterwalk.c. This omission seems justified and> shows no regression on my system. ( )> Can you give it a quick test with ipsec anyhow? Just to make sure.Not tested yet, still reviewing the code.> + * The generic scatterwalker applies a certain function, pf, utilising> + * an arbitrary number of scatterlist data as it's arguments. These> + * arguments are supplied as an array of pointers to buffers. These> + * buffers are prepared and processed block-wise by the function> + * (stepsize) and might be input or output buffers.This is not going to work generically, as there number of atomic kmapsavailable is limited. You have four: one for input and one for output,each in user in softirq context (hence the list in crypto_km_types). Athread of execution can only use two at once (input & output). The cryptocode is written around this: processing a cleartext and ciphertext blocksimultaneously.> +> +int scatterwalk_walk(sw_proc_func_t pf, void *priv, int steps,> + struct walk_info *walk_infos) > +{> + int r = -EINVAL;> + > + int i;Looks like this i is left over from something no longer use.> + scatterwalk_copychunks(*cbuf,&csg->sw,csg->stepsize,1);There are several places such as this where you use literal 1 & 0 instead of the new macros, SCATTERWALK_IO_OUTPUT & INPUT.> +static inline int scatterwalk_needscratch(struct scatter_walk *walk, int nbytes) > +{> + return (nbytes <= walk->len_this_page);> +}The logic of this is inverted given the function name. While the calling code is using it correctly, it's going to cause confusion.Also confusing is the remaining clamping of the page offset:static inline void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg){ ... rest_of_page = PAGE_CACHE_SIZE - (sg->offset & (PAGE_CACHE_SIZE - 1)); walk->len_this_page = min(sg->length, rest_of_page);}rest_of_page should be just PAGE_SIZE - sg->offset (sg->offset shouldnever extend beyond the page).And then how would walk->len_this_page be greater than rest_of | http://lkml.org/lkml/2005/2/8/63 | CC-MAIN-2014-35 | refinedweb | 314 | 57.06 |
* Peter O'Gorman wrote on Mon, Oct 11, 2004 at 02:47:04PM CEST: > Ralf Wildenhues wrote: > > >Furthermore, f77demo-make fails for the non-static configurations with a > >./.libs/libmix.so: undefined reference to `MAIN__' > >while trying to link cprogram. I'm not sure how to fix this, not being a > >Fortran expert. > > Since we only recently started building shared fortran libraries in the > test, it is meant to skip the f77demo-make test if it fails. That's not > working? The testsuite does return SKIP. But obviously the test is not working, and I'd like to change that. (Even if the failure is known :) > Anyway whatever object/library contains MAIN__ should have been in FLIBS, > and we rely on autoconf to figure out FLIBS. Thanks for this hint. That's what I needed in order to look into it. First: the Autoconf check thinks no dummy main is needed. I still need to find out why it does this. Second: cprogram.c does not make use of the dummy. The patch below changes that. Compiling cprogram by hand then succeeds if I use $ make CPPFLAGS=-DF77_DUMMY_MAIN=MAIN__ Is this patch applicable? Is the idea the right one? Regards, Ralf 2004-10-11 Ralf Wildenhues <address@hidden> * tests/f77demo/cprogram.c: Define the F77_DUMMY_MAIN function if necessary. Index: tests/f77demo/cprogram.c =================================================================== RCS file: /cvsroot/libtool/libtool/tests/f77demo/cprogram.c,v retrieving revision 1.1 diff -u -r1.1 cprogram.c --- tests/f77demo/cprogram.c 14 Oct 2003 21:46:13 -0000 1.1 +++ tests/f77demo/cprogram.c 11 Oct 2004 13:16:06 -0000 @@ -16,6 +16,13 @@ #include "foo.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif + int main(int argc, char **argv) { | http://lists.gnu.org/archive/html/libtool-patches/2004-10/msg00198.html | CC-MAIN-2016-50 | refinedweb | 292 | 69.68 |
Posts511
Joined
Last visited
About WiseBear
- RankNewbie
Profile Information
- LocationSydney
- 3 minutes. Got popcorn?
- House prices are a function of interest rates. When the bond market crashes so will property prices.
- Dow Futures now down 852.
- Could be. It depends. Low rates could boost property prices or more money could trigger inflation fears and send interest rates higher and property lower. It's all down to perception.
- Nasdaq 100 futures halted - limit down.
-
- Dow futures still falling. Now down 643! This could be an interesting day.
- Australia in comparison: Sydney to Newcastle (Newcastle is NSW's second largest city at a distance of about 100 miles) Adult return £9.90 Off-peak return £6.60 Pensioner return (anytime) £1.40 But we now have the Opal card (equivalent to an Oyster card in London) so don't expect to pay this much
Post Your Favourite Charts Here
WiseBear replied to Confounded's topic in House prices and the economyI always like charts that correlate well to history and appear to be a good indicator of the future. This one caught my eye as being particularly interesting although I'd like to see more history. What do you think? Any other good "future predicting" charts?
With All This Good News When Will Ir's Raise
WiseBear replied to Monkey's topic in House prices and the economyThe global bond market will decide when interest rates will rise. The bond market is larger than both the property market and the stock market. At the moment the bond market sees significant future deflation and is priced accordingly. Stock markets and Property markets are currently responding to the low interest rates with a total disregard of why they are low. Who is right? Governments like to pretend that they set interest rates – they don’t.
Don't You Just Love Estate Agents
WiseBear replied to Total_Injustice's topic in House prices and the economyYes, very funny. You haven't dealt with many estate agents have you?
Britain Has More Debt Than The Eurozone
WiseBear replied to trippytinker's topic in House prices and the economyYou seem somehwat confused - the debts on a banks balance sheet ARE the assets. They are matched by the liabilities which is the massive void from which credit is created plus the banks capital. The problem is when those debts are not repaid, and they won't be, it's the banks capital that gets hit first.
- Agreed. One way or another we will get austerity because there's no other option. If you continuously spend more than your income someday this will jump up and bite you on the ****. It's just a question of how austerity is applied and who the winners and losers are. Whether it's cost cutting or higher taxes & deficit spending or inflation or deflation it doesn't matter someone has to repay the debt.
- It should be obvious to anyone that living beyond your means is the best way to solve the problems created by living beyond your means.
- The UK is caught in a death grip - trapped between peak oil and a government intent on destroying the currency. Even without the UK's massive debt burden it's in big, big trouble. It's a sad end to a great nation.
£50Bn More Money Printing Expected This Week, Qe Qe Qe
WiseBear replied to inflating's topic in House prices and the economyThe fear of a deflationary collapse is massive. This will do nothing to prevent the inevetiable. | https://www.housepricecrash.co.uk/forum/index.php?/profile/8742-wisebear/ | CC-MAIN-2022-21 | refinedweb | 586 | 65.01 |
Hello all!
So lately I’ve been messing with machine learning because I’ve always been interested in it and it’s just very cool and interesting to me. I’d like to talk a bit about what I’ve been doing and struggling with and show some examples. I will be working with scikit learn for Python, and it comes with 3 datasets. Iris and Digits are for classification and Boston House Prices are for regression. Simply put classification is identifying something like a handwritten number as the correct number it is and regression is essentially finding a line of best fit for a dataset. I still have a lot to learn about sklearn and machine learning in general, but I find it really interesting nonetheless and thought you guys would too.
So my code begins with the import of a bunch of libraries. The only ones I use in my example here are sklearn and matplotlib, the others are simply either dependencies or libraries I plan to use in the future.
import sklearn from sklearn import datasets import numpy as np import pandas as pd import quandl import matplotlib.pyplot as plt from sklearn import svm
In this import, sklearn is the main library I’m using to fit my data and predict things, sklearn.datasets comes with the 3 base datasets Iris Digits and Boston Housing Prices. I don’t know much about sklearn.svm, but I do know that it is the support vector machine which essentially separates our inputted data and runs our actual machine learning, so when we input testing data it can determine what number we have written. Numpy is a science / math library that adds support for larger multidimensional arrays and matrices. Pandas is a library for data analysis. Quandl is a financial library that lets me pull a lot of data that I can use for linear regression in the future. And matplotlib and it’s sub-library pyplot allow me to output the handwriting data.
So far my code for the recognition looks like this:
clf = svm.SVC(gamma=0.001, C=100) clf.fit(digits.data[:-1], digits.target[:-1]) clf.predict(digits.data[-1:]) plt.figure(1, figsize=(3, 3)) plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest') plt.show()
Although my understanding is rudimentary, I can explain a little bit of what this does. Clf is our estimator which is the actual machine that is learning, and that is what we pass out training data through with clf.fit(). Clf.fit() lets us pass data into the svm that we made clf off of, and it trains our machine to know what the numbers should look like. I am passing all digits except for the last one through this function, because we will be testing with the last one. We then pass a digit through clf using clf.predict(), which passes data for a know handwritten digit, 8, through clf. Our object clf then outputs the text <code>array([8])</code> which means that it has predicted our inputted number as 8. If we print out digits.target[-1:] we can see it and determine if it was correct. We do this using out 3 lines from matplotlib that create the figure, print it, and then show it. The figure we get is this:
It’s a very low resolution, but it’s an 8! I think that this is brilliant, and I definitely need to learn more about what is happening here with my code. Machine learning is very cool and I definitely need to mess with it more and learn more. So far I’m learning some of the basic elements like how to fit and predict things, how training and testing sets work, and a lot of the vocabulary that is used when talking about machine learning. I can now actually talk about things like supervised and unsupervised learning, or classification and regression methods. Along with this, I’m also learning more about other libraries like matplotlib, and how to write more pythonic (readable) code. For anyone who wants to try this themselves, there’s a lot of really cool stuff online, but I’m using some of the resources from hangtwenty‘s GitHub repo dive-into-machine-learning. It can be found here: Hopefully by my next post I will have created a basic understanding of linear regression and I can create some cool examples using it, and in my next post I will attempt to give my explanation on how fitting, predicting, and training actually works.
Thanks for reading and have a wonderful day!
~ Corbin | https://maker.godshell.com/archives/79 | CC-MAIN-2020-45 | refinedweb | 776 | 61.67 |
Working with perspectives
Opening and switching perspectives
Setting the default perspective
Opening perspectives in a new window
Customizing a perspective
Deleting a customized perspective
Resetting perspectives
Working with editors and views
Opening views
Moving and docking views
Rearranging tabbed views
Switching between views
Creating and working with fast views
Filtering the Tasks and Problems views
Creating working sets
Opening files for editing
Associating editors with file types
Editing files outside the workbench
Tiling editors
Maximizing a view or editor
Switching the workspace
Customizing the workbench
Rearranging the main toolbar
Changing keyboard shortcuts
Changing fonts and colors
Changing colors
Controlling single- and double-click behavior
Searching in the workbench
Searching for files
Searching for references and declarations
Using the Search view
Working in the editor’s Source and Design modes
Accessing keyboard shortcuts
Setting workbench preferences
The term workbench refers to the Flash Builder development
environment. The workbench contains three primary elements: perspectives,
editors, and views. You use all three in various combinations at
various points in the application development process. The workbench
is the container for all the development tools you use to develop
applications.
Perspectives include combinations of views and editors
that are suited to performing a particular set of tasks. For example,
you normally open the Flash Debugging perspective to debug your
application.
For an overview of perspectives, see About Flash Builder perspectives.
When you open a file that is associated with a particular
perspective, Flash Builder automatically opens that perspective.
The stand-alone configuration of Flash Builder contains three perspectives:
Flash Development
Flash Debugging
Flash Profiling
The Flash Profiling perspective is available with Flash Builder
Premium.
Window > Perspective or choose Other to access all other Eclipse perspectives.
(In the plug-in configuration of Flash Builder, select Window > Open
Perspective.)
You can also click the Open Perspective button
in the upper-right corner of the workbench window, then select a
perspective from the pop-up menu.
To see a complete list of
perspectives, select Other from the Open Perspective pop-up menu.
When
the perspective opens, its title changes to display the name of
the perspective you selected. An icon appears next to the title,
allowing you to quickly switch back and forth between perspectives
in the same window. By default, perspectives open in the same window.
The
default perspective is indicated by the word default in parentheses
following the perspective name.
Open the Preferences dialog and select General > Perspectives.
Under Available Perspectives, select the perspective to define
as the default, and click Make Default.
Click OK.
You
can specify to open perspectives in a new window.
Under Open a New Perspective, select In A New Window.
To
switch back to the default, select In The Same Window.
To
modify a perspective’s layout, you change the editors and views
that are visible in a given perspective. For example, you could
have the Bookmarks view visible in one perspective, and hidden in
another perspective.
You can also configure several other aspects of a perspective,
including the File > New submenu, the Window > Perspective
> Other submenu, the Window > Other Views submenu, and action
sets (buttons and menu options) that appear in the toolbar and in
the main menu items. (Menu names differ slightly in the plug-in
configuration of Flash Builder.)
Open an existing perspective.
Show views and editors as desired.
For more information,
see Opening views, and Opening files for editing.
Select Window > Perspective > Save Perspective As (Window
> Save Perspective As in the plug-in configuration of Flash Builder).
In the Save Perspective As dialog box, enter a new name for
the perspective, then click OK.
Open the perspective to configure.
Select Window > Perspective > Customize Perspective
(Window > Customize Perspective in the plug-in configuration
of Flash Builder).
Click the Shortcuts tab or the Commands tab, depending on
the items you want to add to your customized perspective.
Use the check boxes to select which elements to see on menus
and toolbars in the selected perspective.
Click OK.
In the Save Perspective As dialog box, enter a new name for
the perspective and click OK.
When you save a perspective,
Flash Builder adds the name of the new perspective to the Window
> Perspective menu (Window > Open Perspective in the plug-in
configuration of Flash Builder).
You
can delete perspectives that were previously defined. You cannot
delete a perspective you did not create.
Under Available Perspectives, select the perspective you
want to delete.
Click Delete, then click OK.
You
can restore a perspective to its original layout after you made
changes to it.
Under Available perspectives, select the perspective to reset.
Click Reset, then click OK.
Most perspectives in the workbench are composed of an editor
and one or more views. An editor is a visual component in the workbench
that is typically used to edit or browse a resource. Views are also
visual components in the workspace that support editors, provide
alternative presentations for selected items in the editor, and
let you navigate the information in the workbench.
For an overview of editors and views, see About the workbench.
Perspectives
contain predefined combinations of views and editors. You can also open
views that the current perspective might not contain.
Select Window and choose a Flash Builder view or select
Window > Other Views to choose other Eclipse workbench views.
(In the plug-in configuration of Flash Builder, select Window >
Show View.)
After you add a view to the current perspective,
you can save that view as part of the perspective. For more information,
see Customizing a perspective.
You can also create fast views to provide
quick access to views that you use often. For more information,
see Creating and working with fast views.
You
can move views to different locations in the workbench, docking
or undocking them as needed.
Drag the view by its title bar to the desired location.
As
you move the view around the workbench, the pointer changes to a
drop cursor. The drop cursor indicates where you’ll dock the view
when you release the mouse button.
You can also move a view by using
the view’s context menu. Open the context menu from the view’s tab,
select Move > View, move the view to the desired location, and
click the mouse button again.
(Optional) Save your changes by selecting Window > Perspectives
> Save Perspective As (Window > Save Perspective As in the
plug-in configuration of Flash Builder).
In addition to docking views at different locations in
the workbench, you can rearrange the order of views in a tabbed
group of views.
Click the tab of the view to move, drag the view to the
desired location, and release the mouse button. A stack symbol appears
as you drag the view across other view tabs.
There are
several ways to switch to a different view:
Click the tab of a different view.
Select a view from the flash builder Window menu.
Use a keyboard shortcut
Use Control+F7 on Windows,
Command+F7 on Macintosh. Press F7 to select a view.
Fast
views are hidden views that you can quickly open and close. They
work like other views, but do not take up space in the workbench
while you work.
Whenever you click the fast view icon in the shortcut bar, the
view opens. Whenever you click anywhere outside the fast view (or
click Minimize in the fast view toolbar), the view becomes hidden
again.
Drag the view you want to turn into
a fast view to the shortcut bar located in the lower-left corner
of the workbench window.
The icon for the view that you dragged
appears on the shortcut bar. You can open the view by clicking its
icon on the shortcut bar. As soon as you click outside the view,
the view is hidden again.
From
the view’s context menu, deselect Fast View.
You
can filter the tasks or problems that are displayed in the Tasks
or Problems views. For example, you might want to see only problems
that the workbench has logged, or tasks that you logged as reminders
to yourself. You can filter items according to which resource or
group of resources they are associated with, by text string in the
Description field, by problem severity, by task priority, or by
task status.
In Tasks or Problems view taskbar, click Filter.
Complete the Filters dialog box and click OK.
For more information about views, see Flash Builder Workbench Basics..
When you
open a file, you launch an editor so that you can edit the file.
From the context
menu for the file in one of the navigation views, select Open.
Double-click the file in one of the navigation views.
Double-click the bookmark associated with the file in the
Bookmarks view.
Double-click an error warning or task record associated with
the file in the Problems view.
This opens the file
with the default editor for that particular type of file. To open
the file in a different editor, select Open With from the context
menu for the file. Select the editor to use.
You
can associate editors with various file types in the workbench.
Select Window > Preferences.
Click the plus button to expand the General category.
Click the plus button to expand the Editors category, and
then select File Associations.
Select a file type from the File Types list.
To add
a file type to the list, click Add, enter the new file type in the
New File Type dialog box, and then click OK.
In the Associated Editors list, select the editor to associate
with that file type.
To add an internal or external editor
to the list, click Add and complete the dialog box.
You can
edit an MXML or ActionScript file in an external editor and then
use it in Flash Builder. The workbench performs any necessary build
or update operations to process the changes that you made to the
file outside the workbench.
Edit the MXML or ActionScript file in the external editor
of your choice.
Save and close the file.
Start Flash Builder.
From one of the navigation views in the workbench, select
Refresh from the context menu.
The workbench
lets you open multiple files in multiple editors. But unlike views, editors
cannot be dragged outside the workbench to create new windows. You can,
however, tile editors in the editor area, so that you can view source
files side by side.
Open two or more files in the editor area.
Select one of the editor tabs.
Drag the editor over the left, right, upper, or lower border
of the editor area.
The pointer changes to a drop cursor,
indicating where the editor will appear when you release the mouse
button.
(Optional) Drag the borders of the editor area of each editor
to resize the editors as desired.
There
are several ways you can maximize a view or editor so that it fills
the workbench window.
From the context menu
on the view or editor’s title bar, select Maximize (Restore).
Double-click the tab of a view.
From the Flash Builder menu, select Window > Maximize/Restore.
Click the Maximize/Restore icons in the upper-right corner
of the view or editor..
You
can customize the workbench to suit your individual development
needs. For example, you can customize how items appear in the main
toolbar, create keyboard shortcuts, or alter the fonts and colors
of the user interface.
Flash
Builder lets you rearrange sections of the main toolbar. Sections
of the main toolbar are divided by a space.
Ensure that the toolbar is unlocked. From the context
menu for the toolbar, deselect Lock the Toolbars.
Move the mouse pointer over the vertical line “handle” that
is on the left side of the toolbar section you want to rearrange.
Click the handle and drag the section left, right, up, or
down. Release the mouse button to place the section in the new location.
Open the Preferences dialog and select General
> Keys.
In the View screen of the Keys dialog box, select the command
you want to change.
In the Binding field, type the new keyboard shortcut you
want to bind to the command.
In the When pop-up menu, select when you want the keyboard
shortcut to be active.
Click Apply or OK.
By default, the workbench uses the fonts and colors that
your computer’s operating system provides. However, you can customize
fonts and colors in a number of ways. The workbench lets you configure
the following fonts:
Plug-ins that use other
fonts might also provide preferences that allow for customizing.
For example, the Java Development Tools plug-in provides a preference
for controlling the font that the Java editor uses (In the Preferences dialog,
select > General > Appearance > Colors and Fonts > Java
> Java Editor Text Font).
The operating system always displays
some text in the system font (for example, the font displayed in
the Package Explorer view tree). To change the font for these areas,
you must use the configuration tools that the operating system provides (for
example, the Display Properties control panel in Windows).
Open the Preferences dialog and select General
> Appearance > Colors and Fonts.
Expand the Basic, CVS, Debug, Text Compare, or View and Editor
Folders categories to locate and select the font and colors to change.
Set the font and color preferences as desired.
The workbench
uses colors to distinguish different elements, such as error text and
hyperlink text. The workbench uses the same colors that the operating system
uses. To change these colors, you can also use the configuration
tools that the system provides (for example, the Display Properties
control panel in Windows).
Open the Preferences dialog and select
General > Appearance > Colors and Fonts.
Expand the Basic, CVS, Debug, Text Compare, or View and Editor
Folders categories to locate and select the color to change.
Click the color bar to the right to open the color picker.
Select a new color.
You
can control how the workbench responds to single and double clicks.
Open the Preferences dialog and select General.
In the Open Mode section, make your selections and click
OK.
Flash
Builder provides a search tool that lets you quickly locate resources.
For more information about searching for text in a particular file,
see Finding and replacing text in the editor.
Flash Builder
lets you conduct complex searches for files.
In the stand-alone version
of Flash Builder select Edit > Find in Files.
Flash Builder includes advanced search features that are
more powerful than find and replace. To help you understand how
functions, variables, or other identifiers are used, Flash Builder
lets you find and mark references or declarations to identifiers
in ActionScript and MXML files, projects, or workspaces. For more
information, see Finding references and refactoring code.
The Search
view displays the results of your search.
Double-click the file.
Select the file to remove
and click Remove Selected Matches.
Click Remove All Matches.
Click Show Next Match or
Show Previous Match.
Click the down arrow next to
Show Previous Searches and select a search from the pull-down list.
Select Window
> Other Views > General. (Window > Show View > Other
in the plug-in configuration of Flash Builder.)
Expand the General category, select Search, and click OK.
The
MXML editor in Flash Builder lets you work in either Source or Design
mode. You can also use Flash Builder to create a split view so that
you can work in both Source and Design modes simultaneously.
Click Design at the top
of the editor area.
Click Source at the top
of the editor area.
From the option menu on the editor’s tab, select New Editor.
You
now have two editor tabs for the same file.
Drag one of the tabs to the right to position the editor
windows side-by-side.
Set one of the editors to Design mode, and set the other
editor to Source mode.
Press Control+`(Left Quote).
The
keyboard shortcuts available to you while working in Flash Builder
depend on many factors, including the selected view or editor, whether
or not a dialog is open, installed plug-ins, and your operating
system. You can obtain a list of available keyboard shortcuts at
any time using Key Assist.
Select Help > Key Assist.
You
can set preferences for many aspects of the workbench. For example,
you can specify that Flash Builder should prompt you for the workspace
you want to use at startup, you can select which editor to use when
opening certain types of resources, and you can set various options
for running and debugging your applications.
Your Flash Builder preferences apply to the current workspace
only. You can, however, export your workbench preferences and then
import them into another workspace. This may be helpful if you are
using multiple workspaces, or if you want to share your workbench
preferences with other members of your development team.
You can also set preferences for individual projects within a
workspace. For example, you can set separate compiler or debugging
options for each of your Flex projects.
Open the Preferences
window.
Expand General and select any of the categories of workbench
preferences and modify them as needed. | http://help.adobe.com/en_US/Flex/4.0/UsingFlashBuilder/WS6f97d7caa66ef6eb1e63e3d11b6c4d0d21-7fe5.html | crawl-003 | refinedweb | 2,848 | 64.91 |
Buy an Axon, Axon II, or Axon Mote and build a great robot, while helping to support SoR.
0 Members and 1 Guest are viewing this topic.
#include <stdio.h> /*This include will allow you to do port i/o operations. For libc5 you will need to include the asm/io.h file for the port i/o functions and the unistd.h file for the i/o permissions*/#include <sys/io.h>/*this defines the paraport variable as the memory address of the parallel port, you can find out this address using the dmesg command, there will be a line somewhere in the output with the base and range of each ports memory addresses. */#DEFINE paraport 0x0378int main() {/*This opens the port for use by the application. If you don't do this, you're application won't have permissions to access the port, and you're likely to get a seg fault. The first arg is the port base, the second num is the number of bytes you want to be allowed to work with, and the last number is 1 or 0 for on or off. You don't need to worry about the ports address range, it will all be sorted out from the base you provide*/ioperm(paraport, 1, 1);/*Here we output a full byte to the port, which turns all the data pins to high*/outb(255, paraport);sleep(5); /* 5 second sleep*/outb(0, paraport); /*Turn the data pins to low*/exit(1); /*Quit this mofo*/}
I learned via trial by fire (I seem to do that a lot). Few references were useful to me, but reading through the files in /etc/ and reading general articles on how Linux works helps a bit.
For a Linux robotic project take a look at OAP. Also the book Linux Robotics is worth reading, although the code is in Java, which I think is good since is platform independant, so you can use the same program in Windows or any flavour of Linux. I am also considering Linux as a robot OS, but since I have almost no experience with it, I can't decide which distro should I use. I have a 1GB CF card for this, so a full distro might not be possible. I will eighter buy a 8GB or 16GB CF card or a laptop HDD to house a dual boot XP and whatever Linux I'll end up with. I have tryed Gentoo, but it will not install on my hardware, also Ubuntu and Xubuntu. The problem is the VIA EPIA motherboard I guess, probably drivers have to be downloaded but I don't know how. For the moment I am going to use a 40GB regular HDD to see what I can do with it and how much power it uses. I can give you the link for the Java code from the Linux Robotics book if you want to take a look at it. | http://www.societyofrobots.com/robotforum/index.php?topic=5729.0 | CC-MAIN-2016-50 | refinedweb | 496 | 74.32 |
On Sat, 2005-01-08 at 00:26, It's me wrote: > What does it mean by "stability in sorting"? If I understand correctly, it means that when two sorts are performed in sequence, the keys that are equal to the second sort end up ordered the way they were left by the first sort. I'm far from certain of this, but at least I'm presenting an opportunity for someone to yell "no, you're wrong!" and in the process definitively answer the question. For example, given the list: .>>> l = [(1,2), (8,2), (2,2), (3,2), (4,3), (5,3), (8,9)] if we sort by the first element of each tuple then the second (the default), we get: .>>> l.sort() .>>> l [(1, 2), (2, 2), (3, 2), (4, 3), (5, 3), (8, 2), (8, 9)] Now, if we sort based on the second element we get: .>>> def seconditem(x): .... return x[1] .... .>>> l.sort(key=seconditem) .>>> l [(1, 2), (2, 2), (3, 2), (8, 2), (4, 3), (5, 3), (8, 9)] You'll note that there are several correct answers to the request "sort the list 'l' by the second element of each item", including: [(1, 2), (2, 2), (3, 2), (8, 2), (4, 3), (5, 3), (8, 9)] [(2, 2), (1, 2), (8, 2), (3, 2), (4, 3), (5, 3), (8, 9)] [(1, 2), (2, 2), (3, 2), (8, 2), (5, 3), (4, 3), (8, 9)] and many others. Because we didn't specify that the first item in the value tuples should be used in the sort key, so long as the second key is equal for a group of items it doesn't matter what order items in that group appear in. Python (at least 2.4), however, returns those groups where the order isn't defined in the same order they were before the sort. Look at this, for example: .>>> l.sort() .>>> l.reverse() .>>> l [(8, 9), (8, 2), (5, 3), (4, 3), (3, 2), (2, 2), (1, 2)] .>>> l.sort(key=seconditem) .>>> l [(8, 2), (3, 2), (2, 2), (1, 2), (5, 3), (4, 3), (8, 9)] See how the exact same sort command was used this time around, but because the list was reverse-sorted first, the elements are in reverse order by first item when the second item is equal? In the first case we used the same result as the stable sort could be obtained with: .>>> def revitem(x): .... return (x[1], x[0]) >>> l.sort(key=revitem) >>> l [(1, 2), (2, 2), (3, 2), (8, 2), (4, 3), (5, 3), (8, 9)] (in other words, saying "use the value tuple as the sort key, but sort by the second element before the first") That doesn't extend to more complex cases very well though. Imagine you had 3-tuples not 2-tuples, and wanted to maintain the previous sort order of equal groupings when re-sorting by a different key... but you didn't know what key was last used for sorting. A stable sort algorithm means you don't need to care, because the order will be maintained for you not randomized. Well, that's several hundred more words than were probably required, but I hope I made sense. -- Craig Ringer | https://mail.python.org/pipermail/python-list/2005-January/350129.html | CC-MAIN-2019-30 | refinedweb | 546 | 72.6 |
On Fri, 9 Nov 2001, Conor MacNeill wrote:
> Holger Engels wrote:
>
> >
> > These patches along with additional classes introduce / contain:
> >
> > o a fileset like depset for use in jar-/zip-task
>
>
> Holger,
>
> I have committed the depset code with some modifications. This is based
> on your original depset.jar file. I haven't checked for any updates in
> this latest set.
>
Cool! I'll check, if there where updates .. I don't remember.
> I made the following changes
>
> 1. I moved the classes into two separate packages - one for the type
> itself and scanner and one for the Dependencies utility classes since
> other tasks will depend on those. The packages also help with
> conditional compilation in the Ant build file.
>
ok
> 2. I renamed the class ClassfileSet rather than depset since I thought
> that name was a bit clearer
name is better. It makes clear, that it is only applicable to java class
files.
> 3. I added Apache copyright to the classes that did not already contain
> one. Please indicate your agreement to that.
ok
> 4. I removed tabs and added a few {} blocks around for statements and if
> statements
sorry for the tabs .. should change my emacs configuration ..
>
> Also, I would like to make some suggestions. I tested with this setup
> just on Ant's own code
>
> <classfileset dir="build/classes"
>
>
> <target name="main">
> <mkdir dir="deptest"/>
> <copy todir="deptest">
> <fileset refid="classes"/>
> </copy>
> </target>
>
> Now, I would prefer that to work something like this
>
>
> <classfileset dir="build/classes" id="classes">
> <root class="org.apache.tools.ant.Main"/>
> <root class="org.apache.tools.ant.Project"/>
> </classfileset>
>
> In other words, the classes should be expressed in the Java namespace
> rather than the file namespace and I should be able to specify multiple
> roots for dependency searches. What do you think?
I thought of that, too. Actually I had to use rather ugly code, to achieve
this, recently. I'll work on that, soon.
> I haven't looked at the Zip.java changes yet but when using refids as
> above, they may not be necessary.
+ public void addDepset(DepSet set) {
+ filesets.addElement(set);
+ }
Thanks,
Holger
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/ant-dev/200111.mbox/%3CPine.LNX.4.33.0111081425120.16215-100000@localhost%3E | CC-MAIN-2015-35 | refinedweb | 376 | 67.55 |
The edge cases 2to3 might not fix21 Dec 2020
A year after Python 2 was officially deprecated,
2to3 is still my favourite tool for porting Python 2 code to Python 3.
Only recently, when using it on a legacy code base, I found one of the edge cases
2to3 will not fix for you.
Consider this function in Python, left completely untouched by running
2to3.
It worked fine in Python 2, but throws
RecursionError in Python 3.
(It is of questionable quality; I didn’t make it originally).
def safe_escape(value): if isinstance(value, dict): value = OrderedDict([ (safe_escape(k), safe_escape(v)) for k, v in value.items() ]) elif hasattr(value, '__iter__'): value = [safe_escape(v) for v in value] elif isinstance(value, str): value = value.replace('<', '%3C') value = value.replace('>', '%3E') value = value.replace('"', '%22') value = value.replace("'", '%27') return value
But why? It turns out strings in Python 2 don’t have the
__iter__ method, but they do in Python 3.
What happens in Python 3 is that the
hasattr(value, '__iter__') condition becomes true, when
value is a string.
It now iterates over each character in every string in the list comprehension, and calls itself (the recursion part).
But… each of those strings (characters) also has the
__iter__ attribute, quickly reaching the max recursion depth set by your Python interpreter.
In this function it was easy to fix of course:
- Either the order of the two
elifs can be swapped
- or we exclude strings from the iter-check (
elif hasattr(value, '__iter__') and not isinstance(value, str))
The more labour-intensive way of fixing it would be rewriting it entirely, since the only thing it actually really does is recursively URL encoding (but for four characters only). Maybe there’s a (bad) reason it only URL encodes these four characters, so that was a can of worms I didn’t want to open.
Anyway, main lesson for me was: even though Python 2 is gone, you might still need to remember its quirks. | https://bartbroere.eu/ | CC-MAIN-2021-10 | refinedweb | 334 | 70.73 |
float Value between 0.0 and 1.0. (Return value might be slightly beyond 1.0.)
Genera ruido Perlin 2D.;
// Create a texture and fill it with Perlin noise. // Try varying the xOrg, yOrg and scale values in the inspector // while in Play mode to see the effect they have on the noise.
public class ExampleScript : MonoBehaviour { // Width and height of the texture in pixels. public int pixWidth; public int pixHeight;
// The origin of the sampled area in the plane. public float xOrg; public float yOrg;
// The number of cycles of the basic noise pattern that are repeated // over the width and height of the texture. public float scale = 1.0F;
private Texture2D noiseTex; private Color[] pix; private Renderer rend;
void Start() { rend = GetComponent<Renderer>();
// Set up the texture and a Color array to hold pixels during processing. noiseTex = new Texture2D(pixWidth, pixHeight); pix = new Color[noiseTex.width * noiseTex.height]; rend.material.mainTexture = noiseTex; }
void CalcNoise() { // For each pixel in the texture...[(int)y * noiseTex.width + (int)x] = new Color(sample, sample, sample); x++; } y++; }
// Copy the pixel data to the texture and load it into the GPU. noiseTex.SetPixels(pix); noiseTex.Apply(); }
void Update() { CalcNoise(); } }
Although the noise plane is two-dimensional, it is easy to use just a single one-dimensional line through the pattern, say for animation effects.
using UnityEngine;
public class Example : MonoBehaviour { // "Bobbing" animation from 1D Perlin noise.
// Range over which height varies. float heightScale = 1.0f;
// Distance covered per second along X axis of Perlin plane.. | https://docs.unity3d.com/es/2019.2/ScriptReference/Mathf.PerlinNoise.html | CC-MAIN-2021-25 | refinedweb | 252 | 59.19 |
There are six key sets of collection classes, and they differ from each other in terms of how data is inserted, stored, and retrieved. Each generic class is located in the System.Collections.Generic namespace, and their nongeneric equivalents are in the System.Collections namespace.
The List<T> class, and its nongeneric equivalent, ArrayList, have properties similar to an array. The key difference is that these classes automatically expand as the number of elements increases. (In contrast, an array size is constant.) Furthermore, lists can shrink via explicit calls to TRimToSize() or Capacity (see Figure 12.1 on page 422).
These classes are categorized as list collections whose distinguishing functionality is that each element can be individually accessed by index, just like an array. Therefore, you can set and access elements in the list collection classes using the index operator, where the index parameter value corresponds to the position of an element in the collection. Listing 12.1 shows an example, and Output 12.1 shows the results.
[View full width]
using System; using System.Collections.Generic; class Program { static void Main() { List<string> list = new List<string>(); // Lists automatically expand as elements // are added. list.Add("Sneezy"); list.Add("Happy"); list.Add("Dopey"); list.Add("Doc"); list.Add("Sleepy"); list.Add("Bashful"); list.Add("Grumpy"); list.Sort(); Console.WriteLine("In alphabetical order {0} is the " + "first dwarf while {1 } is the last.", list[0], list[6]); list.Remove("Grumpy"); } }
In alphabetical order Bashful is the first dwarf while Sneezy is the last.
C# is zero-index based; therefore, index zero in Listing 12.1 corresponds to the first element and index 6 indicates the seventh element. Retrieving elements by index does not involve a search. It involves a quick and simple "jump" operation to a location in memory.
When you use the Add() method, elements maintain the order in which you added them. Therefore, prior to the call to Sort() in Listing 12.1, "Sneezy" is first and "Grumpy" is last. Although List<T> and ArrayList support a Sort() method, nothing states that all list collections require such a method.
There is no support for automatic sorting of elements as they are added. In other words, an explicit call to Sort() is required for the elements to be sorted (items must implement IComparable). To remove an element, you use the Remove() method.
To search either List<T> or ArrayList for a particular element, you use the Contains(), IndexOf(), LastIndexOf(), and BinarySearch() methods. The first three methods search through the array, starting at the first element (the last element for LastIndexOf()), and examine each element until the equivalent one is found. The execution time for these algorithms is proportional to the number of elements searched before a hit occurs. Be aware that the collection classes do not require that all the elements within the collection are unique. If two or more elements in the collection are the same, then IndexOf() returns the first index and LastIndexOf() returns the last index.
BinarySearch() uses a binary search algorithm and requires that the elements be sorted. A useful feature of the BinarySearch() method is that if the element is not found, a negative integer is returned. The bitwise complement (~) of this value is the index of the next element larger than the element being sought, or the total element count if there is no greater value. This provides a convenient means to insert new values into the list at the specific location so as to maintain sorting (see Listing 12.2).
using System; using System.Collections.Generic; class Program { static void Main() { List<string> list = new List<string>(); int search; list.Add("public"); list.Add("protected"); list.Add("private"); list.Sort(); search = list.BinarySearch("protected internal"); if (search < 0) { list.Insert(~search, "protected internal"); } foreach (string accessModifier in list) { Console.WriteLine(accessModifier); } } }
Beware that if the list is not first sorted, an element will not necessarily be found, even if it is in the list. The results of Listing 12.2 appear in Output 12.2.
private protected protected internal public
Sometimes you must find multiple items within a list and your search criteria are more complex than looking for specific values. To support this, System.Collections.Generic.List<T> includes a FindAll() method. FindAll() takes a parameter of type Predicate<T>, which is a reference to a method called a delegate. Listing 12.3 demonstrates how to use the FindAll() method.
using System; using System.Collections.Generic; class Program { static void Main() { List<int> list = new List<int>(); list.Add(1); list.Add(2); list.Add(3); list.Add(2); List<int> results = list.FindAll(Even); Assert.AreEqual(2, results.Count); Assert.IsTrue(results.Contains(2)); Assert.IsFalse(results.Contains(3)); } public static bool Even(int value) { if ((value % 2) == 0) { return true; } else { return false; } } }
In Listing 12.3's call to FindAll(), you pass a delegate instance, Even(). This method returns true when the integer argument value is even. FindAll() takes the delegate instance and calls into Even() for each item within the list (this listing uses C# 2.0's delegate type inferencing). Each time the return is true, it adds it to a new List<T> instance and then returns this instance once it has checked each item within list. A complete discussion of delegates occurs in Chapter 13.
Another category of collection classes is the dictionary classesspecifically, Dictionary<Tkey, Tvalue> and Hashtable (see Figure 12.2). Unlike the list collections, dictionary classes store name/value pairs. The name functions as a unique key that can be used to look up the corresponding element in a manner similar to that of using a primary key to access a record in a database. This adds some complexity to the access of dictionary elements, but because lookups by key are efficient operations, this is a useful collection. Note that the key may be any data type, not just a string or a numeric value.
One option for inserting elements into a dictionary is to use the Add() method, passing both the key and the value, as shown in Listing 12.4.
using System; using System.Collections.Generic; class Program { static void Main() { Dictionary<Guid,string> dictionary = newDictionary<Guid, string>(); Guid key = Guid.NewGuid(); dictionary.Add(key, "object"); } }
Listing 12.4 inserts the string "object" using a Guid as its key. If an element with the same key has already been added, an exception is thrown.
An alternative is to use the indexer, as shown in Listing 12.5.
using System; using System.Collections.Generic; class Program { static void Main() { Dictionary<Guid, string> dictionary = newDictionary<Guid, string>(); Guid key = Guid.NewGuid(); dictionary[key] = "object"; dictionary[key] = "byte"; } }
The first thing to observe in Listing 12.5 is that the index operator does not require an integer. Instead, the index data type is specified by the first type parameter, TKey, when declaring a Dictionary<TKey, TValue> variable (in the case of Hashtable, the data type is object). In this example, the key data type used is Guid, and the value data type is string.
The second thing to notice in Listing 12.5 is the reuse of the same index. In the first assignment, no dictionary element corresponds to key. Instead of throwing an out-of-bounds exception, as an array would, dictionary collection classes insert a new object. During the second assignment, an element with the specified key already exists, so instead of inserting an additional element, the existing element corresponding to key is updated from "object" to "byte".
Accessing a value from a dictionary using the index operator ([]) with a nonexistent key throws an exception of type System.Collections.Generic.KeyNotFoundException. The ContainsKey() method, however, allows you to check whether a particular key is used before accessing its value, thereby avoiding the exception. Also, since the keys are stored in a hash algorithm type structure, the search is relatively efficient.
By contrast, checking whether there is a particular value in the dictionary collections is a time-consuming operation with linear performance characteristics. To do this you use the ContainsValue() method, which searches sequentially through each element in the collection.
You remove a dictionary element using the Remove() method, passing the key, not the element value.
There is no particular order for the dictionary classes. Elements are arranged into a hashtable type data structure using hashcodes for rapid retrieval (acquired by calling GetHashCode() on the key). Iterating through a dictionary class using the foreach loop, therefore, accesses values in no particular order. Because both the key and the element value are required to add an element to the dictionary, the data type returned from the foreach loop is KeyValuePair<TKey, TValue> for Dictionary<TKey, TValue>, and DictionaryEntry for Hashtable. Listing 12.6 shows a snippet of code demonstrating the foreach loop with the Dictionary<TKey, TValue> collection class. The output appears in Output 12.3
using System; using System.Collections.Generic; class Program { static void Main() { Dictionary<string,string> dictionary = new Dictionary<string,string>(); int index =0; dictionary.Add(index++.ToString(), "object"); dictionary.Add(index++.ToString(), "byte"); dictionary.Add(index++.ToString(), "uint"); dictionary.Add(index++.ToString(), "ulong"); dictionary.Add(index++.ToString(), "float"); dictionary.Add(index++.ToString(), "char"); dictionary.Add(index++.ToString(), "bool"); dictionary.Add(index++.ToString(), "ushort"); dictionary.Add(index++.ToString(), "decimal"); dictionary.Add(index++.ToString(), "int"); dictionary.Add(index++.ToString(), "sbyte"); dictionary.Add(index++.ToString(), "short"); dictionary.Add(index++.ToString(), "long"); dictionary.Add(index++.ToString(), "void"); dictionary.Add(index++.ToString(), "double"); dictionary.Add(index++.ToString(), "string"); Console.WriteLine("Key Value Hashcode"); Console.WriteLine("--- ------ --------"); foreach (KeyValuePair<string, string> i in dictionary) { Console.WriteLine("{0,-5}{1,-9}{2}", i.Key, i.Value, i.Key.GetHashCode()); } } }
If you want to deal only with keys or only with elements within a dictionary class, they are available via the Keys and Values properties. The data type returned from these properties is of type ICollection, and it is typed in generic or nongeneric form depending on whether a Dictionary<TKey, TValue> or Hashtable collection is used. The data returned by these properties is a reference to the data within the original dictionary collection, so changes within the dictionary are automatically reflected in the ICollection type returned by the Keys and Values properties.
The sorted collection classes (see Figure 12.3 on page 431) differ from unsorted implementation collections in that the elements are sorted by key for SortedDictionary<TKey, TValue> and by value for SortedList<T>. (There is also a nongeneric SortedList implementation.) A foreach iteration of sorted collections returns the elements sorted in key order (see Listing 12.7).
using System; using System.Collections.Generic; class Program { static void Main() { SortedDictionary<string,string> sortedDictionary = new SortedDictionary<string, string>(); int index =0; sortedDictionary.Add(index++.ToString(), "object"); // ... sortedDictionary.Add(index++.ToString(), "string"); Console.WriteLine("Key ValueHashcode"); Console.WriteLine--- -----------------"); foreach (KeyValuePair<string, string> i in sortedDictionary) { Console.WriteLine("{0,-5}{1,-9}{2}", i.Key, i.Value, i.Key.GetHashCode()); } } }
The results of Listing 12.7 appear in Output 12.4.
Note that the elements in the key are in alphabetical rather than numerical order, because the data type of the key is a string, not an integer.
When inserting or removing elements from a sorted dictionary collection, maintenance of order within the collection slightly increases execution time when compared to the straight dictionary classes described earlier. Behaviorally, there are two internal arrays, one for key retrieval and one for index retrieval. On a System.Collections.Sorted sorted list, indexing is supported via the GetByIndex() and SetByIndex() methods. With System.Collections.Generic.SortedList<TKey, TValue>, the Keys and Values properties return IList<TKey> and IList<TValue> instances, respectively. These methods enable the sorted list to behave both as a dictionary and as a list type collection.
Chapter 11 discussed the stack collection classes (see Figure 12.4). The stack collection classes are designed as last in, first out (LIFO) collections. The two key methods are Push() and Pop().
Push() places elements into the collection. The elements do not have to be unique.
Pop() retrieves and removes elements in the reverse order of how they were added.
To access the elements on the stack without modifying the stack, you use the Peek() and Contains() methods. The Peek() method returns the next element that Pop() will retrieve.
As with most collection classes, you use the Contains() method to determine whether an element exists anywhere in the stack. As with all collections, it is also possible to use a foreach loop to iterate over the elements in a stack. This allows you to access values from anywhere in the stack. Note, however, that accessing a value via the foreach loop does not remove it from the stack. Only Pop() provides this functionality.
Queue collection classes, shown in Figure 12.5, are identical to stack collection classes, except they follow the ordering pattern of first in, first out (FIFO). In place of the Pop() and Push() methods are the Enqueue() and Dequeue() methods. The queue collection behaves like a circular array or pipe. You place objects into the queue at one end using the Enqueue() method, and you remove them from the other end using the Dequeue() method. As with stack collection classes, the objects do not have to be unique, and queue collection classes automatically increase in size as required. When data is no longer needed, you recover the capacity using the TRimToSize() method.
In addition, System.Collections.Generic supports a linked list collection that enables both forward and reverse traversal. Figure 12.6 shows the class diagram. Notice there is no corresponding nongeneric type. | https://flylib.com/books/en/2.888.1.105/1/ | CC-MAIN-2019-09 | refinedweb | 2,266 | 50.23 |
To all,
I am using a Teensy 3.2 to control an Adafruit product #815 16 Channel 12 Bit PWM servo driver. The code below works fine, but when I try to use the serial monitor, nothing populates. If I rim out the two “Wire.set” lines and upload to an UNO, the serial monitor works fine. I believe this to be an issue with the Adafruit_PWMServoDriver library clashing with the 3.2 in some way. Any help or insight on this would be greatly appreciated as my final project will most definitely need to use a Teensy. Also, this is my first post to a Forum ever, so please help direct me if my post need better explanation. Thanks!
Adafruit 815: Adafruit 16-Channel 12-bit PWM/Servo Driver - I2C interface [PCA9685] : ID 815 : $14.95 : Adafruit Industries, Unique & fun DIY electronics and kits
Adafruit_PWMServoDriver library:…Driver-Library
Teensy 3.2
Windows 10 with Arduino IDE 1.8.13
Code Below:
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>
int analogPin = A0;
int val = 0;
Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver(0x40, Wire);
void setup() {
pwm.begin();
Serial.begin(9600);
Serial.println(“16 channel PWM test!”);
pwm.setOscillatorFrequency(25000000);
pwm.setPWMFreq(1000); // This is the maximum PWM frequency
Wire.setSDA(18);
Wire.setSCL(19);
Wire.setClock(100000);
}
void loop() {
int sensorValue = analogRead(A0);
Serial.print(sensorValue);
Serial.print(0x0a);
for (uint16_t i=0; i<4096; i += 1) { //replaced 8 with 1
for (uint8_t pwmnum=0; pwmnum < 16; pwmnum++) {
pwm.setPWM(pwmnum, 0, (i + (4096/16)*pwmnum) % 4096 );
delay(100);
}
}
}
Teensy 3.2 not populating serial monitor when using Adafruit_PWMServoDriver
Are you saying that code fails on a Teensy and works on an Uno. I’m not clear whether you had to comment out Wire for both.
The serial monitor works on the UNO and fails on the Teensy. Serial monitor works on the teensy until I use the “Adafruit_PWMServoDriver” library.
I’ll guess that whatever timer Adafruit is using to run the wire protocol timing is being used by the Teensy for serial, but you would need to dig into the library code to see.
You always have the Teensy forum as an option for direct support.
Thanks all for the input. I was putting my “Serial.writeln” before the for loop. This loop was taking forever due to it’s size. I have figured it out and have everything up and running. Thanks! | https://forum.arduino.cc/t/teensy-3-2-not-populating-serial-monitor-when-using-adafruit-pwmservodriver/858520 | CC-MAIN-2021-25 | refinedweb | 400 | 59.9 |
I have a string from a database field thats like this i bring into a varible words:
spaceship cars boats "subway train" rocket bicycle "18 wheeler"
words = string.split()
['spaceship', 'cars', 'boats', "'subway", "train'", 'rocket', 'bicycle', "'18", "wheeler'"]
['spaceship', 'cars', 'boats', "'subway train'", 'rocket', 'bicycle', "'18 wheeler'"]
You can also use
shlex module:
>>> x 'spaceship cars boats "subway train" rocket bicycle "18 wheeler"' >>> import shlex >>> shlex.split(x) ['spaceship', 'cars', 'boats', 'subway train', 'rocket', 'bicycle', '18 wheeler']
Another solution would be using
regex in this form:
import re re.split(your_regular_exp, x)
but this is not as simple as with shlex but may prove usefull in other cases! | https://codedump.io/share/0n2igowGRVT5/1/separate-strings-with-quotes-around-them-from-strings-without-in-python | CC-MAIN-2017-09 | refinedweb | 108 | 53.75 |
This is an experimental on-line collection of "man" pages for the routines in Meschach. What is provided here is only a subset of what is in the book "Meschach: Matrix Computations in C", which is available from the Centre for Mathematics and its Applications, School of Mathematical Sciences, Australian National University, Canberra, ACT 0200, Australia. Alternatively, send your paper mail by email address to jillian.smith@maths.anu.edu.au and say that you wish to purchase a copy of "Meschach: Matrix Computations in C". Then it will be posted to you; you pay on delivery using (international) money order, credit/charge card or similar.
The cost of the manual is A$30 (about US$22) + postage and handling. The manual is about 260 pages long.
#include "matrix.h"To use the complex variants use the include statement
#include "zmatrix.h" | http://homepage.divms.uiowa.edu/~dstewart/meschach/html_manual/manual.html | CC-MAIN-2013-20 | refinedweb | 142 | 54.93 |
Check this out: I'm going to turn off my Wifi! Gasp! What do you think will happen? I mean, other than I'm gonna miss all my Tweets and Instagrams! What will happen when I refresh? The page will load, but all the images will be broken, right?
In the name of science, I command us to try it!
Woh! An error!?
Error executing ListObjects on ... Could not contact DNS servers.
What? Why is our Symfony app trying to connect to S3?
Here's the deal: on every request... for every thumbnail image that will be rendered, our Symfony app makes an API request to S3 to figure out if the image has already been thumbnailed or if it still needs to be. Specifically, LiipImagineBundle is doing this.
This bundle has two key concepts: the resolver and the loader. But there are actually three things that happen behind the scenes. First, every single time that we use
|imagine_filter(), the resolver takes in that path and has to ask:
Has this image already been thumbnailed?
And if you think about it, the only way for the resolver to figure this out is by making an API request to S3 to ask:
Yo S3! Does this thumbnail file already exist?
If it does exist, LiipImagineBundle renders a URL that points directly to that image on S3. If not, it renders a URL to the Symfony route and controller that will use the loader to download the file and the resolver to save it back to S3.
Phew! The point is: on page load, our app is making one request to S3 per thumbnail file that the page renders. Those network requests are super wasteful!
What's the solution? Cache it! Go back to OneupFlysystemBundle and find the main page of their docs. Oh! Apparently I need Wifi for that! There we go. Go back to their docs homepage and search for "cache". You'll eventually find a link about "Caching your filesystem".
This is a super neat feature of Flysystem where you can say:
Hey Flysystem! When you check some file metadata, like whether or not a file exists, cache that so that we don't need to ask S3 every time!
Actually, it's even more interesting & useful. LiipImagineBundle calls the
exists() method on the
Filesystem object to see if the thumbnail file already exists. If that returns false, the cached filesystem does not cache that. But if it returns true, it does cache it. The result is this: the first time LiipImagineBundle asks if a thumbnail image exists, Flysystem will return false, and Liip will know to generate it. The second time it asks, because the "false" value wasn't cached, Flysystem will still talk to S3, which will now say:
Yea! That file does exist.
And because the cached adapter does cache this, the third time LiipImagineBundle calls
exists, Flysystem will immediately return
true without talking to S3.
To get this rocking, copy the composer require line, find your terminal and paste to download this "cached" Flysystem adapter.
composer require league/flysystem-cached-adapter
While we're waiting, go check out the docs. Here's the "gist" of how this works, it's 3 parts. First, you have some existing filesystem - like
my_filesystem. Second, via this
cache key, you register a new "cached" adapter and tell it how you want things to be cached. And third, you tell your existing filesystem to process its logic through that cached adapter. If that doesn't totally make sense yet, no worries.
For how you want the cached adapter to cache things, there are a bunch of options. We're going to use the one called PSR6. You may or may not already know that Symfony has a wonderful cache system built right into it. Anytime you need to cache anything, you can just use it!
Start by going to
config/packages/cache.yaml. This is where you can configure anything related to Symfony's cache system, and we talked a bit about it in our Symfony Fundamentals course. The
app key determines how the
cache.app service caches things, which is a general-purpose cache service you can use for anything, including this! Or, to be fancier - I like being fancy - you can create a cache "pool" based on this.
Check it out. Uncomment
pools and create a new cache pool below this called
cache.flysystem.psr6. The name can be anything. Below, set
adapter to
cache.app.
That's it! This creates a new cache service called
cache.flysystem.psr6 that, really... just uses
cache.app behind the scenes to cache everything. The advantage is that this new service will automatically use a cache "namespace" so that its keys won't collide with other keys from other parts of your app that also use
cache.app.
In your terminal, run:
php bin/console debug:container psr6
There it is! A new fancy
cache.flysystem.psr6 service.
Back in
oneup_flysystem.yaml, let's use this! On top... though it doesn't matter where, add
cache: and put one new cached adapter below it:
psr6_app_cache. The name here also doesn't matter - but we'll reference it in a minute.
And below that add
psr6:. That exact key is the important part: it tells the bundle that we're going to pass it a PSR6-style caching object that the adapter should use internally. Finally, set
service to what we created in
cache.yaml:
cache.flysystem.psr6.
At this point, we have a new Flysystem cache adapter... but nobody is using it. To fix that, duplicate
uploads_filesystem and create a second one called
cached_uploads_filesystem. Make it use the same adapter as before, but with an extra key:
cache: set to the adapter name we used above:
psr6_app_cache.
Thanks to this, all Filesystem calls will first go through the cached adapter. If something is cached, it will return it immediately. Everything else will get forwarded to the S3 adapter and work like normal. This is classic object decoration.
After all of this work, we should have one new service in the container. Run:
php bin/console debug:container cached_uploads
There it is:
oneup_flysystem.cached_uploads_filesystem_filesystem. Finally, go back to
liip_imagine.yaml. For the loader, we don't really need caching: this downloads the source file, which should only happen one time anyways. Let's leave it.
But for the resolver, we do want to cache this. Add the
cached_ to the service id. The resolver is responsible for checking if the thumbnail file exists - something we do want to cache - and for saving the cached file. But, "save" operations are never cached - so it won't affect that.
Let's try this! Refresh the page. Ok, everything seems to work fine. Now, check your tweets, like some Instagram photos, then turn off your Wifi again. Moment of truth: do a force refresh to fully make sure we're reloading. Awesome! Yea, the page looks terrible - a bunch of things fail. But our server did not fail: we are no longer talking to S3 on every request. Big win.
Next, let's use a super cool feature of S3 - signed URLs - to see an alternate way of allowing users to download private files, which, for large stuff, is more performant. | https://symfonycasts.com/screencast/symfony-uploads/cached-s3-filesystem | CC-MAIN-2019-43 | refinedweb | 1,216 | 76.11 |
How to use Azure Blob Storage modules by PHP
Introduction
This PHP sample application demonstrates how to make a good use of the modules provided by Microsoft Azure Blob Storage SDK. In addition, Microsoft Azure Storage SDK for PHP resource code can be found on Azure GitHub repository.
For now, the Azure Storage SDK for PHP shares almost the same interface with storage blobs, tables and queues APIs in Azure SDK for PHP. However, there are some minor breaking changes that need to be addressed during your migration.
- Remove all the pear dependencies including HTTP_Request2, Mail_mime, and Mail_mimeDecode. Use Guzzle as underlying http client library.
- Change root namespace from "WindowsAzure" to "MicrosoftAzure/Storage".
- If the set metadata operations contain invalid characters, a ServiceException with 400 bad request errors instead of Http_Request2_LogicException will appear.
This code sample is separated into two tiers. The frontend is built using AngularJs. AngularJs source code scripts are in src folder, and are combined with br gulp. The backend is built using PHP and implementing with Azure storage SDK for PHP. All the operations on Azure Blob Storage in the frontend are called via $http module, which is against the functions in PHP backend.
All the functions that are called via frontend are listed in this single script as trial.php. The function first parses the URL param key to determine which kind of resources the client end is requesting, creating/initializing or trailing the code sample.
If the URL params have the key "init", it means the client end is asking for the initialization of the code sample.
Switch the param value in following cases:
- "container" - get the trial container, its metadata and properties.
- "blob" - list the trial blobs, its metadata and properties with pagination.
- "content" - streaming output the content of the specified blob.
Building the Sample
Before running this code sample, please open settings.php in the api directory.
Here we need the storage account name and access key for setting up a correct configuration. To get your storage account and access key, please login Azure Manage Portal, click the STORAGE in the left navigation bar, select your storage account in the right side list, click the Manage Access Keys button in the bottom bar.
- Replace ACCOUNT_NAME with your storage account in settings.php.
- Replace ACCOUNT_KEY with the storage account key in settings.php.
Make sure your PHP runtime has enabled the extension "fileinfo". To check this, you can use "phpinfo()" function in PHP script or run command "php -m" in cmdlet.
(Optional) You can use WebMatrix to work with this PHP sample. WebMatrix is a free and light-weight tool that will let you create web sites using various technologies like PHP, Asp.Net, Node.js etc. In order to run this sample in Web matrix, you can choose "Open Site", then choose "Folder as Site" and select the folder that contains all the downloaded files.
(Optional) You can use PHP Build-in web server for a quick local test. Use following command in cmdlet from the code sample directory:
php -S localhost:8000
Tour the sample
Click 3 Create buttons in the home page to create the container and blobs resources in your Azure storage. The result of the operations will be directly output in the block under the buttons:
Click the trial button in the top navigation bar to start trailing the code sample.
Then it will redirect to the page that shows the container info.
Enter the container, redirect the page that shows the blob list separated by delimiter /, which can be treated as a virtual directory.
Enter the virtual directory of text blobs lists.
Click the horizontal spots button to get paginated blobs. Click the vertical spots button to show the details of the blob.
Click the disk-like button to get the content of the text blob.
Back to the root of the container, enter the video blobs virtual directory to trail the video blobs.
More Information and Resources
Azure Storage SDK for PHP:
Microsoft Azure storage documents: | https://azure.microsoft.com/es-es/resources/samples/storage-blob-php-webapplication/ | CC-MAIN-2017-30 | refinedweb | 670 | 63.7 |
New Product Announcement!
New Product Announcement!
< ?:namespace prefix = st1
– Cardinal Imaging Supplies, Inc. is pleased to announce two new products to it’s line of recharging supplies; the M3 Universal Mag Roller Press and Spray Lube OPC drum padding powder. BLOOMINGDALE, IL
< ?:namespace prefix = o
Spray Lube for OPC drums has a unique “spray-on” application which provides technicians with a mess free and easy to use padding powder. “We have had an overwhelming response from our customers. They are very excited about this revolutionary product because it truly will put an end to many problems associated with using loose padding powder” said Art Bruno, President of Cardinal Imaging.
The M3 Universal Mag Press for HP4200/4300, WX/8100 and HP9000 cartridges removes and installs hubs with one easy to use machine. It removes the hubs from the Mag Roller sleeves quickly and re-installs magnets accurately. Saves time, work space and money by eliminated the need for three separate machines.
Cardinal Imaging Supplies, Inc offers a complete line of drum coatings, mag roller coating, PCR coating and cleaning supplies. For more information contact them at (800)225-8672 630-295-5663 or visit the company website at. | http://tonernews.com/forums/topic/webcontent-archived-12672/ | CC-MAIN-2017-04 | refinedweb | 198 | 63.9 |
@Deprecated public class GroovyTestSuite extends TestSuite
A TestSuite which will run a Groovy unit test case inside any Java IDE either as a unit test case or as an application.
You can specify the GroovyUnitTest to run by running this class as an application
and specifying the script to run on the command line.
java groovy.util.GroovyTestSuite src/test/Foo.groovy
Or to run the test suite as a unit test suite in an IDE you can use
the 'test' system property to define the test script to run.
e.g. pass this into the JVM when the unit test plugin runs...
-Dtest=src/test/Foo.groovy | https://docs.groovy-lang.org/docs/groovy-3.0.8/html/gapi/groovy/util/GroovyTestSuite.html | CC-MAIN-2022-27 | refinedweb | 108 | 63.53 |
US5408598A - Method for fast generation of parametric curves employing a pre-calculated number of line segments in accordance with a determined error threshold - Google PatentsMethod for fast generation of parametric curves employing a pre-calculated number of line segments in accordance with a determined error threshold Download PDF
Info
- Publication number
- US5408598AUS5408598A US08202678 US20267894A US5408598A US 5408598 A US5408598 A US 5408598A US 08202678 US08202678 US 08202678 US 20267894 A US20267894 A US 20267894A US 5408598 A US5408598 A US 5408598A
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- sub
- curve
- δt
- error
-
Abstract
Description
This is a continuation of application Ser. No. 07/705,041, filed on May 23, 1991, now abandoned.
This invention relates to computer graphics, and more precisely, to a method and apparatus for rapidly generating curves on a computer display.
Computer graphics employ a number of techniques to reproduce curved lines and surfaces. One commonly used technique involves producing a set of points and connecting those points with straight lines to approximate the curve. The curve is successively divided into smaller pieces and then checked to see if each piece can be approximated by a straight line to within a given error threshold. It turns out that the check of the accuracy of the approximation is the dominant part of the cost of executing the curve approximation algorithm.
In a two-dimensional space, a curve is often expressed as a function of orthogonal components x and y, i.e., y is equal to f(x). In a three-dimensional coordinate system, the x, y and z coordinates may also be represented as functions of one or two of the orthogonal components. However, these representations may cause difficulty in generating the coordinate values. One alternative technique is to use a parametric representation of the curve or a representation where each coordinate value on a curve is represented as a function of some common variable, that is, a variable common to all coordinate components. For a three dimensional system, such a variable may be described as t resulting in the following: x=f(t); y=g(t); and z=h(t); for 0<t<1. A further representation is a parametric cubic curve that is represented by third order polynomials.
x=f(t)=a.sub.x t.sup.3 +b.sub.x t.sup.2 +c.sub.x t+d.sub.x
y=g(t)=a.sub.y t.sup.3 +b.sub.y t.sup.2 +c.sub.y t+d.sub.y
z=h(t)=a.sub.x t.sup.3 +b.sub.y t.sup.2 +c.sub.z t+d.sub.z
Cubic curves are important because no lower-order representation of curve segments can provide either continuity of position and slope or continuity of slope and curvature at the point where the curve segments meet. Cubic curves are also the lowest order curves that can render non-planar curves in three dimensions.
A popular prior art method for subdividing parametric cubic curves is termed "Bezier subdivision". The Bezier subdivision method will be briefly described, but no explanation or proof of its correctness will be given. Such proofs may be found in various texts on graphics, i.e., see "Fundamentals of Interactive Computer Graphic, Foley et al., Addison-Wesley Publishing Co., pp. 514-523; "Algorithms for Graphics and Image Processing", Pavlidis, Computer Science Press, pp. 220-231; and "Computational Geometry For Design and Manufacture", Faux et al., Wiley and Sons, pp. 126-145.
Given a curve and an error threshold ΔE, the Bezier algorithm produces a list of line segments which approximate the curve with errors no larger than ΔE. The curve itself is described by a control polygon consisting of 4 points, p1, p2, p3, p4. Such a control polygon is shown in FIG. 1 and which also includes cubic curve 10 is being approximated. The dimension E is the error and defines the distance between the chord p1-p4 and the apogee of curve 10. If E is greater than ΔE two more polygons are constructed from the original polygon, with the property that curve 10 lies within each new polygon, and that each new polygon is the control polygon for the part of the curve which it contains. This subdivision proceeds recursively.
An iteration of this subdivision process has three stages. The first stage is indicated in FIG. 2 wherein the following values are found: ##EQU1##
The second stage of the calculation is shown in FIG. 3 and shows the derivation of points 13 and r2 as follows: ##EQU2##
The third stage involves the derivation of points l4 and r1 as follows: ##EQU3##
At the end of the above iterations, the derived values are: ##EQU4##
As can be seen from FIG. 4, the Bezier control polygons l1-l4 and r1-r4 now better approximate curve 10. For any given resolution ΔE, the subdivision process ends when, any curve within a polygon is at most a distance ΔE from the base line of that polygon. (The base line of the polygon is the line connecting P1 to P4). The calculation may be simplified by calculating the value for d instead of ΔE (see FIG. 1).
In essence, Bezier subdivision generates a binary tree where each node of the tree corresponds to a subdivision step, and each edge of the tree descending from a node corresponds to a right or left control polygon that is the result of a subdivision at that node. In a simple implementation, the calculation cost for each node includes approximately 12 additions and 12 shifts (24 cycles). However, the error check for ΔE requires at least 350 additional cycles. Thus, most of the computational work (88%) is in computing the error condition ΔE at the end of each iteration. Similar disparities in calculation work exist for more complex equations, i.e., parametric quadratics, conics, etc.
Bezier subdivision algorithms such as the one presented for Bezier cubics exist for Bezier curves of all orders. In particular, subdivision of parametric quadratics will now be described. A parametric quadratic is a parametric curve whose coordinate functions are second order polynomials in the parameter t.
The parametric curve is described by a triangular control polygon (see FIG. 5) consisting of three points, p0, p1, p2. These points define the curve as follows:
(x(t),y(t))=p.sub.0 (1-t).sup.2 +2.sub.p 1t(1-t)+p.sub.2 t.sup.2
0<t<1
From this polygon, two more polygons are constructed with the property that half of the curve lies within each new polygon and that each new polygon is the control polygon for the part of the curve which it contains. Thus the subdivision may proceed recursively. An iteration of this subdivision process has two stages as follows: ##EQU5##
The second stage: ##EQU6## At the end of such an iteration, the values are: ##EQU7##
The notation r(p)(t) will be used to denote the curve determined by ri and l(p)(t) to denote the curve determined by li. The algorithm continues recursively with each of the two output polygons being subdivided. Just as with subdivision for cubics, the subdivision process ends when the curve within the polygon is sufficiently close to the polygon, e.g., a distance less than some threshold T away. Again, if d is the function which gives the distance from a point to a line, and l(p,q) represents the line which connects p to q, d(p1,l(p0,p2))≦T where, ##EQU8## where • is the 2-D dot product and the hat of a vector is a vector of the same length as that vector but perpendicular to it.
In this section, a prior art algorithm for drawing rational quadratics is described. It is based on using the subdivision algorithm described above to subdivide both the numerator and denominator of a rational quadratic. The control points of the subdivided numerator and denominator are divided to give the endpoints of a linear approximation to the curve.
Assume the following rational quadratic curve (x(t),y(t)) in a plane. The forms of the coordinate functions are given by: ##EQU9## Note that the numerator is expressed in the quadratic Bezier basis, as is the denominator. Henceforth, wi is required to be positive since this can be done without losing the ability to represent all rational quadratics. Geometrically, (xi, yi) are control points and wi is the weight attached to these control points. The following quantities are defined, Xi =xi wi and Yi =yi wi. These are the coordinates in the Bezier basis of the numerator, (X(t), Y(t)).
In order to draw such a curve, Bezier subdivision is applied to both the Xi,Yi and the wi to generate l(X), l(Y), l(w) and r(X), r(Y) and r(w) as in the previous section. Let s(a) denote r(a) or l(a) (i.e., "a" subdivided once). By evaluating the resultant fraction (s(X)i,s(Y)i /s(w)i, the control points of the resultant curves are obtained. The weights of the resultant curves are the s(w)i.
This is equivalent to the application of Bezier subdivision to the 3-d, second order curve (Z(t),Y(t),w(t)) followed by a perspective projection onto the x,y plane (see FIG. 6). The ending condition of the previous subsection is then applied to the curve in the x,y plane. If the subdivision is not finished, the 3-d curve is further subdivided and again projected. This continues recursively.
An Incremental Algorithms For Cubics
There are several incremental schemes for evaluating cubics. Assume a cubic curve that is specified by its two coordinate functions,
x(t)=x.sub.0 +x.sub.1 t+x.sub.2 t.sup.2 +x.sub.3 t.sup.3
y(t)=y.sub.0 +y.sub.1 t+y.sub.2 t.sup.2 +y.sub.3 t.sup.3
Suppose the values of the coordinate functions are known for some t0 and it is desired to know them for some other value t0 +Δt. Define
x.sub.t (t.sub.0)=x(t.sub.0 +Δt)-x(t.sub.0)
x.sub.tt (t.sub.0)=x.sub.t (t.sub.0 +Δt)-x.sub.t (t.sub.0)
x.sub.ttt (t.sub.0)=x.sub.tt (t.sub.0 +Δt)-x.sub.tt (t.sub.0)
y.sub.t (t.sub.0)=y(t.sub.0 +Δt)-y(t.sub.0)
y.sub.tt (t.sub.0)=y.sub.t (t.sub.0 +Δt)-y.sub.t (t.sub.0)
y.sub.ttt (t+.sub.0)=y.sub.tt (t.sub.0 +Δt)-y.sub.tt (t.sub.0)
By rewriting the above quantities in terms of the derivatives of x and y, it can be seen that xttt and yttt are constant with respect to t. Now, to get x(t+Δt) add x(t)+xt (t). To continue in this fashion, compute xt (t+Δt) in order to make the next step forward by Δt. Similarly, xtt must be updated. This leads to the following algorithm for generating points of the curve. ##EQU10##
An Incremental Algorithm for Conics
Assume a parametric quadratic coordinate function
y(t)=y.sub.0 +y.sub.1 t+y.sub.2 t.sub.2
Suppose the values of the coordinate function for some t0 are known and it is desired to know them for some other value t0 +Δt.
y.sub.t (t.sub.0)=y(t.sub.0 +Δt)-y(t.sub.0)
y.sub.tt (t.sub.0)=yt(t.sub.0 +Δt)-y.sub.t (t.sub.0)
By rewriting the above quantities in terms of the derivatives of y it can be seen that ytt is constant with respect to t. Now, to get y(t+Δt) simply add y(t)+y1 (t), continue in this fashion and compute yt (t+Δt) in order to make the next step forward by Δt. This leads to the following algorithm for generating coordinates. ##EQU11## This may be extended to an algorithm for rational quadratics in the same way the subdivision algorithm was. Namely, the above algorithm which works for a coordinate function is applied to the numerator, (X(t), Y(t)) and the denominator, w(t), and then divide to get the result.
Both prior art incremental algorithms suffer from similar problems which will be explained using cubics as an example. The incremental algorithm is more efficient than subdivision, but is also numerically ill-conditioned in the sense that errors in yttt grow cubically with the number of steps taken. That problem can be skirted, to some extent, by calculating yttt to high precision, however, precision grows cubically with the number of steps. The proper choice of value for Δt is a more difficult problem. One approach to solving this problem is adaptive subdivision which varies the size of Δt based upon the distance moved in the last step. This makes the function more expensive and more complicated.
While the Bezier subdivision technique has been described above, other derivations employ the Hermite form and the B-spline form, which forms are also considered in the above cited texts. In U.S. Pat. Nos. 4,760,548 to Baker et al, 4,912,659 to Liang 4,949,281 to Hilenbrand et al. and 4,907,282 to Daly, various aspects of B-spline curve approximation techniques are described. In U.S. Pat. Nos. 4,674,058 to Lindbloom et al. and 4,943,935 to Sato, Bezier calculations are described for curve approximation. All of the aforedescribed patents either precalculate the curve approximations and then store the results for subsequent display or calculate error values as the curves are being constructed.
In U.S. Pat. 4,855,935 to Lein, the problems inherent in recursive subdivision methods are recognized and it is suggested that a technique called "forward differencing" (advancing along a parametric curve or surface in constant parameter increments) be utilized to more efficiently generate the curve. This adaptation is performed by transforming the equation of the curve to an identical curve with different parameterization, such that the step sizes increase or decrease so that the curve proceeds in substantially uniform increments.
In U.S. Pat. No. 4,648,024 to Kato et al., curved lines (circles) instead of straight lines are employed at the lowest level of approximation for the algorithm. Other prior art concerns itself with the construction of curves that can be parameterized by arc length, i.e., a curve whose points, at equally stepped times t, are separated by a constant function or distance along the curve, (e.g. a circle). Such a system is shown in U.S. Pat. No. 4,654,805 to Shoup, II.
In U.S. Pat. No. 3,806,713 to Ryberg, a curve approximation system is described for curves having a rotational axis. Ryberg's system is based on an error in approximation of a circle by a line. Ryberg does not attempt to overcome the error calculation problem mentioned above by determining, in advance, the number of straight line segments that will be required to approximate the curve. The length of each straight line approximation is expressed by Ryberg in terms of the number of steps along the rotational axis of the curve, for each straight line approximation. That value is determined by multiplying the total number of steps to be taken along the rotational axis times a function that results from dividing a predetermined maximum error in the number of steps to be taken along a radial axis, by the total number of steps to be taken along the radial axis for a given curve. While Ryberg's procedure is useful for curves that can be parameterized by arc length, he does not teach any method for more complex curves that do not lend themselves to such parameterization.
Recently, it has been proved by Dahmen that subdivision algorithms used to reproduce curves converge in a quadratic fashion. In specific Dahmen found that the class of subdivision algorithms that includes Bezier subdivision, converges quadratically. This is equivalent to the statement that at some indeterminate point in the subdivision algorithm, the error E began to reduce in size by a factor of approximately 4 at each division step. See "Subdivision Algorithms Converge Quadratically", Dahmen, Journal of Computational and Applied Mathematics, Vol. 16, 1986, pp. 145-158. While Dahmen's results indicate that widely used subdivision algorithms do have a convergency, he does not indicate at which stage that convergency occurs and at which point in the subdivision process the error begins to be divided by a factor of 4.
Accordingly, it is an object of this invention to provide an improved method for approximating a curve through the use of straight lines, and to reduce the number of error calculations.
It is another object of this invention to provide an improved curve approximation algorithm which is particularly applicable to cubics, conics, quadratics, and other high order curvilinear equations.
A method is described which enables the prediction of the number of subdivisions of a curve that will be required by control polygons to assure that a resulting straight line representation of the curve will not exceed a preset error threshold. The method is applicable to cubics and parametric quadratics including parabolas, ellipses and hyperbolas. In each case, the prediction of the number of subdivisions eliminates the need for a detailed error calculation at each subdivision step, thereby enabling an error calculation to be carried out only once in the process.
FIGS. 1-4 illustrate the prior art Bezier subdivision of a curve to enable representation of a curve by straight line segments.
FIG. 5 illustrates a prior art Bezier subdivision of a second order curve (parabola).
FIG. 6 illustrates a perspective projection of a 3-d parabola onto a viewplane to obtain a 2-d conic.
FIG. 7 is a high level block diagram of a data processing system for carrying out the invention.
FIG. 8 is a high level flow diagram of the method of the invention as applied to a cubic curve.
FIG. 9 is a high level flow diagram of an alternate method of the invention as applied to a cubic curve.
FIG. 10 is a high level flow diagram showing the application of the invention to a conic curve.
FIG. 11 is a high level flow diagram showing a modification to the method of FIG. 10.
FIG. 12 is a high level flow diagram showing an application of the method of the invention to an incremental algorithm for approximating a cubic curve.
FIG. 13 is a high level flow diagram showing the application of the method to an incremental algorithm for the approximating a conic curve.
As indicated in the Background of the Invention, in the process of subdividing a curve into smaller segments and checking to see if each segment can be approximated by a line to within a given threshold, substantial computation time is taken up at each subdivision by the calculation of an error function. When it is realized that each subdivision creates a tree wherein the next level of subdivision doubles the number of control polygons, it can be seen that error calculations greatly hinder the curve approximation procedure. As further indicated in the Background, Dahmen has determined, theoretically, that at some point in the subdivision procedure the error decreases by a factor of four at each subsequent subdivision.
It has been found, for a curve which can be expressed as a cubic, that the reduction in error by four occurs generally after the second subdivision. Also, the largest contribution to the error function occurs as a result of attempting to simulate a complex curve with a straight line, and arises from the fact that a straight line cannot approximate the second derivative of the curve. It has been further determined that the number of subdivisions of control polygons to simulate a curve within a certain error tolerance can be predicted by carrying out an initial error computation and then dividing the found error by a factor (e.g. the value 4) an integer number of times until the resultant error is less than a predetermined value. The number of times the error function is divided is then equal to the number of subdivisions that need to be accomplished. As a result, the required number of subdivisions is then known. This allows the required number of control polygons to be constructed, and thus gives the starting and end points of the straight lines to simulate the curve. No further error function calculations are required.
The method of this invention can be carried out on a personal computer-sized data processing system, such as is shown in FIG. 7. The firmware for carrying out the method is stored in electrically programmable read only memory (EPROM) 11. The operation of the system is controlled by microprocessor 12 which communicates with the various elements of the system via bus 13. A curve's coordinate points are stored, for instance, on disk drive 14 and are transferred into random access memory (RAM) 15 when the curve approximation and display method is to be performed. Once the required number of control polygons has been determined, the coordinates of the beginning and end of each control polygon are employed by display control 16 to construct a curve approximation that is then shown on display 17.
The procedure has been found applicable to not only curves described by a cubic function, but also to rational quadratics (i.e. conics, ellipses, parabolas, and hyperbolas) and further to incremental algorithms for simulating curves. Hereafter, the algorithms for both Bezier subdivision and incremental subdivision will be described, followed by a proof section which substantiates the illustrated relationships.
The algorithm takes as its input, a cubic curve specified by its control polygon (as indicated in the background of the invention and with respect to FIGS. 1-4). An error tolerance ΔE is specified and is the maximum error which the user is willing to tolerate in the curve approximation.
Referring to FIG. 8, a flow diagram illustrating the algorithm is shown. Initially, the Bezier control polygon is provided for the curve (box 20) and the maximum error ΔE is selected (box 22). A modified error m is then calculated by the equation shown in box 24. It will be noted that the numerator of that equation is the maximum of the second derivatives of the x and y coordinate functions at the beginning and end points of the curve. It turns out that the maximum second derivative value will always occur at either the beginning or end point of the curve, so by testing for the maximum second derivative at those points, one is able to derive the maximum second derivative coordinate function for the curve.
The numerator result is divided by the maximum error ΔE and the quotient multiplied by a constant 7√2 to derive the modified error m. The constant 7√2 assures that the modified error m does not affect the calculation, substantially, until after approximately the second subdivision. The modified error m is then tested (decision box 26) to see whether its value is less than or equal to one (i.e. if the real error is less than or equal to ΔE), and if it is, a line is drawn from one end of the control polygon to the other since the error limit has been reached. If not, the polygon is subdivided (or if this is a subsequent step, the polygons are subdivided) leaving, for each subdivided polygon a left polygon and right polygon (box 30). The value of m is then divided by four and tested to determine if its new value is less than or equal to one. If not, the process repeats until the condition set in decision box 26 is met.
It can thus be seen that the modified error calculation need only be made once, after which, the value of m is divided subsequent to a next subdivision and the effort repeated until the error condition is met. As a result, sufficient polygons are generated to meet the error condition without requiring additional calculations of the error function. In effect the number of control polygons to be used is equal to 2n, where n=the number of modified error values calculated.
The following is a pseudocode listing for the procedure shown in FIG. 8.
Input is a Bezier cubic with its control points, and an error tolerance, ΔE
Derive the second derivatives of the coordinate functions at the end of points of the cubic.
Compute the Modified Error, ##EQU12## where the primes denote differentiation. Call the subdivision routine subdivision (cubic, m)
End procedure processcubic
Begin procedure subdivision (cubic, m)
If m≦1, draw a line from one endpoint of the polygon to the other.
Otherwise,
subdivide the polygon as in the prior art, yielding two polygons, Left Polygon and Right Polygon.
call subdivision (Left Polygon, m/4)
call subdivision (Right Polygon, m/4)
end procedure subdivision
An alternative procedure (see. FIG. 9) for accomplishing the same result as above described is as follows:
Input is a Bezier cubic with its control points, and an error tolerance, ΔE
Derive the second derivatives of the coordinate functions at the end of points of the cubic.
Compute a value ##EQU13## where the primes denote differentiation. Call the subdivision routine subdivision (cubic, L)
End procedure processcubic
Begin procedure subdivision (cubic, L)
If L≦1, draw a line from one endpoint of the polygon to the other.
Otherwise,
subdivide the polygon as in the prior art, yielding two polygons, Left Polygon and Right Polygon.
call subdivision (Left Polygon, L-1)
call subdivision (Right Polygon, L-1)
end procedure subdivision
The value of log4 (x) can be easily calculated by shifting. When L=log4 (m) is computed, where m is the modified error, what is desired is that L=the smallest positive integer which is not greater than log4 (m). It is also known that m>1. So, m may be computed as follows:
if m<4, then L=1;
otherwise: L=1
while (m≧4) m=m/4; L=L+1
return (L)
This computes L by successive shifting since division by 4 is equivalent to right shifting by 2.
As will be remembered from the Background section, the coordinate functions for a rational quadratic are expressed as a fraction having quadratics in the both denominator and the numerator. If x(t), y(t) are the coordinate functions of the conic, it will be recalled (for ease of expression) that the x(t) function in the numerator and denominator, is defined as X(t)/w(t). Similarly, the expression for y(t) is simplified by letting its numerator and denominator respectively equal Y(t)/w(t). The value of ΔE is then chosen as the maximum error to be tolerated in the approximation. In the following equations, "a" can represent either x or y, as the case may be. Two quantitites are defined in terms of X, Y, w. The modified error m for an ellipse and a hyperbola (parabola) are as follows: ##EQU14##
To determine if a particular curve is an ellipse or a hyperbola, the values of w0 and w1 are compared. It will be recalled that a rational quadratic has, for its beginning and end coordinates, three coordinate functions X, Y, and w, with w being a "weight" or third dimensional value which determines in which direction the curve is "pulled" and vice versa. If w0 and w1 are the beginning and end coordinate point weights, whether a curve is a hyperbola, ellipse or parabola can be determined from the following (The hyperbola modified error function is used for parabolas).
w.sub.1 >w.sub.0 =hyperbola
w.sub.1 <w.sub.0 =ellipse
w.sub.1 =w.sub.0 =parabola.
The above noted procedures are shown in boxes 40, 42, and 44 in FIG. 10.
Once the type of curve has been identified, then one or the other of the modified error equations (box 46) is solved. It will be recalled from the Background, that the Bezier polygon for a conic is an open sided triangle having points p0, p1 and p2. An equation in box 46 is thus solved by initially substituting for each "a" value the appropriate value of x. The equation is then solved. Then the appropriate values of y are inserted and the equation solved. The maximum value obtained from the solution is then equal to the modified error m for the curve.
The procedure completes by following the procedure shown in boxes 26, 28, 30, and 32 in FIG. 8. Here again, it can be seen that the number of polygonal subdivisions required to achieve the desired error is determined by the number of divisions by 4 required to reduce the value of m to less than or equal to 1.
The following is a pseudocode listing for the procedure shown in FIG. 10:
begin procedure processconic (conic, Δe)
input is a conic specified as a rational quadratic, m and an error tolerance, Δe
if the conic is an ellipse, compute m=mellipse
if the conic is an hyperbola, compute m=mhyperbola
call the conicsubdivision routine conicsubdivision (conic, m)
end procedure processconic
begin procedure conicsubdivision (conic, m)
if m≦1, draw a line from one endpoint of the conic to the other.
otherwise, subdivide the conic as in the prior art section, yielding two polygons, LeftConic and RightConic.
call conicsubdivision (LeftConic, m/4)
call conicsubdivision (RightConic, m/4) end procedure conicsubdivision
An alternative procedure (see FIG. 11) that accomplishes the same result as above is as follows:
begin procedure processconic (conic, Δe)
input is a conic specified as a rational quadratic, m and an error tolerance, Δe
if the conic is an ellipse, compute L=Log4 mellipse
if the conic is an hyperbola, compute L=Log4 mhyperbola
call the conicsubdivision routine conicsubdivision (conic, L)
end procedure processconic
begin procedure conicsubdivision (conic, m)
if L≦1, draw a line from one endpoint of the conic to the other.
otherwise, subdivide the conic as in the prior art section, yielding two polygons, LeftConic and RightConic.
call conicsubdivision (LeftConic, L-1)
call conicsubdivision (RightConic, L-1) end procedure conicsubdivision
As indicated in the Background, incremental algorithms are also used to approximate curves. Such algorithms start at one end of the curve and successively step along the curve by a predetermined amount and at each interval, determine whether the interval is small enough to give a good approximation of the curve by an inserted straight line. This invention enables the best time step along the curve to be precalculated and then simply utilized without intervening error calculations. Furthermore, rather than requiring a modified error calculation, a direct calculation is made of the optimum number of time steps to enable a given error to be achieved. It has been determined that the best number of time steps is 2L with L being dependent upon the maximum second derivative of either the x or y coordinate functions at the beginning and end points of the curve.
As shown in FIG. 12, the algorithm starts by having as its inputs, a Bezier cubic polygon specified by its coordinate functions and a specified tolerance error ΔE (box 50). The procedure then computes the value of L using the expression shown in box 52. Here again, the value of L is directly related to the maximum second derivative of one of the coordinate functions of the curve (L being the optimum number of step sizes required to achieve the error tolerance ΔE). The algorithm then proceeds along the prior art incremental subdivision route shown in boxes 54, 56 and 58. In essence, the step size is chosen as being the reciprocal of 2L. Next, finite differences xt, xtt, xttt, yt, ytt, yttt, as defined in the Background of the invention are calculated for the x and y coordinate functions at the beginning coordinate, to a precision determined by the error tolerance and the step size. Then, the functions shown in box 58 are calculated using Δt increments to determine the succeeding coordinate points. Here again, it is to be noted that subsequent to the calculation shown in box 52, there is no further error calculation. The requisite straight lines are then drawn.
A pseudo code description for this procedure follows:
compute ##EQU15## choose step size Δt=2-L compute x(0), xt (0), xtt (0), xttt (0) to a precision ΔE 2-L
compute y(0), yt (0), ytt (0), yttt (0) to a precision ΔE 2-L
for (t=Δt to 1, by Δt increments))
draw line from x(t-Δt), y(t-Δ) to x(t), y(t).
end procedure processcubic
In lieu of using the Bezier subdivision for conics, an incremental subdivision technique can be employed. In this instance, the modified error expressions for both ellipses and hyperbolas are utilized, as above described. With reference to FIG. 13, a rational quadratic and error tolerance ΔE are input (box 60). Then, if the conic is determined to be an ellipse, L is computed as shown in box 62 using the modified error equation shown in box 46, FIG. 6. If, on the other hand, the conic is determined to be a hyperbola, then L is computed, as shown in box 62, using the equation shown in box 46, FIG. 6. At this point, the number of increments required to achieve an error tolerance ΔE is known. Thus, knowing the value of L, the number of steps chosen are 2L and the step size Δt is chosen as 2-L (box 64).
At this point, the procedure continues, as in prior art incremental algorithms, and computes the values shown in box 66. Subsequently, the specific increments and their weights are derived as shown in box 68, and the requisite lines drawn.
A pseudocode description for this procedure follows:
begin procedure processconic(conic, ΔE)
input is a conic specified as a rational quadratic, and an error tolerance, ΔE
if the conic is an ellipse, compute L=log4 (mellipse)
if the conic is an hyperbola, compute L=log4 (mhyperbola)
choose step size Δt=2-L
compute X(0), Xt (0), Xtt (0) to a precision ΔE 2-L
compute Y(0), Yt (0), Ytt (0) to a precision ΔE 2-L
compute w(0), wt (0), wtt (0) to a precision ΔE 2-L
for(t=Δt to 1 by Δt)
X(t)=X(t-Δt)+X.sub.t)
draw line from x(t-Δt), y(t-Δ) to x(t), y(t)
end procedure processconic
(1) Proof of correctness of the bound for cubics
This section gives a proof of the correctness of the bound on the depth of the cursion which is necessary to make the algorithm work.
Dahmen proved [Journal of Computational and Applied Mathematics 16(1986) 145-158 "Subdivision algorithms converge quadratically"] that a class of subdivision algorithms which includes Bezier Subdivision converges quadratically and that this bound is tight in the sense that no cubic bound is possible. Using this result, along with several computations, a formula has been found for an upper bound on the depth of the tree necessary to subdivide to a given error threshold. Furthermore, this formula is readily computed (requiring 13 additions, 6 shifts, 3 compares and a variable number of shifts which is less than half the word size.) and needs to be computed only once, using quantities available before the subdivision starts.
Dahmen's result, is an estimate for the distance of the control points from the curve in terms of a constant, the granularity of the partition of the time variable, and the second derivative. The general form of his Theorem 2.1 gives an estimate of the form
|S.sub.δ -P(t.sub.δ)|<C.sub.k *δ.sup.2 *||P"||∞
Where |Sd-P(td)| is the distance between the control points and the curve, evaluated at a point which is the average of neighboring partition points. δ is a measure of the granuality of the partition, and P" is the second derivative of the curve. ||f||∞ is the L∞ norm of the function, f. k is the order of the curve. Ck is a constant which depends only on k. A special case of this result is needed where δ is a (negative) power of 2, since the subdivision can be thought of as evaluation of points on the curve for t equal to multiples of some power of 2. Due to the fact that cubics are considered, the LHS can be thought of simply as the distance from the control points to the curve. This special case result is assumed with this interpretation.
If Ck is known, then it can be found exactly how far the iteration will need to go. Dahmen's proof does not, however generate a value for Ck. The proof relies on the fact that the basis for the subdivision is uniformly stable and uses the uniform stability inequality as the starting point. In fact, a careful reading of the proof reveals that Ck is exactly the reciprocal of the constant which appears in the uniform stability inequality. So, an upper bound on Ck =m-1 is computed where ##EQU16## [Note that uniform stability of a basis is the same as m being finite and for the purposes of the proof, will be taken as the definition.] The control vector r has k+1 components and the norm on the space of control vectors is given by,
||r||=max|r.sub.i |
Bi are some basis functions for the polynomial space. So, the Σri Bi is an element of the polynomial space with control vector r. [Note that control points are simply coefficients in a basis. They are called control points because they also have some geometric significance]. That element is called Pr. In this case, the Bi are the Bernstein-Bezier (hence called B-B) Basis functions for polynomials of degree 3.
B.sub.0 (t)=(1-t).sup.3
B.sub.1 (t)=3t(2-t)
B.sub.2 (t)=3t.sup.2 (1-t)
B.sub.3 (t)=t.sup.3
So, m measures how small the sup norm of the function can get when the control points are made small. deBoor calculates the value of m for the power basis. The calculation of m for the B-B basis will follow his closely, but will have some differences which exploit features of the B-B basis. deBoor shows that an equivalent formula for m is ##EQU17## The following quantitites may be defined ##EQU18## So that m-1 =maxi mi -1 Now consider the B-B basis under the reparameterization as t goes to 1-t. This exchanges r0 with r3 and r1 with r2 but leaves the image of the curve (and hence its sup norm) unchanged. From this it can be concluded that
m.sub.0.sup.-1 =m.sub.3.sup.-1
m.sub.1.sup.-1 =m.sub.2.sup.-1
To estimate m0 -1 and m1 -1 the following fact is used together with the exact form of the transformation between the B-B basis and the power basis and the relation of the coordinates of the power basis to the derivatives. For 0≦a<b ##EQU19## where Tn-1.sup.(i) is the ith derivative of the n-1th Chebyshev polynomial, and pn is the n dimensional vector space of all polynomials of degree less than n-1. In this special case, this formula becomes ##EQU20##
Suppose a third degree polynomial, P(t). Call its coordinates in the B-B ri and its coordinates in the power basis ai, then,
r.sub.0 =a.sub.0
r.sub.1 =1/3a.sub.1 +a.sub.0
a.sub.0 =P(.sub.0)
a.sub.1 =P'(0)
It can be seen, now that m0 -1 =1 ##EQU21## The argument for m1 -1 is more involved. Pick some cubic polynomial, P, with coordinates r in the B-B basis and a in the power basis. ##EQU22## Since this inequality is valid for every polynomial in the space, it must be valid for the maximum over all polynomials in the space. ##EQU23## At the end of this calculation, the desired result is,
C.sub.3 ≦7
This may or may not be the best bound. However another calculation yields C3 ≧3.5. The difference in the upper and lower bounds is a factor of 2. Since a factor of 4 in C3 is required to double the average execution time for the algorithm there is little to gain from tightening the bound.
Now that an upper bound on C3 has been found, a calculation is needed to get the precomputable ending condition. If Dahmen's result is rewritten using the upper bound,
|S.sub.δ -P(t.sub.δ)|≦7*δ.sup.2 *||P"||∞
So, if it is required
|S.sub.δ -P(t.sub.δ)≦Δ
This can be accomplished by requiring
7*δ.sup.2 *||P"||√≦Δ
then the following is needed ##EQU24## Also, restrict d to be a power of two, say, 2-1 ##EQU25##
This reveals the error in each coordinate function, but if Euclidean error is desired another calculation is required. ##EQU26##
If the Euclidean error is bounded by ΔE, Δx and Δy must be bounded by ##EQU27## This gives a formula 1 in terms of the Euclidean error which is acceptable in the rasterized curve. ##EQU28##
Since P is assumed to be a cubic, P" is a linear function and hence must take its maximum and minimum values at the endpoints of its interval of definition. This gives ##EQU29## which is the precomputable ending condition. In the most often used special case, of ΔE =1/2, the following results
l≧log.sub.4 (max(|x"(0)|, |x"(1)|, |y"(0)|, |y"(1)|))+2.154
(2) Derivations of the ending conditions for rational quadratics (conics etc.)
This section is devoted to calculating formulae for ending conditions for conic subdivision. In para 2.2 a simple test at each iteration is shown to be an ending condition. In para 2.5 a uniform bound on the depth of subdivision necessary for a given ellipse is derived. This bound is a worst case estimate for the ending condition in 2.2. In para 2.6 a similar bound is derived for hyperbolae. In the next subsection, a bound is generated on the error in a straight line approximation to a parametric quadratic after Bezier subdivision is carried out to a depth of n. This will be used in later paragraphs to derive an ending condition and a bound on the maximum possible depth of the tree for the subdivision of rational parametric quadratics.
(2.1) Prerequisite Calculations
Consider the Bezier quadratic coordinate function
c(t)=c.sub.0 (1-t).sup.2 +2c.sub.1 t(1-t)+c.sub.2 t.sup.2
Now, consider the error made in approximating this curve for 0<t<1/2 by the line which joins c0 to c1.Also approximate c(t) by the line from c1 to c2 for 1/2<t<1. This approximation will be called approximation by the legs of the control triangle to distinguish it from approximation by the base of the control triangle which will be introduced later. The arguments for each side of 1/2 are similar and only the argument for 0<t<1/2 will be presented. In the following we will assume 0<t<1/2. The error is then,
E(c,t)=(c.sub.0 +2(c.sub.1 -c.sub.0)t)-c(t)
Take the Taylor expansion of c(t) about t=0 and calculate
E(c,t)=(2c.sub.1 -c.sub.0 c.sub.2)t.sup.2
Now, let l(c) and r(c) be the curves produced by subdividing c. The arguments for right and left side are identical, so only the left side will be shown. So, ##EQU30## Hence, with each subdivision, pointwise error is reduced by a factor of four. This is also true of the worst case error.
So, for a Curve, c(t), let sn (c)(t) denote n applications of either r or l to c(t). Hence sn (c)(t) is short hand for any of the curves which appear at depth n in the tree. In order to simplify the notation, let sn (c) denote the particular segment at depth n which is being tested for termination. This corresponds to letting sn take on specific value. For example, s2 takes the values rl, rr, lr, and ll. The pointwise error after n subdivisions is
E(s.sup.n (c), t)=(1/4).sup.n (2c.sub.1 -c.sub.0 -c.sub.2)t.sup.2
The largest absolute value of the error occurs when t=1/2 so,
max(|E(s.sup.n (c), t)|)=(1/4).sup.n+1 |2c.sub.1 -c.sub.0 -c.sub.2 |
If the curve for 0<t<1 is approximated by the line from c(0) to c(2), this is called approximation by the base of the control triangle.
By looking at a plot of c(t) and the ci in the c-t plane, it can be seen that the maximum error for approximation by the legs occur at the same point, t=1/2. This is because c(1/2) is the midpoint of the line connecting c1 with the midpoint of the line connecting c0 to c2. So, the maximum error for the two approximations is the same, occurs at the same point and the two errors are of opposite sign. From this special case, the general result can be generated.
(2.2) General Ending Condition
(2.3) Errors in the Numerator and Denominator
Recall that for the subdivision of conics, (X,Y, w) are weighted points where Xi =xi wi, Yi =yi wt which then project down under perspective projection to the correct answer. Each of these coordinate functions behaves like c of the previous paragraph. Let A denote X or Y and let a denote x or y respectively so the argument need be given only once.
The curve may be reparametrized in order to get w0 =w2. So using equation 2 it can be concluded that for approximation by the legs of the triangle,
E(s.sup.n (A), t)=(1/4).sup.n (2a.sub.1 w.sub.1 -w.sub.0 a.sub.0 -w.sub.0 a.sub.2)t.sup.2
E(s.sup.n (w), t)=(1/4).sup.n 2(w.sub.1 -w.sub.0)t.sup.2
Now, translate so that a1 lands at 0. In this new coordinate system, a0 becomes a0 -a1, a1 becomes zero, a2 becomes a2 -a1, so the error bound for sn (A)(t) becomes
E(s.sup.n (A), t)=(1/4).sup.n w.sub.0 (2a.sub.1 -a.sub.0 -a.sub.2)t.sup.2
For approximation of an ellipse by the base,
|E(s.sup.n (A), t)|≦(1/4).sup.n+1 w.sub.0 |2a.sub.1 -a.sub.0 -a.sub.2 |
0≦E(s.sup.n (w), t)≦(1/4).sup.n+1 2(w.sub.0 -w.sub.1)
recall that for an ellipse, w0 ≧w1,
(2.4) Error in a fraction
Suppose the two quantities, x+Δx and y+Δy. x and y are the true values while Δx and Δy are errors in the quantities. What is then the error in the quotient of the two quantitites. Algebra gives. ##EQU31## Applying this formula to the problem, i.e. letting x=sn (A)(t) and y=sn (w)(t), the following results ##EQU32## Dividing top and bottom by sn (w)(t) ##EQU33## By choosing approximation by the base line for ellipses and approximation by the legs for hyperbolas, it can be insured that E(sn (w),t) is non-negative. This allows the following to be written ##EQU34## By plugging in the maximum errors and taking absolute values in the numerator as well as worst case estimates for other quantities in the denominator, the following results. ##EQU35## In order to make this more computationally tractable, the fact that maxt (sn (a)(t))≦maxt (a(t)) is used and results in ##EQU36## and is precomputable before any subdivision begins. Since the control points of sn (w)(t) are known at each iteration, and are a weighted average of two quantitites, an estimate is derivable for mint (sn (w)(t)) at each iteration by using the smallest control point.
Suppose that it is wanted to ensure that E(sn (a),t)≦b. This can be done by ensuring that the right hand side of equation 4 is less than b. With rearrangement, the check for termination becomes ##EQU37## which takes one shift and one comparison at each node to compute.
Up to this point, ellipses and hyperbolas have been treated together. However, in order to prove that this is a good bound and to derive a formula to precompute the sufficient depth of a uniform tree, each case must be analyzed separately.
(2.5) Precomputable Bound for Ellipses For ellipses, it can be shown that such an estimate generates a formula for the sufficient depth of a uniform tree. This is done using the fact that for ellipses, w0 >w1 and ##EQU38## Beginning with formula 3, plugging in the bounds for ellipses, and applying the above facts, along with the fact that sn (a)(t)≦maxt (a(t)) the following results ##EQU39## If it is required that E(sn (a),t)≦b, any n satisfying ##EQU40## will do. So, for an ellipse, the ending condition will never generate a tree deeper than the bound given in the previous equation. Hence subdividing uniformly to that depth will also generate an accurate curve.
(2.6) Precomputable Bound for Hyperbolas
In this section a precomputable bound for hyperbolas is found. Starting with equation 3 and plugging in the errors for approximation by the legs, results in ##EQU41## For a hyperbola, w0 ≦w(t)≦w1 for every t. Hence, the first term in the error poses no problems, since ##EQU42## The second term, however, requires more analysis. Suppose that n is fixed and allow the meaning of s to vary. Recall that sn (w) denotes the function given by n subdivisions of w and at each subdivision, either the right or the left resultant curve may be chosen. For a hyperbola, it is clear that the choice which minimizes the absolute values of sn (w) is to choose the segment which contains one of the endpoints. Suppose the endpoint corresponding to t=0 is chosen. (The argument for t=1 is similar) Look at the quantity ##EQU43## for this segment. In particular its maximum value is to be found. A calculation shows that this quantity has nonnegative first derivatives at all points. Hence it will attain its maximum value for the maximum allowable value of t, namely 1/2. This value is ##EQU44## So with rearrangement, ##EQU45## Requiring E(sn (a),t)<b, it can be required ##EQU46## which is a precomputable depth given the curve and the error bound. Also, this is a bound on the depth to which the original error check will go since it represents a worst case for that check. (14)
w(1)>w(0)=hyperbola
w(1)<w(0)=ellipse.), and
x(0)=x coordinate function at t=0,
x.sub.t (0)=x.sub.t (t.sub.0 -Δt)-x(t.sub.0),
x.sub.tt (0)=x.sub.t (t.sub.0 +Δt)-x.sub.t (t.sub.0),
x.sub.ttt (0)=x.sub.tt (t.sub.0 +Δt) -x.sub.tt (t.sub.0),
y(0)=y coordinate function at t=0,
y.sub.t (0)=y.sub.t (t.sub.0 -Δt)-y(t.sub.0),
y.sub.tt (0)=y.sub.t (t.sub.0 +Δt)-y.sub.t (t.sub.0),
y.sub.ttt (0)=y.sub.tt (t.sub.0 +Δt) -y.sub.tt 0)(t.sub.0 +Δt)-X(0)(t.sub.0);
X.sub.tt (0)=X.sub.t (0)(t.sub.0 +Δt)-X.sub.t (0)(t.sub.0);
X.sub.tt (0)=X.sub.tt (0)(t.sub.0 +Δt)-X.sub.t (0)(t+Δt)-X(0)(t.sub.0);
X.sub.tt (0)=x.sub.t (0)(t.sub.0 +Δt)-X.sub.t (0)(t.sub.0);
X.sub.ttt (0)=X.sub.tt (0)(t.sub.0 +Δt)-X.sub.tt | https://patents.google.com/patent/US5408598A/en | CC-MAIN-2018-17 | refinedweb | 8,837 | 50.87 |
using openmp on a 64 threads system
By blue on Apr 14, 2009
What do you do when you get a 64 threads machine? I mean other than trying to find the hidden messages in Pi?
Our group recently acquired a T5120 behemoth for builds, and I wanted to see what it was capable of.
|uname -a
SunOS hypernova 5.10 Generic_127127-11 sun4v sparc SUNW,SPARC-Enterprise-T5120
|psrinfo | wc -l
64
In my case I settled a slightly less ambitious endeavor. I recently had to implement Gaussian elimination as part of a university course work, I converted it to use the OpenMP and compiled with SunStudio.
|cat Makefile
gauss: gauss.omp.c
/opt/SUNWspro/bin/cc -xopenmp=parallel gauss.omp.c -o gauss
|diff -u gauss.single.c gauss.omp.c
--- gauss.single.c Tue Apr 14 14:32:57 2009
+++ gauss.omp.c Tue Apr 14 14:44:48 2009
@@ -7,6 +7,7 @@
#include <sys/times.h>
#include <sys/time.h>
#include <limits.h>
+#include <omp.h>
#define MAXN 10000 /\* Max value of N \*/
int N; /\* Matrix size \*/
@@ -35,7 +36,7 @@
char uid[L_cuserid + 2]; /\*User name \*/
seed = time_seed();
- procs = 1;
+ procs = omp_get_num_threads();
/\* Read command-line arguments \*/
switch(argc) {
@@ -63,7 +64,7 @@
exit(0);
}
}
-
+ omp_set_num_threads(procs);
srand(seed); /\* Randomize \*/
/\* Print parameters \*/
printf("Matrix dimension N = %i.\\n", N);
@@ -170,6 +171,7 @@
}
+#define CHUNKSIZE 5
void gauss() {
int row, col; /\* Normalization row, and zeroing
\* element row and col \*/
@@ -178,7 +180,9 @@
/\* Gaussian elimination \*/
for (norm = 0; norm < N - 1; norm++) {
+ #pragma omp parallel shared(A,B) private(multiplier,col, row)
{
+ #pragma omp for schedule(dynamic, CHUNKSIZE)
for (row = norm + 1; row < N; row++) {
multiplier = A[row][norm] / A[norm][norm];
for (col = norm; col < N; col++) {
As you can see, the changes are very simple, and requires very little modification to the code. Below was my result running it in a single thread and next using all 64 threads.
First the single threaded version.
|time ./gauss 10000 1 4
Random seed = 4
Matrix dimension N = 10000.
Number of processors = 1.
Initializing...
Starting clock.
Stopped clock.
Elapsed time = 1.11523e+07 ms.
(CPU times are accurate to the nearest 10 ms)
My total CPU time for parent = 1.11523e+07 ms.
My system CPU time for parent = 1080 ms.
My total CPU time for child processes = 0 ms.
--------------------------------------------
./gauss 10000 1 4 11163.06s user 1.64s system 99% cpu 3:06:04.96 total
And now using all threads.
|time ./gauss 10000 64 4
Random seed = 4
Matrix dimension N = 10000.
Number of processors = 64.
Initializing...
Starting clock.
Stopped clock.
Elapsed time = 254993 ms.
(CPU times are accurate to the nearest 10 ms)
My total CPU time for parent = 1.53976e+07 ms.
My system CPU time for parent = 37960 ms.
My total CPU time for child processes = 0 ms.
--------------------------------------------
./gauss 10000 64 4 15371.53s user 38.51s system 5757% cpu 4:27.65 total
Now I am all set to look for my name in Pi. :)
\*the gaussian elimination source is here. | https://blogs.oracle.com/blue/tags/parallelization | CC-MAIN-2015-48 | refinedweb | 511 | 68.57 |
Java Exercises: Test whether AB and CD are orthogonal or not
Java Basic: Exercise-235 with Solution
There are four different points on a plane, P(xp,yp), Q(xq, yq), R(xr, yr) and S(xs, ys).
Write a Java program to test whether AB and CD are orthogonal or not.
Input:
xp,yp, xq, yq, xr, yr, xs and ys are -100 to 100 respectively and each value can be up to 5 digits after the decimal point It is given as a real number including the number of.
Output: Yes or No.
Sample Solution:
Java Code:
import java.util.*; import static java.lang.Math.*; class Main{ public static void main(String args[]){ System.out.println("Input xp, yp, xq, yq, xr, yr, xs, ys:"); Scanner scan = new Scanner(System.in); double x[] = new double[4]; double y[] = new double[4]; for(int i=0;i<4;i++){ x[i] = scan.nextDouble(); y[i] = scan.nextDouble(); } double a = (x[0] - x[1]) * (x[2] - x[3]); double b = (y[0] - y[1]) * (y[2] - y[3]); if((float)a + (float)b == 0) System.out.println("Two lines are orthogonal."); else System.out.println("Two lines are not orthogonal."); } }
Sample Output:
Input xp, yp, xq, yq, xr, yr, xs, ys: 3.5 4.5 2.5 -1.5 3.5 1.0 0.0 4.5 Two lines are not orthogonal.
Flowchart:
Java Code Editor:
Contribute your code and comments through Disqus.
Previous: Write a Java program to create maximum number of regions obtained by drawing n given straight lines. | https://www.w3resource.com/java-exercises/basic/java-basic-exercise-235.php | CC-MAIN-2019-47 | refinedweb | 261 | 58.99 |
We all know of the wired doorbell systems which require wires and suitable outlets for it to work satisfactorily. As the wired doorbell system needs complicated wiring, it requires an experienced person to get the work done and it does not do good, in both working and appearance. Another problem with it is that, if you want to install a wired doorbell system for an existing house, it needs more effort and time for the installation. Due to the temperature and humidity, and other environmental factors, wires are damaged and will lead to a short circuit. This is where the wireless doorbell system gets into the picture. Even though the cost of the wireless Doorbell system is more, when compared to the wired doorbell system, the regular maintenance for the wireless Doorbell system is low when compared with the wired doorbell system, which requires an experienced person for maintenance purposes. When it comes to installation, wireless doorbell systems are very simple to install and requires no experience person for installation. In addition to this, wireless doorbell systems have additional features like camera, video recorder, etc and look stylish, and it can be easily installed in any part of the house as it is completely wireless.
In this project, we are going to build a Wireless Doorbell using Arduino. We will have a button which when pressed will wirelessly play a melody of our choice to indicate someone is at the door. For wireless connectivity, we will use the 433 MHz RF module. In general, the RF module must always be accompanied by a decoder and encoder module, but in place of the decoder and encoder module, we can also use a microcontroller such as Arduino which we are using in this tutorial. If you want to build a simple wired Doorbell you can check this Doorbell using 555 IC tutorials to build one.
Hardware Required:
- RF module
- Arduino
- Buzzer
- Push-button
- Breadboard
- Connecting wires
433 MHz RF Module:
For our Arduino based Wireless Doorbell, we will be using the 433 MHz Wireless RF modules. An RF module, which is a Radio Frequency module consists of two modules, one which receives the data called receiver, and the one which transmits the data called transmitter.
Learn more about the RF transmitter and receiver by following the link.
RF Transmitter:
A transmitter consists of a SAW resonator, which is tuned to 433MHz frequency, a switching circuit, and a few passive components.
When the input to the data pin is HIGH, the switch will act as a short circuit and the oscillator runs which produces a fixed amplitude carrier wave and a fixed frequency for some period of time‘t’. When the input to the data pin is low, the switch acts as an open-circuit and the output will be zero. This is also known as Amplitude shift keying (ASK). We will discuss more on this later in this article
Receiver Circuit:
An RF receiver is a simple circuit that consists of an RF tuned circuit, an amplifier circuit, and a phase lock loop circuit.
An RF tuner is used to tune the circuit to a particular frequency, which needs to meet the transmitted frequency. An amplifier circuit is used to amplify a particular frequency from all other signals and to increase the sensitivity of the particular frequency.
Phase Lock Loop Circuit:
A phase lock loop circuit (PLL) is a circuit that is used in types of equipment in which we want a highly stable frequency from a low-frequency reference signal. A PLL is a negative feedback system that consists of a voltage-controlled oscillator and a phase comparator connected in such a way that the oscillator frequency always matches the input signal as shown below.
In the PLL circuit two signals i.e. from the reference signal and the signal from the voltage-controlled oscillator (VCO), is given as inputs to the phase detector and the output from the phase detector the difference between both the inputs, and this output is the phase difference of both the signals. This output contains frequency components, which are the sum and difference of the signals. So, this output is given as input to the low pass filter, which allows only low frequencies, and doesn’t allow the high-frequency signals to pass through. The output of the low pass filter is fed to a voltage-controlled oscillator (VCO), and this input acts as a value to the VOC, which must be changed to decrease the phase difference between both the signals. The change in the VCO takes place until the phase difference is minimal, or the output of the phase detector has a constant error output. This results in the loop lock situation.
With all these components, the receiver receives the signal from the antenna which is then tuned by RF tuned circuit and this weak signal is amplified using OP-Amp, and this amplified signal is further used as input to PLL, which makes the decoder to lock onto the incoming digital bits which gives an output which is less in noise.
Modulation:
Modulation is a process of converting data into electrical signals, and these modulated signals are used for transmission. We modulate the signals so that we can separate the necessary signal from other signals. Without modulation, all the signals having the same frequencies will get mixed, which will lead to error. There are many types of modulation the popular ones are Analog Modulation, Digital Modulation, Pulse Modulation, and Spread Spectrum.
Out of these the most popular one used in a wireless transmission is digital modulation. The popular Digital modulation techniques are Amplitude Shift Keying, Frequency Shift Keying, Phase Shift Keying, Orthogonal Amplitude Modulation.
Amplitude Shift Keying (ASK) Modulation:
In Amplitude Shift key modulation, the sinusoidal carrier will keep on generating continuous high-frequency carrier, and the signal which is to be modulated will be in the binary sequence, and these signals make the input to the switching circuit to be either high or low.
As shown in the above figure, when the input is low, the switch will act as an open circuit, and the output will be zero. When the input to the switch is high, the output will be the carrier signal.
Arduino RF Transmitter Circuit Diagram
Our wireless doorbell project will require a transmitter and receiver circuit each with its own Arduino board. The Circuit diagram for Doorbell Transmitter is shown below
The Arduino pin 5 is connected to the one end of the doorbell switch, and the other end of the switch is connected to the supply voltage. A pull-down resistor of 10kohm is connected to pin 5 as shown in the fig. Pin 11 is connected to the data pin of the transmitter module. Vcc is connected to the supply voltage, and the ground pin of the transmitter module is grounded.
In here I used a breadboard for connecting the modules, and a push-button is used as a doorbell switch.
Arduino RF Receiver Circuit Diagram
Similarly, on the receiver side, we need to use another Arduino board with the RF receiver module. Then the Arduino Doorbell Receiver circuit also has a buzzer is to play some melody when the button is pressed.
Here, we connect pin 7 of the Arduino to the buzzer positive terminal, and the negative terminal is grounded. A supply voltage of VCC is given to the receiver module, and the GND pin of the module is connected to the ground. The out pin of the receiver module is connected to the 12th pin of the Arduino.
The receiver module consists of 4 pins in which one pin is grounded, and another one pin is for giving VCC supply, and the remaining two pins are used for data transfer. In the above diagram, a buzzer is connected to the digital 7th pin of the Arduino, and the 12th pin of the Arduino is connected to the receiver module output pin.
Arduino Transmitter Code Explanation
The complete code for the Arduino transmitter part is given at the bottom of this page. The explanation of the code is as follows.
These are the header files which are needed to be included to send or receive the data using the RF module. These libraries make the connection between the Arduino and the module simple. Without these, you have to manually write the code for connecting the RF module with the Arduino. An object is created “driver” to access the commands used for sending and receiving the data. You can download the Radio Head Library for Arduino from Github.
#include <RH_ASK.h> #include <SPI.h> // Not actually used but needed to compile RH_ASK driver;
Serial.begin() is used to find whether the RF transmitter module is working or not and I have initialized the PIN 5 (digital pin 5) as an Input pin and this acts as a door-bell switch.
void setup() { Serial.begin(9600); // Debugging only pinMode(5,INPUT);
This code is used to print the message “init failed” when the RF TX module does not initialize at starting of the program and this run only ones.
if (!driver.init()) Serial.println("init failed");
The if function checks whether the pin is logic HIGH or LOW, i.e. if the doorbell switch is on the state or in an off state. The pointer msg contains the message which we want to send through a transmitter. One note is that we must know the number of characters we need to send. This will help in writing the receiver code.
if(digitalRead(5)==HIGH){ const char *msg = "a";
The strlen() command checks the length of the message, which is 1 in this case. The driver.send() commands sends the data to the tx module which are converted into waves. The command driver.waitPacketSent() is used to wait until the data is sent.
driver.send((uint8_t *)msg, strlen(msg)); driver.waitPacketSent();
Arduino Receiver Code Explanation
The Receiver program is also given at the end of this page below the Transmitter code or it can be downloaded from here. You can directly use it with your hardware; the code explanation is as follows.
These are the header files which are needed to be included to send or receive the data using the RF module. These libraries make the connection between the Arduino and the RF module simple. Without these, you have to manually write the code for connecting the RF module with the Arduino.
#include <RH_ASK.h> #include <SPI.h> // Not actually used but needed to compile
These are the header files that are created for the code to equate the values of frequency to a particular note and get the note values to get the musical tone. If you want to know more about pitches.h or how to play the melody with Arduino and buzzer you can refer to this Melody using Tone() Function tutorial.
#include "pitches.h" //add Equivalent frequency for musical note #include "themes.h" //add Note vale and duration
An object is created “driver” to access the commands used for sending and receiving the data.
RH_ASK driver;
To print messages in the serial monitor. Then the if condition checks if the initialization is failed or not.
void setup() { Serial.begin(9600); // Debugging purpose if (!driver.init()) Serial.println("init failed"); else Serial.println("done");
This whole code deals with the notes, pitches, and duration to be taken to the required melody
void Play_Pirates() { for (int thisNote = 0; thisNote < (sizeof(Pirates_note)/sizeof(int)); thisNote++) { int noteDuration = 1000 / Pirates_duration[thisNote];//convert duration to time delay tone(8, Pirates_note[thisNote], noteDuration); int pauseBetweenNotes = noteDuration * 1.05; //Here 1.05 is tempo, increase to play it slower delay(pauseBetweenNotes); noTone(8); //stop music on pin 8 } }
The command uint8_t buf[1] initializes the buf as an unsigned integer of length 8 bits, and the size of the buf variable is 1, as I told you before that we should how many bits we sent and getting the length of the buf variable in binary form.
void loop() { uint8_t buf[1]; uint8_t buflen = sizeof(buf);
This code checks whether we received the correct data and if the received signal is correct it plays the song.
if (driver.recv(buf, &buflen)) // Non-blocking Serial.println("Selected -> 'He is a Pirate' "); Play_Pirates(); Serial.println("stop");
Wireless Arduino Doorbell Working
The transmitter module, along with the Arduino is connected near the door, and the receiver module, along with Arduino, can be installed in any part of the room. When someone presses the switch, it sends the high pulse to the 5th pin of Arduino, which is connected near the door along with the transmitter module. In our Receiver code, we wrote a command- digitalRead(5), this command makes the Arduino, to keep on reading this pin. When this pin gets HIGH, Arduino transmits data through the transmitter, and these signals are received by the receiver. The Arduino, which is connected to a buzzer, reads these signals, and when the desired data is received, the if function is satisfied, and the code will initiate the function, Play_Pirates() and the music will start to play.
The complete working of Arduino based wireless Doorbell project can be found in the video linked at the bottom of this page. Hope you understood the project and enjoyed learning something useful, if you have any questions please leave them in the comment section. For other technical quires use the forums.
Complete code is given below or it can be downloaded from this link.
Doorbell Transmitter Code
Doorbell Receiver Code
Dec 29, 2019
By the Transmitter you say: Pin 11 is connected to the data pin of the transmitter module.
Is not market in the schematic. Is it connected or not?
By the Reciever you say: Here, we connect pin 7 of the Arduino to the buzzer positive terminal. But in the schematic is on the pin D5
you say: The out pin of the receiver module is connected to the 12th pin of the Arduino.
Here I have a problem see your picture from 433 MHz RF Module, 4 pins, the first is VCC the last one GND, the two in the middle I dont now.
In jour foto from the Reciever I see the Gray is the VCC and the Green is the GND and i Black go to pin D8. Wer is the Black connected, on the second or the third pin from the module?
In the code from the Reciever you say: noTone(8); //stop music on pin 8. But there is nothing conneted on pin D8 | https://circuitdigest.com/microcontroller-projects/wireless-doorbell-using-arduino | CC-MAIN-2020-50 | refinedweb | 2,434 | 59.64 |
from win32com.client import Dispatch fso = Dispatch('scripting.filesystemobject') for i in fso.Drives : print i Hameed Khan wrote: >hello all, > i was wondering how can i get list of all drives >available on my windows platform. is there any >function or something from which can i build a list >of all drives on my window platform. > >Thanks, >Hameed khan > >__________________________________ >Do you Yahoo!? >Yahoo! Hotjobs: Enter the "Signing Bonus" Sweepstakes > > >_______________________________________________ >Python-win32 mailing list >Python-win32 at python.org > > > -- Jens B. Jorgensen jens.jorgensen at tallan.com "With a focused commitment to our clients and our people, we deliver value through customized technology solutions." -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3108 bytes Desc: S/MIME Cryptographic Signature Url : | http://mail.python.org/pipermail/python-win32/2004-January/001562.html | crawl-002 | refinedweb | 131 | 60.82 |
Re: Arithmetic on function address
From: Arthur J. O'Dwyer (ajo_at_nospam.andrew.cmu.edu)
Date: 04/23/04
- ]
Date: Fri, 23 Apr 2004 17:08:34 -0400 (EDT)
On Fri, 23 Apr 2004, Stephen Biggs wrote:
>
> Eric Sosman <Eric.Sosman@sun.com> wrote...
> > Stephen Biggs wrote:
> >>
> >> Given this code:
> >> void f(void){}
> >> int main(void){return (int)f+5;}
> >>
> >> Is there anything wrong with this in terms of the standards?
> >
> > Yes and no. The Standard permits you to cast a pointer
> > value (even a function pointer value) to an integer, but it
> > does not guarantee that the result is useful or even usable.
>
> But, it then should allow you to add a value to that integer, as the
> code says, no? Is this what you mean by no guarantees of it being
> usable?
[This explanation based on N869 6.3.2.3#6.]
Not necessarily. The implementation might for instance map the
address of 'f' onto 'INT_MAX', thus producing signed-int overflow
when you try to add 5 to it. *Then*, and only then, is your program
allowed to defrost your refrigerator.
Alternatively, the implementation could map 'f' directly onto a
trap representation in 'int'; then, *any* attempt to use the value
of '(int)f' at all would trigger undefined behavior. (Note that
'(int)f' itself is still a valid construct on such systems; you
can take the 'sizeof ((int)f)' with impunity, but that's all you can
do.)
Finally, according to the word of the C99 draft standard, the
implementation is allowed to map the address of 'f' directly onto
a number so large that 'int' can't hold it. Instant undefined
behavior! (Except IMO in the case of 'sizeof', as above.)
> >> Is this legal C code?
> >
> > Legal but useless.
>
>
> Ok, fine... I agree completely that it is useless, but shouldn't correct
> code be generated for it?
If it's completely useless --- and in fact could legitimately do
*anything at all* to your machine --- then what, pray tell, would be
the "correct code" you'd expect to see generated? "Garbage in,
garbage out" is the rule that applies here. Well, more precisely,
"Something that might or might not produce garbage in, something that
might or might not be garbage out."
> >> One compiler I'm working with compiles this quietly, even with the
> >> most stringent and pedantic ANSI and warning levels, but generates
> >> code that only loads the address of "f" and fails to make the
> >> addition before returning a value from "main".
You mean that on this compiler, the expressions
(int)f AND (int)f+5
compile to the same machine code? This is odd, but perfectly legitimate,
behavior for a conforming optimizing C compiler as long as it documents
its behavior in a conforming fashion. There's nothing wrong with your
compiler (although I would say it's a weird one); there is something
wrong with your test suite.
[BTW, if any experts could explain what 6.3.2.3 #6 means by
"except as previously specified," I'd love to hear it. I don't
recall any "previously specified" cases of the pointer-to-integer
cast's being defined.]
-Arthur
- ] | http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2004-04/2837.html | crawl-002 | refinedweb | 525 | 62.48 |
> * support... Indeed... the naming issue of DLLs here is still a problem. There's no reason for LuaCheia, IUP, etc to work with a 'standard' Lua set of DLLs. I will certainly be shipping our own build of them with our application; but I'm happy to follow standard naming rules to be compatible. > * rewriting wrapper names (for example, SDL.LoadBMP instead of > SDL.SDL_LoadBMP) Yep, there are a lot of these... the Lua ones got cleaned up when moved to a "namespace".. but others are just in a table with exactly the same names as before. Another one that comes to mind is bitlib... bit.bor etc seem redundant now. Love, Light and Peace, - Peter Loveday Director of Development, eyeon Software | http://lua-users.org/lists/lua-l/2003-08/msg00064.html | CC-MAIN-2019-43 | refinedweb | 122 | 77.64 |
Talk:Lifecycle prefix
Contents
removed
The current definition for "removed" reads: "features that do not exist anymore or never existed but are commonly seen on other sources". I propose to remove the part "or never existed but are commonly seen on other sources", because this has nothing to do with "removed". If people want to tag easter eggs or errors from other maps in the OSM db they should use a distinct tag for it. --Dieterdreist (talk) 12:16, 28 January 2015 (UTC)
- Definitely, this was cut&paste from Comparison of life cycle concepts and a few other pages and probably needs a few more cleanups. RicoZ (talk) 13:12, 28 January 2015 (UTC)
- I still have a problem with the current definition, why would we encourage to tag "features that do not exist anymore" ? I believe that the consensus for not existing feature is to simply remove them from the osm DB. My proposition is the following : features that don't exist but have an high probability to be re-added by a non surveyed edit because they can be seen on commonly used imagery or import sources
- If one still want to propose a tag for features that do not exist anymore, without beeing more specific, I'd suggest to add a tag like removed:reason=nonexisting or to propose a new prefix : nonexistent:building=yes sletuffe (talk) 18:15, 4 February 2015 (UTC)
- Currently the page has this big fat warning: "Generally historic objects with no relevance to current state of a site do not belong into the main database". We do not encourage it. In some cases it may be appropriate. More importantly if there are any databases of historic objects or other special databases they may choose to use some of the lifecycle prefixes for their purposes. Documenting this possibility here will hopefully help to keep the data models consistent so the data can be mixed without too much trouble. RicoZ (talk) 21:07, 4 February 2015 (UTC)
- Sorry, I missed that warning. What I understand when reading Dieterdreist is that my usage of the removed prefix is not well suited because of the meaning of the word removed that implied it existed. At first, I didn't care about such slight differences that is why I wrote the first definition usage of removed as beeing wide to include any reasons why an object is no more. And because the destroyed prefix allready existed for that purpose. But ok, I'll use another prefix for my/our needs about easter eggs or errors from other maps. The no: prefix looks short, not used too much and does'nt necessarly have a meaning of "it existed". I'll move my documentation to a page for the no prefix sletuffe (talk) 22:04, 4 February 2015 (UTC)
Meaning of "demolished"
I think that demolished does not imply "without any traces left" and have no idea when and how it got into comparison of life cycle concepts from where I originally copied without much thinking. RicoZ (talk) 13:56, 5 February 2015 (UTC)
- I'm the one who originally added that description in 11/2014 because there was a few usage in the db of the destroyed: prefix but no description of the exact meaning. From english dictionnary demolish : To tear down or break apart the structure of; raze. Without any other clues how have other mappers used it, I'm okay to stick to that definition. sletuffe (talk) 17:17, 5 February 2015 (UTC)
For me "demolished" means "intentionally destroyed beyond repair" - this definition also would make more sense and not conflict with the use of removed, destroyed,ruined. RicoZ (talk) 13:56, 5 February 2015 (UTC)
- I'm unsure that we have less conflict however, the english definition of demolished doesn't forbid a state where there is nothing left from the structure. If demolition is "intentionally destroyed beyond repair", then it does not defers much from destroyed:. Beeing in a ruined state doesn't necessarly mean it was demolished (as demolished seams to imply "intentionnaly"), but if it was destroyed it could be in a ruined state or with nothing left. Tricky... and bike shedding imho sletuffe (talk) 17:17, 5 February 2015 (UTC)
- There may be traces left or not, "demolished" does not say anything about this aspect. "Destroyed" can be the result of willful or accidental destruction by man - or natural disaster. In contrast "demolished" is more likely to mean intentional/planned destruction. It looks ok as it is now. RicoZ (talk) 20:43, 5 February 2015 (UTC)
ruined: or ruins: ?
Also, wondering if it should be really "ruins:" instead of the currently suggested "ruined:"? True - all other prefixes are formed as past tenses of verbs but semantically ruined:building=casino is simply wrong - it could suggest the casino is commercially bankrupt. "ruins:building=casino" does not have this problem. RicoZ (talk) 13:56, 5 February 2015 (UTC)
- ruined is my choice. I don't see any incredible argument against one or the other sletuffe (talk) 17:17, 5 February 2015 (UTC)
- for my language feeling "ruins:" is much better here. "ruined" is almost never used to describe the physical state of a building but instead the financial state (bankruptcy) of a business or a bank. "Ruins" is used to describe the heap of rubble left after a disintergated building. RicoZ (talk) 20:53, 5 February 2015 (UTC)
- My native language is german so I did a search about this. "Ruined" on Wiktionary yields nothing, but Oxford, Cambridge and Oxford from 1914 all put our intended meaning first. And ruined really fit's best to all other common prefixes.--Jojo4u (talk) 20:02, 25 July 2015 (UTC)
Testing possibilities to rename prefix keys
Hi, for testing purposes I have renamed key:disused to disused:=* and key:abandoned to abandoned:*=*. Want to see what works better for users and wiki search engines.. post your findings here. RicoZ (talk) 12:28, 29 March 2015 (UTC)
- One problematic issue is that the tag template links to the version without the colon. See disused:building=church, for example. (And before you change the template, note that adding the : to the link would break links for tags like access:conditional). --Tordanik 15:54, 30 March 2015 (UTC)
- This is just a test for the easiest workaround. Currently I don't feel like hacking these templates .. I admire people who can program this strange thing which reminds me of a brainfuck dialect but it is nothing for me.
- Regarding access and access:, it could be done with redirects or the page could be split for access and access:* . RicoZ (talk) 19:53, 30 March 2015 (UTC)
- Any progress on which is better? Best from my limited point of view would be Namespace:abandoned as the page title. From your two options: Key:abandoned:* looked very alien to me when first browsed it, I'd choose Key:abandoned:. --Jojo4u (talk) 17:40, 25 July 2015 (UTC)
- Thanks for unifying the tags. But where on the wiki can we discuss page names and descriptions of namespace/prefix/postfix/extension broader? I'm not convinced to use "Key:" for a Namespace. I also came accross Template:PrefixNamespaceDescription Template:PostfixNamespaceDescription here: Template:Description.--Jojo4u (talk) 10:56, 2 August 2015 (UTC)
- Anything would be better than abusing "key:" but someone has to write the mentioned templates first. Only few people can do this. When that is done taginfo support needs to be done which only one person can do. Even as experienced programmer I am not enthusiastic about learning how to create complicated templates, have rarely ever seen something as painfull as template programming. RicoZ (talk) 11:33, 2 August 2015 (UTC)
- To see what it involves, look here: User_talk:Moresby/Description#Rationale_for_this_new_template. It might help to ask around on various devel lists, forums, and here: User_talk:Moresby#Template_support_for_namespaces RicoZ (talk) 12:07, 2 August 2015 (UTC)
- For the record, I'm not a fan of calling things like language suffixes "namespaces". That's a term people have imported from outside OSM, and it does not really fit the semantics. --Tordanik 13:53, 3 August 2015 (UTC)
- Feel free to update Namespace#Nomenclature (which I created recently) to tone it down a bit.--Jojo4u (talk) 14:17, 3 August 2015 (UTC)
- I have started Proposed features/Namespaces in wiki. Agree that semantically prefix/suffix is better than namespace. It may make sense to treat them together in one overview page and technically in one future wiki-namespace and a common template. RicoZ (talk) 17:31, 5 August 2015 (UTC)
- And many people would interpret namespace as "a conceptual space that groups classes, identifiers, etc. to avoid conflicts with items in unrelated code that have the same names", which is the only definition of the term on wiktionary. If you don't like "affix", then let's call the page "Key prefixes, suffixes and infixes" or something. Namespace is simply wrong, even our prefixes aren't really namespaces. --Tordanik 12:04, 7 August 2015 (UTC)
- Good enough for me if nothing better surfaces. Also I am wondering are there any "true" infixes or is so that those infixes merely result from situations where two or more affixes are combined? RicoZ (talk) 13:58, 7 August 2015 (UTC)
New home construction stages
When mapping new residential developments under construction, I tag a home site as planned:building=house when it's obvious that a house has been planned to be built there. By that time, the lot has been graded and usually marked. Before the street was even paved, the infrastructure connections (power, telecommunications, water, sewer) would have been extended into the lot. Often some other detail such as the lot number ref=* and street address addr:street=* can already be tagged.
Only when the ground is actually broken for building the house (or at least the outline of the foundation has been chalked in for imminent digging) do I change the tag to construction:building=house. New taggable properties will soon appear, including building:levels=* (though JOSM complains about this).
I remove the construction: prefix only when the house is occupied by the new residents or advertised as "move-in ready" by the builder (or it's very clear that all construction activity has been completed).
--T99 (talk) 19:40, 19 November 2016 (UTC) | http://wiki.openstreetmap.org/wiki/Talk:Lifecycle_prefix | CC-MAIN-2016-50 | refinedweb | 1,730 | 59.74 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.