text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
.
Saving images in WPF.
Saving images in Silverlight 3:
- ICSharpCode.SharpZipLib.Silverlight
- ImageTools
- ImageTools.IO
- ImageTools.IO.Png (only if you want .png support)
- ImageTools.IO.Bmp (only if you want .bmp support)
- ImageTools.Utils!
Please document the .net image encoder properties. The current API does not document the supported forms of TIFF encoding, jpeg encoding or PNG encoding.
Tiff (group 3, group 4, packbits, lzw, horizontal tile size, verticle tile size, background color, image bounding box)
canvas.ToImage();
do not contain defindion ToImage();
Please help me?
canvas.ToImage();
do not contain defindion ToImage();
I m having same issue any help ?
for those with the problems with the extension method. Add this to the top of your page:
using ImageTools;
HI
i saved image only what i am seen in the View, if my Image is big and scrol bars came that part is not saved.. also more than300*3000 also not able to create image .. if this thing solved it is perfect to use. Thanks
My Email id is muruganas81@gmail.com pls replay my comment if u need more info pls ask me
How we can save the image in isolated storage and display in image control with silverlight .
Pleas help me .
I have an issue I can show the ashx image in silverlight apps but when its xap use in windows .vb apps image is not going to work its not showing what is issue I could not understand how can I short out this one please hwlp me
How can I download the image like "" and save in local folder in silverlight ,
I have used so many way but I couldn't get any right solution so that this image display in .net windows application.
if i have a Image instance, not files provided by OpenFileDialog, do you have solutions to save this Image instance to server drives.
Nice one. Cheers. | https://blogs.msdn.microsoft.com/kirillosenkov/2009/10/12/saving-images-bmp-png-etc-in-wpfsilverlight/ | CC-MAIN-2017-34 | en | refinedweb |
For Postgresql there is a minor security issue. The start scripts do "su - postgres" to launch the daemon, this is to run the ~/.bash_profile file to get settings for the database. The problem with this is that such scripts are writable by the postgres user and thus the postgres user can cause their own program to run which can stuff key-presses into the input buffer of the controlling terminal, this controlling terminal is in many instances (*) the terminal of an administrative shell, and commands such as "chmod 666 /etc/shadow" could be executed. To solve this I have written a program named init_su to provide the necessary functionality from su(1) without the terminal issue. init_su closes all file handles other than 1 and 2 (stdout and stderr). File handles 1 and 2 are fstat()'d, if they are regular files or pipes then they are left open (no attack is possible through a file or pipe), otherwise they are closed and /dev/null is opened instead. /dev/null is opened for file handle 0 regardless of what it might have pointed to previously. Then setsid() is called to create a new session for the process (make it a group leader), this invalidates /dev/tty. Then the uid is changed and the daemon is started. I have attached the source code to init_su, please check it out and tell me what you think. Also this solves a minor problem with the SE Linux patched su and sudo not doing quite what we want for daemon startup. (*) On system boot and shutdown there is no problem. It's when the administrator uses /etc/init.d/postgresql to start or stop the database that there is potential for attack. -- My NSA Security Enhanced Linux packages Bonnie++ hard drive benchmark Postal SMTP/POP benchmark My home page
#include <stdio.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <pwd.h> #include <stdlib.h> #include <unistd.h> #include <syslog.h> void usage(const char * const msg) { if(msg) fprintf(stderr, "Error: %s\n\n", msg); fprintf(stderr, "Usage: init_su [-l] user -c command\n"); exit(1); } int main(int argc, char **argv) { int i, fd; int login = 0; char *command = NULL, *user = NULL, *shell = NULL, *nu_argv[4]; struct passwd *pw; int int_c = 0; while(int_c != -1) { int_c = getopt(argc, argv, "-lc:s:"); switch(int_c) { case 1: if(!strcmp(optarg, "-")) { login = 1; } else { user = optarg; } break; case 'l': login = 1; break; case 's': shell = optarg; break; case 'c': command = optarg; break; } } if(!user || !command) usage(NULL); pw = getpwnam(user); if(!pw) usage("User unknown."); if(setregid(pw->pw_gid, pw->pw_gid)) usage("Can't setgid(), are you root?"); if(setreuid(pw->pw_uid, pw->pw_uid)) usage("Can't setuid(), are you root?"); if(!shell) shell = pw->pw_shell; if(login) { nu_argv[0] = strrchr(shell, '/'); if(!nu_argv[0]) usage("Bad shell."); nu_argv[0] = strdup(nu_argv[0]); nu_argv[0][0] = '-'; } else nu_argv[0] = shell; nu_argv[1] = "-c"; nu_argv[2] = command; nu_argv[3] = NULL; close(0); for(i = 3; i < 1024; i++) close(i); openlog("initrc_su", LOG_CONS | LOG_NOWAIT, LOG_DAEMON); fd = open("/dev/null", O_RDWR); if(fd == -1) { syslog(LOG_ERR, "Can't open /dev/null when trying to execute program %s", command); return 1; } for(i = 0; i < 3; i++) { struct stat sbuf; if(i != fd && (fstat(i, &sbuf) == -1 || (!S_ISREG(sbuf.st_mode) && !S_ISFIFO(sbuf.st_mode)) )) { close(i); if(dup2(fd, i) != i) { syslog(LOG_ERR, "Can't dup2() when trying to execute program %s", command); return 1; } } } if(fd >= 3) close(fd); setsid(); /* it's OK if this fails as we get the right result anyway */ execv(shell, nu_argv); syslog(LOG_ERR, "Can't exec program %s", command); return 1; } | https://www.redhat.com/archives/fedora-devel-list/2004-July/msg01314.html | CC-MAIN-2017-34 | en | refinedweb |
Each Answer to this Q is separated by one/two green lines.
I want to ask what the
with_metaclass() call means in the definition of a class.
E.g.:
class Foo(with_metaclass(Cls1, Cls2)):
- Is it a special case where a class inherits from a metaclass?
- Is the new class a metaclass, too?
with_metaclass() is a utility class factory function provided by the
six library to make it easier to develop code for both Python 2 and 3.
It uses a little sleight of hand (see below) with a temporary metaclass, to attach a metaclass to a regular class in a way that’s cross-compatible with both Python 2 and Python 3. and c) when you subclass from a base class with a metaclass, creating the actual subclass object is delegated to the metaclass. It effectively creates a new, temporary base class with a temporary
metaclass metaclass that, when used to create the subclass swaps out the temporary base class and metaclass combo with the metaclass of your choice:(type): def __new__(cls, name, this_bases, d): return meta(name, bases, d) @classmethod def __prepare__(cls, name, this_bases): return meta.__prepare__(name, bases) return type.__new__(metaclass, 'temporary_class', (), {})
Breaking the above down:
type.__new__(metaclass, 'temporary_class', (), {})uses the
metaclassmetaclass to create a new class object named
temporary_classthat is entirely empty otherwise.
type.__new__(metaclass, ...)is used instead of
metaclass(...)to avoid using the special
metaclass.__new__()implementation that is needed for the slight of hand in a next step to work.
- In Python 3 only, when
temporary_classis used as a base class, Python first calls
metaclass.__prepare__()(passing in the derived class name,
(temporary_class,)as the
this_basesargument. The intended metaclass
metais then used to call
meta.__prepare__(), ignoring
this_basesand passing in the
basesargument.
- next, after using the return value of
metaclass.__prepare__()as the base namespace for the class attributes (or just using a plain dictionary when on Python 2), Python calls
metaclass.__new__()to create the actual class. This is again passed
(temporary_class,)as the
this_basestuple, but the code above ignores this and uses
basesinstead, calling on
meta(name, bases, d)to create the new derived class.
As a result, using
with_metaclass() gives you a new class object with no additional base classes:
>>> class FooMeta(type): pass ... >>> with_metaclass(FooMeta) # returns a temporary_class object <class '__main__.temporary_class'> >>> type(with_metaclass(FooMeta)) # which has a custom metaclass <class '__main__.metaclass'> >>> class Foo(with_metaclass(FooMeta)): pass ... >>> Foo.__mro__ # no extra base classes (<class '__main__.Foo'>, <type 'object'>) >>> type(Foo) # correct metaclass <class '__main__.FooMeta'>.
| https://techstalking.com/programming/python/python-metaclass-understanding-the-with_metaclass/ | CC-MAIN-2022-40 | en | refinedweb |
Is it possible to show only the top/bottom n groups in a
sns.countplot()?
Using an example from the seaborn website,
sns.countplot(y="deck", hue="class", data=titanic, palette="Greens_d");
Is there any easy (or even relatively straightforward) way of limiting this plot to just 3 decks (groups) instead of displaying all 7 or is this something that would be better accomplished with an
sns.bargraph or just plain matplotlib?
Solution #1:
import seaborn as sns titanic = sns.load_dataset("titanic") sns.countplot(y="deck", hue="class", data=titanic, palette="Greens_d", order=titanic.deck.value_counts().iloc[:3].index)
Solution #2:
Just adding real example instead of toy dataset.
Assuming you have Pandas Data Frame name training_var and you want to display top 10 ‘Gene’ column counts ‘order=” bit should look as follows:
sb.countplot(x="Gene',data=training_var,order=pd.value_counts(training_var['Gene']).iloc[:10].index)
The answers/resolutions are collected from stackoverflow, are licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0 .
| https://techstalking.com/programming/question/solved-limit-the-number-of-groups-shown-in-seaborn-countplot/ | CC-MAIN-2022-40 | en | refinedweb |
Jackson Annotations For Java Application
Java has immense application in the coding world, and almost all of us know this fact. One of its features is the Jackson annotations. Although that is a very familiar term with the people involved in the coding world, it is less known. Jackson is a popular and very efficient JAVA library used to map or serialize JAVA objects to JSON and vice versa.
Since Jackson is a JAVA-based library, one must know the basics of JAVA before going on with Jackson. It provides various features that are listed as below:
- Jackson is an easy-to-use library that can simplify commonly used cases.
- Since it provides default mapping, there is no need to create a mapping in Jackson.
- It has a commendable speed and low memory footprint, making it suitable for large object systems.
- The JSON results created by Jackson are compact and clean, which makes them easy to read.
- Jdk is the only library that Jackson requires. Therefore, it has no dependency.
- Above all, the Jackson library is free to use since it is open-source.
After we have learned about the use and advantages of using Jackson, we now need to understand the Jackson annotations are as follows:
- @JsonAnyGetter
- @JsonGetter
- @JsonPropertyOrder
- @JsonRawValue
- @JsonValue
- @JsonRootName
- @JsonSerialize
- @JsonCreator
- @JacksonInject
- @JsonAnySetter
- @JsonSetter
- @JsonDeserialize
- @JsonEnumDefaultValue
- @JsonIgnoreProperties
- @JsonIgnore
- @JsonIgnoreType
- @JsonInclude
- @JsonAutoDetect
- @JsonTypeInfo
- @JsonSubTypes
- @JsonTypeName
- @JsonProperty
- @JsonFormat
- @JsonUnwrapped
- @JsonView
- @JsonManagedReference
- @JsonBackReference
- @JsonIdentityInfo
- @JsonFilter
- @JacksonAnnotationsInside
Let us discuss each of the annotations in order to understand them to deeper roots by implementing them providing fragment codes for all of them.
Annotation 1: @JsonAnyGetter
t facilitates a return of Maps using a getter method. Later the maps are used to serialize the additional properties of JSON in the same way as other properties.
public class ExtendableBean { public String name; private Map<String, String> properties; @JsonAnyGetter public Map<String, String> getProperties() { return properties; } }
Annotation 2: @JsonGetter
This annotation facilitates the marking of a specific method as a getter method.
public class MyBean { public int id; private String name; @JsonGetter("name") public String getTheName() { return name; } }
Annotation 3: @JsonPropertyOrder
While you serialize a JSON object, its order might change, but this annotation facilitates preserving a specific order while the process goes on.
@JsonPropertyOrder({ "name", "id" }) public class MyBean { public int id; public String name; }
Annotation 4: @JsonRawValue
Using this annotation, one can serialize any word without any decoration or escape.
public class RawBean { public String name; @JsonRawValue public String json; }
Annotation 5: @JsonValue
Using this annotation, you can serialize an entire object using a single method.
public enum TypeEnumWithValue { TYPE1(1, "Type A"), TYPE2(2, "Type 2"); private Integer id; private String name; // Standard constructors @JsonValue public String getName() { return name; } }
Annotation 6: @JsonRootName
This annotation facilitates the appearance of a root node specified over JSON. Wrap root value also needs to be enabled.
{ "id": 1, "name": "John" }
Annotation 7: @JsonSerialize
Using this annotation, you can specify a custom serializer to marshall the JSON object.
public class EventWithSerializer { public String name; @JsonSerialize(using = CustomDateSerializer.class) public Date eventDate; }
Annotation 8: @JsonCreator
During deserialization, there is a factory method used. This annotation is used to fine-tune this method.
{ "id": 1, "theName": "My bean" }
Annotation 9: @JacksonInject
When we do not have to parse the property value, instead inject it in the JSON input, we use this annotation.
public class BeanWithInject { @JacksonInject public int id; public String name; }
Annotation 10: @JsonAnySetter
Just as the getter annotation, this facilitates a setter method for using Map, which is then used to deserialize the additional properties of JSON in the same manner as other properties.
public class ExtendableBean { public String name; private Map<String, String> properties; @JsonAnySetter public void add(String key, String value) { properties.put(key, value); } }
Annotation 11: @JsonSetter
This annotation allows any method to be marked as a setter method.
public class MyBean { public int id; private String name; @JsonSetter("name") public void setTheName(String name) { this.name = name; } }
Annotation 12: @JsonDeserialize
This annotation is used to specify a custom deserializer in order to unmarshall a JSON object.
public class EventWithSerializer { public String name; @JsonDeserialize(using = CustomDateDeserializer.class) public Date eventDate; }
Annotation 13: @JsonEnumDefaultValue
Using this annotation, we use a default value for deserializing an unknown enum value.
public class AliasBean { @JsonAlias({ "fName", "f_name" }) private String firstName; private String lastName; }
Annotation 14: @JsonIgnoreProperties
Using this annotation, you can mark a property or a group of properties to be ignored. This is done at the class level.
@JsonIgnoreProperties({ "id" }) public class BeanWithIgnore { public int id; public String name; }
Annotation 15: @JsonIgnore
This one serves the same purpose as above, the only difference being that it is used at the field level.
public class BeanWithIgnore { @JsonIgnore public int id; public String name; }
Annotation 16: @JsonIgnoreType
Using this annotation, you can mark the property of a specific type to be ignored.
public class User { public int id; public Name name; @JsonIgnoreType public static class Name { public String firstName; public String lastName; } }
Annotation 17: @JsonInclude
This annotation is used at those exclude properties that have null or empty default values.
@JsonInclude(Include.NON_NULL) public class MyBean { public int id; public String name; }
Annotation 18: @JsonAutoDetect
This annotation helps detect properties that are not visible or accessible otherwise.
@JsonAutoDetect(fieldVisibility = Visibility.ANY) public class PrivateBean { private int id; private String name; }
Annotation 19: @JsonTypeInfo
Using this annotation, you can indicate the details of the type of information that has to be included either during serialization or deserialization.
Annotation 20: @JsonSubTypes
This one is used to indicate the subtypes of the types that have been annotated.
Annotation 21: @JsonTypeName
Using this one can set type names that have to be used for annotated classes.
public class Zoo { public Animal animal; @JsonTypeInfo( use = JsonTypeInfo.Id.NAME, include = As.PROPERTY, property = "type") @JsonSubTypes({ @JsonSubTypes.Type(value = Dog.class, name = "dog"), @JsonSubTypes.Type(value = Cat.class, name = "cat") }) public static class Animal { public String name; } @JsonTypeName("dog") public static class Dog extends Animal { public double barkVolume; } @JsonTypeName("cat") public static class Cat extends Animal { boolean likesCream; public int lives; } }
Annotation 22: @JsonProperty
This annotation is used to mark the non-standard setter or getter methods that have to be used concerning JSON properties.
public class MyBean { public int id; private String name; @JsonProperty("name") public void setTheName(String name) { this.name = name; } @JsonProperty("name") public String getTheName() { return name; } }
Annotation 23: @JsonFormat
This annotation is usually used in Date fields and specifies the format during serialization or deserialization.
public class EventWithFormat { public String name; @JsonFormat( shape = JsonFormat.Shape.STRING, pattern = "dd-MM-yyyy hh:mm:ss") public Date eventDate; }
Annotation 24: @JsonUnwrapped
During the process of serialization or deserialization, it is required to unwrap the values of objects. This annotation is used to fulfill the purpose.
public class UnwrappedUser { public int id; @JsonUnwrapped public Name name; public static class Name { public String firstName; public String lastName; } }
Annotation 25: @JsonView
This annotation is used to control the values to be serialized or not.
public class Views { public static class Public {} public static class Internal extends Public {} }
Annotation 26: @JsonManagedReference
Such annotations are used for the display of objects with a parent-child relationship.
public class ItemWithRef { public int id; public String itemName; @JsonManagedReference public UserWithRef owner; }
Annotation 27: @JsonBackReference
This shares the same function as the previous one.
private class Player { public int id; public Info info; } private class Info { public int id; public Player parentPlayer; } // Something like this will come into play Player player = new Player(1); player.info = new Info(1, player);
Annotation 28: @JsonIdentityInfo
This annotation is used in a case where there is a parent-child relationship. It is used to indicate that an object identity will be used during serialization and deserialization.
Java
Annotation 29: @JsonFilter
This annotation can be used to apply filters during the process of serialization and deserialization.
@JsonFilter("myFilter") public class BeanWithFilter { public int id; public String name; }
Annotation 30: @JacksonAnnotationsInside
This annotation can be used to create customized Jackson annotations.
@Retention(RetentionPolicy.RUNTIME) @JacksonAnnotationsInside @JsonInclude(Include.NON_NULL) @JsonPropertyOrder({ "name", "id", "dateCreated" }) public @interface CustomAnnotation {}
Note: Various other annotations are used to rename properties or ignore them or choose the more or less specific types.
While using these annotations, Jackson itself uses the default constructors while creating value instances. However, one may alter it by putting the customize constructor. Also, Jackson can handle polymorphic types, that is, objects with various subtypes. This does by enabling the inclusion of type information. Therefore, these were a few points and essential information on the Jackson annotations, their use, and their importance in Java. | https://www.geeksforgeeks.org/jackson-annotations-for-java-application/?ref=rp | CC-MAIN-2022-40 | en | refinedweb |
I was building a form with Formik and I needed a single checkbox to mark a post as "published". In Formik 1.5.8, my values values weren't mapping correctly to checkboxes, so I created a generic Checkbox component to use instead of the Formik Field component.
import { Field } from "formik"; export default function Checkbox({ id, name, className }) { return ( <> <Field name={name} render={({ field, form }) => { return ( <input type="checkbox" id={id} className={className} checked={field.value} {...field} /> ); }} /> </> ); }
I only used for a single true/false value, so your mileage may vary if you're working on something else.
I extracted the code above from this CodeSandbox, so please check it out. I think it'll show you how to do a little more than my implementation does.
It looks like the checkbox issue will be fixed in version 2 of Formik according to its author Jared Palmer, but this should be a workable solution until then.
Top comments (4)
Working great thank you, I'm using it with TypeScript so here is my component for anybody that may be interested.
This post helped me out of a jam, thanks! I had to modify the
classprop into
classNamebut otherwise it worked great!
Glad it helped, and good catch! I changed it to className on my snippet.
Cool, but the field can't be unchecked with this solution 😂 | https://dev.to/tylerlwsmith/how-to-implement-a-working-checkbox-component-in-formik-1-5-8-5dmj | CC-MAIN-2022-40 | en | refinedweb |
Before you start coding you will first need to have a working installation of PyQt5 on your system. If you don't have PyQt5 set up yet, the following sections will guide you through how to do this on Windows, macOS and Linux.
This guide is also available for macOS and Linux.
Note that the following instructions are only for installation of the GPL licensed version of PyQt. If you need to use PyQt in a non-GPL project you will need to purchase an alternative license from Riverbank Computing to release your software.
Installation on Windows
PyQt5 for Windows can be installed as for any other application or library. As of Qt 5.6 installers are available to install via PyPi, the Python Package archive. To install PyQt5 from Python3 simply run --
pip3 install pyqt5
After install is finished, you should be able to run
python and
import PyQt5.
Note that if you want access to Qt Designer or Qt Creator you will need to download this from[the Qt downloads site].
To support developers in [[ countryRegion ]] I give a [[ localizedDiscount[couponCode] ]]% discount with the code [[ couponCode ]] — Enjoy!
For [[ activeDiscount.description ]] I'm giving a [[ activeDiscount.discount ]]% discount with the code [[ couponCode ]] — Enjoy! | https://www.pythonguis.com/installation/install-pyqt-windows/ | CC-MAIN-2022-40 | en | refinedweb |
Options to configure a
DataLoader.
More...
#include <dataloader_options.h>
Options to configure a
DataLoader.
Definition at line 13 of file dataloader_options.h.
The number of worker threads to launch.
If zero, the main thread will synchronously perform the data loading.
The maximum number of jobs to enqueue for fetching by worker threads.
Defaults to two times the number of worker threads.
Whether to enforce ordering of batches when multiple are loaded asynchronously by worker threads.
Set to
false for better performance if you do not care about determinism.
Whether to omit the last batch if it contains less than
batch_size examples. | https://caffe2.ai/doxygen-c/html/structtorch_1_1data_1_1_data_loader_options.html | CC-MAIN-2022-40 | en | refinedweb |
public class StreamProtectEvent extends Object
implements Parcelable
implements Parcelable
A
StreamProtectEvent
instance is a
Parcelable that indicates the status of Stream Protect.
Constant Summary
Inherited Constant Summary
From interface android.os.Parcelable
Field Summary
Public Constructor Summary
Public Method Summary
Inherited Method Summary
From class java.lang.Object
From interface android.os.Parcelable
Constants
public static final int DISABLED
Constant Value: 2
public static final int ENABLED
Constant Value: 1
public static final int STOPPED_INTERNAL_ERROR
Stream Protect has stopped because of internal errors.
Constant Value: 13
public static final int STOPPED_NO_FRAME_INFO
public static final int STOPPED_SESSION_IN_BACKGROUND
Stream Protect has detected that the current app has gone to background and stopped.
Constant Value: 12
public static final int STOPPED_WIFI_DISCONNECTED
Stream Protect has stopped as the device is no longer connected to Wifi.
Constant Value: 11
public static final int UNKNOWN_EVENT
Event codes
Constant Value: 0
Fields
public static final Creator<StreamProtectEvent> CREATOR
Public Constructors
public StreamProtectEvent (int eventCode)
Indicates the status of Stream Protect.
Public Methods
public int getEventCode ()
Extracts the status of Stream Protect. | https://developers-dot-devsite-v2-prod.appspot.com/android/reference/com/google/android/gms/streamprotect/StreamProtectEvent?hl=zh-tw | CC-MAIN-2022-40 | en | refinedweb |
1. Building a JIT: Starting out with KaleidoscopeJIT¶
1.1. Chapter 1 Introduction¶: Extend the basic KaleidoscopeJIT by adding a new layer that will optimize IR and generated code.
- Chapter #3: Further extend the JIT by adding a Compile-On-Demand layer to lazily compile IR.
- Chapter #4: Improve the laziness of our JIT by replacing the Compile-On-Demand layer with a custom layer that uses the ORC Compile Callbacks API directly to defer IR-generation until functions are called.
- Chapter #5: Add process isolation by JITing code into a remote process with reduced privileges using the JIT Remote APIs.
To provide input for our JIT we will use a lightly modified version of the Kaleidoscope REPL from Chapter 7.
1.2. JIT API Basics¶
The purpose of a JIT compiler is to compile code “on-the-fly” as it is needed, rather than compiling whole programs to disk ahead of time as a traditional compiler does. To support that aim our initial, bare-bones JIT API will have just two functions:
Error addModule(std::unique_ptr<Module> M): Make the given IR module available for execution.
Expected<JITEvaluatedSymbol> lookup(): Search for pointers to symbols (functions or variables) that have been added to the JIT.
A basic use-case for this API, executing the ‘main’ function from a module, will look like:.
1.3. KaleidoscopeJIT¶
In the previous section we described our API, now we examine a simple implementation of it: The KaleidoscopeJIT class [1] that was used in the Implementing a language with LLVM tutorials. We will use the REPL code from Chapter 7 of that tutorial to supply the input for our JIT: Each time the user enters an expression the REPL will add a new IR module containing the code for that expression to the JIT. If the expression is a top-level expression like ‘1+1’ or ‘sin:
: ExecutionSession ES; RTDyldObjectLinkingLayer ObjectLayer; IRCompileLayer CompileLayer; DataLayout DL; MangleAndInterner Mangle; ThreadSafeContext Ctx; public: KaleidoscopeJIT(JITTargetMachineBuilder JTMB, DataLayout DL) : ObjectLayer(ES, []() { return std::make_unique<SectionMemoryManager>(); }), CompileLayer(ES, ObjectLayer, ConcurrentIRCompiler(std::move(JTMB))), DL(std::move(DL)), Mangle(ES, this->DL), Ctx(std::make_unique<LLVMContext>()) { ES.getMainJITDylib().addGenerator( cantFail(DynamicLibrarySearchGenerator::GetForCurrentProcess(DL.getGlobalPrefix()))); }
constructed std::make_unique<KaleidoscopeJIT>(std::move(*JTMB), std::move(*DL)); } classes’
1.4. Full Code Listing¶
Here is the complete code listing for our running example.++ -*-===// // // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. // See for license information. // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception // //===----------------------------------------------------------------------===// // // Contains a simple JIT definition for use in the kaleidoscope tutorials. // //===----------------------------------------------------------------------===// /ExecutorProcessControl: std::unique_ptr<ExecutionSession> ES; DataLayout DL; MangleAndInterner Mangle; RTDyldObjectLinkingLayer ObjectLayer; IRCompileLayer CompileLayer; JITDylib &MainJD; public: KaleidoscopeJIT(std::unique_ptr<ExecutionSession> ES, JITTargetMachineBuilder JTMB, DataLayout DL) : ES(std::move(ES)), DL(std::move(DL)), Mangle(*this->ES, this->DL), ObjectLayer(*this->ES, []() { return std::make_unique<SectionMemoryManager>(); }), CompileLayer(*this->ES, ObjectLayer, std::make_unique<ConcurrentIRCompiler>(std::move(JTMB))), MainJD(this->ES->createBareJITDylib("<main>")) { MainJD.addGenerator( cantFail(DynamicLibrarySearchGenerator::GetForCurrentProcess( DL.getGlobalPrefix()))); } ~KaleidoscopeJIT() { if (auto Err = ES->endSession()) ES->reportError(std::move(Err)); } static Expected<std::unique_ptr<KaleidoscopeJIT>> Create() { auto EPC = SelfExecutorProcessControl::Create(); if (!EPC) return EPC.takeError(); auto ES = std::make_unique<ExecutionSession>(std::move(*EPC)); JITTargetMachineBuilder JTMB( ES->getExecutorProcessControl().getTargetTriple()); auto DL = JTMB.getDefaultDataLayoutForTarget(); if (!DL) return DL.takeError(); return std::make_unique<KaleidoscopeJIT>(std::move(ES), std::move(JTMB), std::move(*DL)); } const DataLayout &getDataLayout() const { return DL; } JITDylib &getMainJITDylib() { return MainJD; } Error addModule(ThreadSafeModule TSM, ResourceTrackerSP RT = nullptr) { if (!RT) RT = MainJD.getDefaultResourceTracker(); return CompileLayer.add(RT, std::move(TSM)); } Expected<JITEvaluatedSymbol> lookup(StringRef Name) { return ES->lookup({&MainJD}, Mangle(Name.str())); } }; } // end namespace orc } // end namespace llvm #endif // LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H | https://llvm.org/docs/tutorial/BuildingAJIT1.html | CC-MAIN-2022-40 | en | refinedweb |
A system for dealing with local settings in Django projects
Project description
Local settings for Django projects.
TODO: Maybe add support for different config file format (e.g., YAML)?
Once the local settings are defined, any missing settings will be prompted for in the console (with pretty colors and readline support).
Features
- 2.7 and 3.5 - 3.8 (using six)
- Python 3.3 and 3.4 aren't officially supported but there shouldn't be any issues on those versions since 2.7 is officially supported (for now).
- Supports Django 1.7 - 2.2
Basic usage
At the top of your project's settings module, import the
load_and_check_settingsfunction along with the types of settings you need:
from local_settings import load_and_check_settings, LocalSetting, SecretSetting
Then define some base settings and local settings:
PACKAGE = 'top_level_package_name' DEBUG = LocalSetting(default=False) DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': LocalSetting(default='{{ PACKAGE }}'), 'USER': LocalSetting(''), 'PASSWORD': SecretSetting(), 'HOST': LocalSetting(''), 'PORT': '', }, } SECRET_KEY = SecretSetting(doc='The secret key for doing secret stuff')
As you can see, local settings can be defined anywhere within the definition of a top level setting. They can also have doc strings, which are displayed when prompting.
This also demonstrates interpolation. The
DATABASES.default.NAMEsetting will be replaced with the
PACKAGEsetting, so that its default value is effectively
'top_level_package'.
After all the local settings are defined, add the following lines:
_settings = load_and_check_settings(globals()) globals().update(_settings)
These two lines merge the project's local settings into the settings module's namespace. Passing
globals()initializes the local settings loader with base settings (e.g.,
PACKAGEin the example above) and by "telling" it which settings are local settings.
load_and_check_settings()loads the project's local settings from a file (
$PWD/local.cfgby default), prompting for any that are missing, and returns a new dictionary with local settings merged over any base settings. When not running on a TTY/console, missing local settings will cause an exception to be raised.
globals().update(_settings)merges all of the settings into the settings module's namespace. After this line runs, you will be able to use the local settings just like any other settings. For example, you could do
if DEBUG: ...; at this point,
DEBUGis no longer a
LocalSettinginstance--it's a regular old bool.
Note: You could just write
globals().update(load_and_check_settings(globals())). The spelling above is just intended to make it more clear what's happening.. | https://pypi.org/project/django-local-settings/1.0b11/ | CC-MAIN-2022-40 | en | refinedweb |
sorry for the noob question.
i'm new to python. i'm trying to learn about classes and def.
i created a simple one here called test2.py
class test: def Test1(T1): T1 = "this is a test"
i have another python file, which is the main program called test1.py
from test2 import test print Test1
but i keep getting this error:
Traceback (most recent call last): File "test1.py", line 5, in <module> print Test1 NameError: name 'Test1' is not defined
am i doing this correct? can someone help me? | https://www.daniweb.com/programming/software-development/threads/340065/print-error-plz-help | CC-MAIN-2018-47 | en | refinedweb |
tag:blogger.com,1999:blog-20725068039118517152018-11-12T01:47:41.469-08:00No Joke ITInformation technology notes, tips and rants worth sharing.SixDimensionalArray down ESXi 5.1 guest VMs and the host (free edition) via SSH - the easy way!Thanks go out to reader Everett for pointing out an easier way to gracefully shutdown guest VMs and the host on a VMware ESXi 5.1 (free) server. <br /><br />This is much easier than the method described in the <a href="" target="_blank">previous post</a>.<br /><br />You may want to gracefully shut down your guest VMs and host ESXi 5.1 server via SSH, for example, on the triggering of a UPS power outage event or something similar. <br /><br />The method is as follows:<br /><br />1) Install VMware Tools in all guest VMs.<br /><br />2) Make sure each guest VM is setup to perform the shutdown action "Guest Shutdown" (or you could also use a suspend, if you wanted to) in the virtual host settings "Virtual Machine Startup and Shutdown" section.<br /><br />3) The following two commands, run in sequence, will shutdown the properly configured guest VMs and the host server also:<br /><br />/sbin/shutdown.sh && /sbin/poweroff<br /><br />These commands can be run in sequence via an SSH connection from another system (for example, a batch file and plink on Windows, on a machine running a UPS). The poweroff will only run if the shutdown.sh script runs successfully.<br /><br />4) That's it!<br /><br />Thanks Everett!<img src="" height="1" width="1" alt=""/>SixDimensionalArray shutdown of an ESXi 5.1 host and guest VMs (free edition) using the shell/command line/scripting (UPS friendly)<i>Update 2/11/2013: A much easier method for doing this has been <a href="" target="_blank">documented in this blog post</a>. Thanks to reader Everett for the suggestion!</i><br /><i><br /></i><i>Update 2/7/13: A shell script that does what this post describes has been <a href="" target="_blank">posted at github</a>. Enjoy!</i><br /><br />On a single ESXi 5.1 host (INCLUDING the free edition), I have been able to gracefully shutdown, poweroff or reboot the host and guest VMs using the commands documented below from the ESXi 5.1 shell.<br /><br />You may want to do this in response to an uninterruptible power supply (UPS) power failure event trigger. In that case, you will need to install at least one guest VM (consider the <a href="" target="_blank">VMware Virtual Management Appliance</a>) that can run your UPS' software or Linux's <a href="" target="_blank">Network UPS Tools (NUT)</a>..<br /><br /. <br /><br />Or you might just want to shut things down or do other maintenance via the shell/command line which these commands allow you to do.<br /><br /><b>The two command-line tools used here are vim-cmd and esxcli. </b><br /><br />If you type vim-cmd, or vim-cmd <namespace> the tool has pretty good command-line help for figuring out what it can do - and that is quite a bit! <br /><br />NOTE: I have not seen this method documented elsewhere and so you must assume this method is not officially supported by VMware - but it seems to work fine (and it may be able to be be improved on as well)!<b> </b><br /><br /><b>Command List/Sequence:</b><br /><br /><b>1) list all vms</b><br /><br />~ # vim-cmd vmsvc/getallvms<br /><br /><b>2) gracefully shutdown a vm (uses the VM's "world id") - you can also use power.off, power.reboot, power.suspend, etc.</b><br /><b> </b> <br />~ # vim-cmd vmsvc/power.shutdown <VM/"world id" from step 1><br /><br /><b>3) enter maintenance mode (immediately with no delay, this can only be done if ALL guest VMs have been shut down)</b><br /><br />~ # esxcli system maintenanceMode set -e true -t 0 <br /><br /><b>4) shutdown the ESXi host server</b><br /><br />~ # esxcli system shutdown poweroff -d 10 -r "Shell initiated system shutdown"<br /><br /><b>5) try to exit maintenance mode real quick before shutdown!</b> <br /><br />~ # esxcli system maintenanceMode set -e false -t 0<br /><br />If step # 5 does not succeed, your system will reboot in maintenance mode and you will have to manually take the system out of maintenance mode and restart your guest VMs. <br /> <br />These commands can be built into a simple shell script that you can then deploy on the ESXi host server itself. I have written one such script, and you can download it from GitHub.<br /><br /><a href="" target="_blank">Download esxidown (via github)</a><br /><br />There may be more information available on <a href="" target="_blank">this VMware forums post</a> (11/30/2012).<img src="" height="1" width="1" alt=""/>SixDimensionalArray Chrome pages not loading, pages appear gray<b>Update 11/27/2012 7:25PM: </b><br /><b><br /></b>This <a href="" target="_blank">Microsoft website</a> and on <a href="" target="_blank">VirusTotal</a>.<br /><br /.<br /><br />Scans with the latest up-to-date version of Microsoft's Security Essentials caught the virus (and hopefully other anti-virus vendors have now implemented signatures for it as well).<br /><br />Try an updated anti-virus scan and see if resolves your issue!<br /><br /><b>Update 9/6/2012 11:08AM< - <a href="" target="_blank">see this post on the Google product forums</a>.<br /><br /).<br /><br />The gray pages look like this:<br /><br /><div style="text-align: center;"><img alt="" height="237" src=" width="320" /> </div><br />I located the folder in which Chrome is installed. In this case (Windows 7 64-bit), it was:<br /><br />C:\Users\<your username>\AppData\Local\Google\Chrome\Application<br /><br />where <your username> is the Windows user account you used to log on. <br /><br /><b>TEMPORARY/POTENTIAL FIX:</b> <br /><br />Although it is odd and may not work for everyone, I was able to run the program "C:\Users\<your username>\AppData\Local\Google\Chrome\Application\old_chrome.exe", and it loaded up a previous version of Chrome which loaded as it was normally expected.<br /><br /><b>Navigate using Windows Explorer in Windows 7 to: </b><br /><br />C:\Users\<username>\AppData\Local\Google\Chrome\Application\old_chrome.exe<br /><br /><br />;"><b>Run "old_chrome" just once and then close the program and run Chrome as you normally do.</b></td></tr></tbody></table.<br /><br />I'm not sure completely why it fixed the problem (for that, we will have to wait and see what Google says regarding the issue), but it did.<br /><br />Here is a <a href="" target="_blank">related post ("Page not loading in Chrome") on the Google product forums</a> that might help you if the fix above doesn't solve the problem for you.<br /><br />Looks like a little Google bug. Oops!<img src="" height="1" width="1" alt=""/>SixDimensionalArray Curse of "Being the IT Guy/Gal"This is a truth that my fellow "IT Guys/Gals" can most likely identify with:<br /><br / <i>really</i> don't want to mess with your <b>own</b> personal IT stuff.<br /><br / <br /><br />I find it especially true for folks like me whose hobby became their job. Hey, I'm grateful I have a job.. and damn lucky my hobby fit the bill - don't get me wrong. That said, if you happen to be the local IT hero or <a href="">MacGyver programmer</a> of your office, where your day is spent doing anything from fixing printers, to writing shell scripts in Linux, to supporting legacy code, answering that support call (or 100s of them) or writing reams of new code - you know exactly what I'm talking about.<br /><br /!<br /><br /. <br /><br />Maybe better said, even if you love technology and want to mess with it 24/7, eventually, it will mess with <b>you</b> and when you feel that intense need to take a break and do something else - DO IT!<br /><br />After all, if you don't, the machines win. And we can't have that happen, can we? It didn't go too well for <a href="">John Connor</a>.<br /><br />Actually... envisioning my router with glowing blue LEDs as a <a href="">T2</a>... oh brother.. we're already there. Haha, pulled the power cable - die T2 die!! Oh crap, maybe I should plug that backbone connection back in. I felt the power and now I feel the pain! <br /><br />Heed the call fellow IT warriors, for constant IT work at home and on the job is a surefire path to burnout! Remember to have some fun once in a while! <br /><br />Now to try to take my own advice, and put down the keyboard and mouse for you know, ten minutes. At least until the next server alert or upgrade or status bar appears.<img src="" height="1" width="1" alt=""/>SixDimensionalArray 5.0 Auto-Start Broken - Fix/Patch Released!!If <a href="" target="_blank">revert back to the old version (5.0.0)</a>. <br /><br />It appears that a <a href="">patch has been released</a> - see this <a href="" target="_blank">VMWare blog post</a> for more information. Here's the direct link to the patch on the VMWare website: <a href="">ESXi500-201207001.zip</a><br /><br />Here is a <a href="" target="_blank">method of installing the patch via the CLI</a>.<br /><br />I applied the patch against an ESXi 5.0.0 U1 server in my lab by uploading it using vSphere Client to the main datastore, SSH'ing into the machine, and then running the following command:<br /><br />esxcli software vib install --depot=/vmfs/volumes/datastore1/ESXi500-201207001.zip<br /><br />I'm hoping they will roll this patch into the next major release... no idea when that comes out though.<img src="" height="1" width="1" alt=""/>SixDimensionalArray I buy Facebook stock? Why is FB valuable?Should I buy Facebook stock? The better question is, <b>why is Facebook valuable</b>? <br /><br />On the eve of the <a href="" target="_blank">Facebook</a> (<a href="" target="_blank">NASDAQ:FB</a>) IPO, I felt compelled as a technologist to share an important observation about the company and its products:<br /><blockquote class="tr_bq"><b>T</b><b>he value of Facebook rests in identity, communication, sharing, and recording/making relationships between data about people, places, events and things.</b></blockquote><br />Facebook has done what no other organization in the entire world has managed to do, and that is to <b>catalog the identity</b> (usually at least a name, photograph, maybe a hometown) of 800 million people.<br /><br />Even such projects as <a href="" target="_blank">OpenID</a>, explicitly designed to try solve the problem of giving you one unique identifier to use at websites around the world, have never taken hold to the extent that Facebook has.<br /><br />Using the simplest search tools, and the relationships you have with your contacts/friends (and the power of <a href="" target="_blank">6 degrees of separation</a>), there is a very likely chance that you can locate and identify nearly any person you know, who is on Facebook, and that they can locate you.<br /><br /.<br /><br /.<br /><br />They are all made somewhat extraneous if one can verify identity using Facebook. <br /><br />Your Facebook page is a public way of identifying yourself, to a large number of people and organizations. This is the reason for the advice - watch what you say on Facebook, you never know who will see it and how long it will be around for!<br /><br />The <a href="" target="_blank">IPO is set to price around $38 a share </a>and raise approximately $16 billion dollars at that price.<br /><br />Think about what $16 billion can do in making your life better through ancillary services that surround Facebook, but making use of this core <b>identity</b> feature.<br /><br /><br />Add to it, the fact that every website you see these days, every news article, everything on the web, has a giant "LIKE" button next to it. Everything is personalized to suit what these systems think your tastes are (also known as a "<a href="" target="_blank">filter bubble</a>"). What about all the other actions that could be tracked?<br /><br /><b>The like button is an action</b>. It is a communication tool, it is a sharing tool. By clicking it, content from around the world gets tagged and stored in Facebook's giant "<a href="" target="_blank">open graph</a>" database.<br /><br />I... like... something (already exists) or someone.<br />I... am in a relationship with... someone (already exists). <br />I... bought.. something.<br />I... talk to... someone. <br />I... went.... somewhere (think "Check-ins").<br />I... ate...... something.<br />I... made... something.<br />I... work... somewhere (job info).<br />I... have... something.<br />I... saw... something (why did they buy Instagram?). <br /><br /).<br /><br />If you are familiar with the popular game <a href="" target="_blank">The Sims</a> made by <a href="" target="_blank">Will Wright</a> you know how much data about mood, physical status, etc. the game tracks about your characters. Imagine a Sims character generated from all this data that Facebook has collected about you - I wonder what that would look like!<br /><br /><br />If Facebook gets this right, $16 billion or more will help them <b>turn open graph into the world's largest centralized repository of data about human activity</b> of many different types <b>that has ever existed,</b> outside of perhaps technologies used by governments through intelligence gathering organizations.<br /><br />Yes, there are privacy concerns. Yes, the government wants to get at this data.<br /><br />But there is more. Just because the <b>open graph database</b> exists, does not mean that everything needs to exist within it.<br /><br />Instead, Facebook could work with partner companies to build private, closed databases, where you are identified by your Facebook ID, but the Facebook application itself has NO access to the data inside these repositories.<br /><br /><b>Facebook wants to get into business organizations</b>. They are going to use much of this IPO money to try (or so I believe).<br /><br /><b>Facebook wants to be an integral part of your life, </b? <br /><br />Lately, there has been much discussion about electronic health records, and centralized healthcare systems, and health information exchange in the United States. <br /><br />Do you think such data should be on Facebook? How about your banking data? How about any private data at all?<br /><br /><b>Of course not!</b> But, that's the thing people don't realize - the data doesn't HAVE to live inside of Facebook. The data can be stored securely away, anywhere else, but <b>access and identification</b> of who you are could be done through Facebook.<br /><br /><b>Facebook is a platform to build on</b>. I believe this is what they will push the company forward with. Use Facebook as a platform to build around.<br /><br />Look at their <a href="" target="_blank">recent announcement</a> of the "App Center/Store". Look at the tools that Facebook offers for <a href="" target="_blank">building applications</a> that live inside of Facebook?<br /><br />There is value here! It's crazy, scary and powerful.<br /><br />Or is there value? And is it crazy, scary, powerful?<br /><br />One thing is for sure - they didn't get to where they are without having succeeding at implementing many major innovations and new ideas, and I would wager a guess that they will continue pushing hard and growing out however they can.<br /><br /><img src="" height="1" width="1" alt=""/>SixDimensionalArray Zywall - VPN issues, firmware updates, problems, review<u><b>Firmware updates for Zywall products</b></u> <br /><br />If you happen to have any Zyxel <a href="" target="_blank">Zywall products</a> (such as the USG 50, USG 200, etc.), keep your eyes out for firmware updates. It is clear to me that they are consistently having to update their firmware, and there have been a lot of changes recently. For the USG 200 product alone, there have been at least two firmware updates in only 3 months!<br /><br />Zyxel Zywall USG 200 firmware updates - <a href=""></a><br /><br />All Zyxel support downloads - "Download Library" - <a href=""></a>.<br /><br />It would be nice if they sent out an email notification every time their was a new firmware release!<br /><br /><u><b>Nailed-up VPNs</b></u><br /><br />Regarding setting up VPNs on Zywall USG products, if you have a problem where your VPN connections do not restore automatically after a reboot, you may consider activating the "nailed-up" option in the Advanced Settings section of the VPN Connection tab for that particular connection.<br /><br />According to a <a href="" target="_blank">knowledge base article on the Zyxel site (article 1633)</a>, "nailed-up" as applied to PPP connections means: <br /><div style="text-align: left;"><blockquote class="tr_bq"><i><span style="font-size: small;">"A nailed-up connection is always up regardless of there is traffic transmitted. The ZyXEL Device performs two actions when the nailed-up feature is enabled. First, connection idle timeout is disabled. Second, the ZyXEL Device will try to bring up the connection when turned on and whenever the connection is down. A nailed-up connection can be very expensive for some reasons. It is always a better idea not to enable a nailed-up connection unless the broadband service provider offers flat-rate service or you need a constant connection and the cost is not a concern. You can enable/disable WAN connection nail-up in SMT menu 11 or the web GUI."</span></i></blockquote></div><span style="font-size: small;">I did not find useful documentation, but I believe the term "nailed up" as applies to VPN has the same meaning - so in case you want your VPN connections to dial automatically after a reboot, consider this setting! So far, I have not seen any negative affect for having used it. If you were paying for metered/limited bandwidth</span> and leave your VPN "nailed up", though, you may have consequences from your connection being in use at all times, so do be careful. That said, for most people, site-to-site VPN connections are meant to be up constantly, so it isn't a problem.<br /><br /><u><b>Quick review of Zyxel as a vendor</b></u><br /><br />Overall, having worked with a lot of vendor's firewalls over the years (<a href="" target="_blank">Sonicwall</a>, <a href="" target="_blank">Watchguard</a>, <a href="" target="_blank">Zyxel</a> and <a href="" target="_blank">Cisco</a> to name a few), I have to say, the Zyxel stuff is affordable, but not entirely intuitive and somewhat roughly documented. That said, their tech support seems pretty responsive and helpful, as long as you only need their help during business hours (8-5PM Pacific Time). <br /> <span style="font-size: x-small;"> </span><br />Phew, it's been a while since I had time to post any tips!<img src="" height="1" width="1" alt=""/>SixDimensionalArray Firewall blocked netsession_win.exe - Akamai NetSession Installware.<br /><br />Of course, we all know <a href="" target="_blank">Akamai</a>, one of the leading providers of content caching and distribution networks among other things.<br /><br />Upon further review, it appears that the <a href="" target="_blank">Akamai NetSession Interface</a> is some sort of download accelerator/caching tool, but it is not clear how the user got that particular tool on their system. It does have an entry in the Windows control panel with some administrative tools. <br /><br />This app appears to be <a href="" target="_blank">installware</a> - i.e. a program, not necessarily malicious (but annoying) that was installed without the user's knowledge or direct consent, but included with some other download or via an automatic download mechanism. There is a <a href="" target="_blank">long list of companies</a> that appear to use this tool.<br /><br /!<br /><br />The <a href="" target="_blank">Akamai website also includes another uninstallation method</a>:<br /><br />"<b>How do I uninstall the Akamai NetSession Interface?</b><br /"<br /><br /><b>UPDATE 11/8/2011:</b> <a href="" target="_blank">list of companies</a>) do often bundle the installer.<br /><br /.". <br /><br /. <br /><br />It seems almost a guarantee that something automatically triggered the installer, whether it was timed, an update of some sort, or some other process.<img src="" height="1" width="1" alt=""/>SixDimensionalArray: The "Cloud" and BandwidthThis post is the start of a new type of post in which I will make predictions for the future of IT. These are purely my opinion and should be taken as such - don't bet the farm on any of them! <br /><br />To start it off, here's my first prediction:<br /><br /><i>When national bandwidth infrastructure improves drastically, the "cloud" finally has a chance to be relevant for storing/retrieving large data efficiently.</i><br /><br />By the way, by "cloud", I do mean a private cloud, or a public/shared cloud. <br /><br />What I mean by this is, right now, so many of us are limited to a few Mbps (usually 3 or less) download speed. For businesses with symmetric connections, upload speed is roughly the same (3 or less), and for residential areas, unless you can get FIOS or another high speed connection, usually upload is below 1Mbps.<br /><br />I predict that when we have at least 100Mbps, if not much greater (such as 1Gbps) to the home, and to the business, and transferring bits becomes cheaper and more affordable, having all our data (large and small) live in the cloud can finally be more feasible. The only problem standing in the way of the cloud at that point is security. No predictions about that!<br /><br />Think about trying to back up your servers from your datacenter to a cloud location or off-site. Got enough bandwidth? How many hours does it take? How about backing up your own personal photo collections off-site? How many hours does it take to get it to a service like Mozy or Carbonite?<br /><br />These types of services, among others, such as streaming audio and video, will only improve with more bandwidth. And maybe, just maybe, with more bandwidth, the idea that my operating system lives mostly in the cloud becomes ever closer a reality. Should we want that? Again, that's not a prediction for today!<br /><br />We're hungry for bandwidth. Who will feed the need?<img src="" height="1" width="1" alt=""/>SixDimensionalArray 5.5 and NX Server >3.4If you've set up NX Server on CentOS 5.5 by downloading it directly from the NoMachine website, and you try to connect to your newly minted install using SSH and a DSA key, and you encounter a problem where the server gives you a message something like this:<br /><br /><i>NX> 203 NXSSH running with pid: NNNN <br />NX> 285 Enabling check on switch command <br />NX> 285 Enabling skip of SSH config files <br />NX> 285 Setting the preferred NX options </i><br /><i>NX> 200 Connected to address: NNN.NNN.NNN.NNN on port: 22 <br />NX> 202 Authenticating user: nx <br />NX> 208 Using auth method: publickey </i><br /><i>NX> 204 Authentication failed.</i><br /><br />Check to make sure that you have synchronized the name of the authorized_keys file in the NX server.cfg, node.cfg and your sshd_config files. I discovered that server.cfg and node.cfg were looking for authorized_keys2 and the sshd_config was looking for authorized_keys. Match those values and restart the server (as described <a href="">here</a>) and you should have better luck. Apparently, authorized_keys2 was deprecated a long time ago.<br /><br />Other useful links:<br /><ul><li><a href="">NX Server Administrator's Guide</a></li></ul><img src="" height="1" width="1" alt=""/>SixDimensionalArray Search and Network File/Share Locations<b>NOTE:</b> <i>If you're finding this post, and having a terrible time setting up Windows Search features in Windows 7, particularly to index network file locations, I feel your pain!</i><br /><br />Windows Desktop Search 4.0 in Windows XP was, in my opinion, a legitimate competitor to Google Desktop Search. Using this product, you can easily build search indexes for your desktop as well as networked file locations.<br /><br />In Windows 7, Microsoft (in its infinite wisdom) chose to more deeply integrate this search technology with the core of the operating system. It works, <i>kind of</i>, although it seems to be lacking in documentation and strangely more complex, even though I think MS intended it to be easier, actually.<br /><br /.<br /><br />First, I enabled the Windows Search Service Role in File Services on the Windows 2008 R2 file server. This caused the network file shares to be indexed.<br /><br />Second, I created a library on the Windows 7 desktop whose sole purpose was to be a pointer to the network file share. This key step seems to be what "simplified" and made searching the network file share much easier.<br /><br />With these two steps completed, I could now search both the local machine (after including relevant locations in the index), and the network file shares, and the results showed in the Start Menu's search area.<br /><br /.<br /><br />Now to figure out how to automate the creation of the Windows 7 library creation via Group Policy..... if possible!<img src="" height="1" width="1" alt=""/>SixDimensionalArray Policy to Turn Off DisplayIf you want to turn off the user's monitor/display to save power and extend monitor life using Group Policy & Active Directory (Server 2008 R2, probably works in earlier versions as well), consider the following group policy setting:<br /><br /><ul><li>Computer Configuration > Policies > Administrative Templates > System > Power Management > Video and Display Settings > Turn Off the Display (Plugged In or On Battery)</li></ul>Just set a timeout value, and go!<br /><br /.<br /><br />The group policy wording states: "<i.</i>". It's confusing because they say "Turn Off" is "enabled" - but I think what they mean is,<b> the adaptive timeout for turning off the display policy is enabled or disabled.</b> That makes more sense to me at least. I will test it and see what happens.<br /><br />There is also <a href="">nice post over on the Microsoft Directory Services team blog</a> that discusses other power management options with AD.<img src="" height="1" width="1" alt=""/>SixDimensionalArray out and ping someone!If you are testing network connectivity, packet loss, latency, etc. and you need an external server to ping (ICMP echo request), consider using the IP <b><i>8.8.8.8</i></b>. This IP represents <a href="">Google's public DNS</a>.<br /><br />Funny, in the past, I remember using <b><i>4.2.2.2</i></b> and never knowing exactly who it was that ran that IP. Turns out, as with most things strange and interesting on the net, <a href="">there is a small story behind that IP</a>. Kudos to <a href="">Tummy.com</a> for posting it!<img src="" height="1" width="1" alt=""/>SixDimensionalArray a Network Device by MAC/Hardware AddressIf you are trying to identify a network device, and all you have is a MAC address for that device, you might try identifying which hardware vendor the MAC address range is associated with. For example, I had a device connected to my wireless which had a MAC address starting with the prefix 30:69:4B. I could not identify exactly which device it was, although it was most likely a valid one.<br /><br />The device also did not show up in other tools, such as the Windows command line arp -a command, <a href="">Angry IP Scanner</a> or <a href="">Colasoft's MAC Scanner</a>, so identifying it that way was not possible. ARP stands for <a href="">Address Resolution Protocol</a>, i.e. the protocol used to determine MAC addresses from IP addresses so that transmission at the link layer can occur. You didn't forget your <a href="">OSI model</a> did you? :)<br /><br />Using <a href=""></a> I was able to determine the device was a coworker's Blackberry phone which was associating with the wireless access point (AP). The MAC prefix is owned by the manufacturer RIM/Research In Motion. Another similar search site is <a href="">Vendor/Ethernet/Bluetooth MAC Address Lookup and Search</a>, although I didn't find what I was looking for on that one.<img src="" height="1" width="1" alt=""/>SixDimensionalArray Remote Desktop in Server 2008 R2 Active Directory Group PolicyIf you need to configure Windows Server 2008 R2 Active Directory group policy so that Remote Desktop is enabled on a domain, note that it is no longer referred to as Terminal Services in the Group Policy Management interfaces, it was renamed Remote Desktop Services.<br /><br />Two group policy changes should do the trick, followed by a gpupdate /force or waiting for the policy to be distributed to domain members/clients:<br /><ol><li>Computer Configuration > Administrative Templates > Network > Network Connections > Windows Firewall > Domain Profile > Allow inbound Remote Desktop exception. Note that I recommend limiting the IP addresses that have access as explained in the notes of that policy, if possible, as a best practice.</li><li>Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections > Allow users to connect remotely using Remote Desktop Services </li></ol>Now you should be able to remote desktop into any domain member which the policy is applied to! <br /><ol></ol><img src="" height="1" width="1" alt=""/>SixDimensionalArray Profile Migration with USMTI recently had to migrate about 20 domain accounts from a small, dying, badly designed Windows 2000 domain controller to a newly named and properly configured Windows 2008 R2 domain. The domain names had to be different between the two systems, and all the users were using local Windows profiles on XP or Windows 7. The customer didn't want to implement roaming profiles or folder redirection due to concerns about the extra burden on the network. This meant I had to migrate the local user profiles from one domain to another.<br /><br />I tried using several different tools, including: Microsoft's <a href="">Active Directory Migration Tool version 3.2</a> (which is probably best for Server 2003 -> Server 2008 and above migrations in larger enterprises, where both AD servers are functioning normally and the old one still is online), evaluated the built in Easy Transfer or the Files and Settings Transfer Wizards, and ended up with the User State Migration Tool (USMT).<br /><br />USMT can basically copy the files from one profile to another, to a local disk or to a network share. This had to be done in order to change the association of the profile, unfortunately, from the old domain to the new one. <br /><br />Here are some useful download links:<br /><ul><li><a href="">USMT 3.01 - for Windows XP and Vista migrations</a></li><li><a href="">USMT 4 (part of the Automated Installation Kit - AIK for Windows 7)</a> - by the way, this one is an annoying 1.7 GB ISO file. There is a repackaged version provided from <a href="">wintools.com.au</a>, but it may not be always up-to-date. If you download the ISO, you then have to install it (or maybe you can extract directly from a CAB file? I didn't try that).</li><li><a href="">Hotfix for USMT 4</a> which migrates Outlook 2010 settings (oops, they left that part out of the RTM version!). Also, USMT DOES migrate Outlook files - but it screwed up a lot of user settings, including email rules and Outlook Address books which required manual reconfiguration.</li></ul><i>I will post a follow up with more instructions on how I applied these tools soon</i>, and the issues/workarounds I encountered. I highly encourage testing the USMT tools, as my migrations have worked but things didn't exactly go flawlessly (the most problems revolved around Outlook settings).<br /><br />I don't know how the heck a large organization would use any of these tools to successfully migrate like I did without some degree of manual intervention. What a headache!<br /><br />Also, quick tip - for small Active Directory domains that might grow up into larger ones some day, the best internal Active Directory domain naming scheme is an unused corporate public domain or subdomain that you own and will keep for a very long time. For example, I might own <i>mycompany.com</i> and <i>mycompany.net</i>. Therefore, you might use <i>corp.mycompany.com</i> for your internal domain name, or <i>mycompany.net</i> if you are not using it for anything else. I <b>DO NOT</b> recommend using anything else, including <i>.lan</i> or <i>.local</i>.<img src="" height="1" width="1" alt=""/>SixDimensionalArray Post!Greetings, and welcome to No Joke IT. I have a favorite saying in the information technology business - that IT vendors build or invent wonderful technologies in their "infinite wisdom", and then don't bother to document them for the rest of us. Hey, we all do it! Anyway, I will try to share tips and hints that crop up as I go along, so that other IT folks searching (just like I am) will find the solution to that odd problem that isn't documented anywhere else.<br /><br />It's always great to get first post, even on your own blog! :)<img src="" height="1" width="1" alt=""/>SixDimensionalArray | http://feeds.feedburner.com/NoJokeIt | CC-MAIN-2018-47 | en | refinedweb |
Framework for combining different python interpeters
Project description
Multi-intepreter execution environment
cpy2py allows multiple interpreters to act as one application. In parallel to the main interpreter, other interpreters are run to execute parts of the application.
Table of Contents
Quick Guide
Twinterpreters and TwinMasters
A twinterpreter is simply another interpreter running as a subprocess - with some glue and magic sprinkled on it. You can control and create them using a cpy2py.TwinMaster.
You should only ever worry about two methods: TwinMaster.start launches the twinterpreter. TwinMaster.execute executes an arbitrary callable in the twinterpreter.
from cpy2py import TwinMaster from my_module import my_function twinterpreter = TwinMaster('pypy') twinterpreter.start() if __name__ == "__main__": twinterpreter.execute(my_function, 1, 2, 3, 'ka-pow!', doctor="who?")
TwinObjects
The real power of cpy2py are Twins - objects living in one twinterpreter and being represented by proxies in any other interpeter. Using twins, you can seamlessly split your application across multiple twinterpreters.
You create twins by inheriting from cpy2py.TwinObject instead of object and setting a __twin_id__. That’s it.
from cpy2py import TwinObject class SuperComputer(TwinObject): __twin_id__ = 'pypy' # makes class native to pypy twinterpeter def megaloop(self, x, y): return sum(a+b for a in range(x) for b in range(y)) class CWrapper(TwinObject): __twin_id__ = 'python' # makes class native to python twinterpeter def callme(self, who, what="buy milk"): return some_clib.c_fcn_cll_cplx_xmpl(who, what)
If you don’t set __twin_id__ on a child of cpy2py.TwinObject, the class will always be native to the main interpreter. Handy for all the stuff that’s needed everywhere but really doesn’t belong anywhere.
TwinFunctions
Instead of full-fletched objects, you can also define functions as twins. These are automatically called in their native twinterpreter.
from cpy2py import twinfunction @twinfunction('pypy') def superlooper(count=1000, add=3, start=0): for _ in range(count): start += add return add print(superlooper(int(1E6), 1))
Debugging
The core of cpy2py supports some logging facilities. All such loggers are children of the __cpy2py__ logger. By default, no active handlers are attached and propagation is disabled. If needed, you reconfigure them like any other logging logger to suit your needs. Note that if python is run with the -O flag, several logging calls are skipped entirely to improve performance.
For small scale debugging, one can set the environment variable CPY2PY_DEBUG. If it is defined and not empty, logging output is written to stderr. In addition, if it names a valid logging level, that logging level is used.
Note that loggers are meant for development and only address the internal state. Your application should not depend on this information. Unless cpy2py misbehaves (or you suspect it to), ignore its logging.
Current Status
CPy2Py is stable at its core, but still has some features missing. What’s there is more than sufficient to significantly enhance your applications.
Features
- Seamlessly integrates into python code.
- All internals are wrapped away behind the plain python interfaces. No eval, exec or code strings required.
- Lightweight hooks optimize objects and functions for use with cpy2py.
- If needed, any pickle’able callable can be dispatched to another interpreter.
- Objects natively integrate with twinterpreters.
- Objects can live in a specific interpreter, with proxies replacing them in others. Classes and instances transparently interact with cpy2py in the background.
- Both class and instance attributes work as expected. Methods, classmethods, staticmethods and descriptors are fully supported.
- Inheritance is fully supported, including multiple inheritance. Affiliation to interpreters can be changed freely.
- A wide range of interpeters is supported.
- Pure python, no dependencies means perfect portability.
- Any interpreter compatible with python 2.6 to 3.7 is supported.
- Virtual Environments work out of the box.
- Tested with cpython and pypy, on Linux and Mac OSX.
Gotchas/Limitations
- Importing functions and classes from __main__ may fail if the module can only be imported via its path.
- By default, calls across interpreters are blocking and not threadsafe. If recursion switches between twinterpreters, cpy2py.TwinMaster must use the 'async' kernel.
- Module level settings are not synchronized. For example, configuration of logging is not applied to twinterpreters. Use cpy2py.twinterpreter.group_state.TwinGroupState for initialisation, write modules aware of twinterpreters, or use immutable module-level initializers.
- A weakref to objects only takes local references into account, not cross-interpreter references.
Performance
Dispatching to another twinterpreter adds about 200 - 300 us of overhead. This is mainly due to serialization for the IPC between the interpreters. Using the asynchronous kernel, there is an additional overhead for creating threads.
In general, twinterpreters get faster the shorter they have to wait between requests. pypy twinterpreters benefit from a high number of requests, allowing their JIT to warm up. Python3 connections are the fastest, provided that both twinterpreters support pickle protocol 4.
A notable fraction of time is spent on debugging output via logging. Even if no output is produced, cpy2py is optimized to a point where the logging call is noticeable. If needed, any per-call logging can be disabled by running python in optimized mode. See the python documentation on the -O option and PYTHONOPTIMIZE environment variable.
You can benchmark the overhead yourself using the cpy2py_benchmark tools.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/cpy2py/ | CC-MAIN-2018-47 | en | refinedweb |
OLED Interfaced to NodeMCU
2,596
41
1
Featured
Intro: OLED Interfaced to NodeMCU
OLED!!).
Step 1: Need to Be Collected
Here is the list of components required to get started with the Instructable,
Hardware Components
- NodeMCU
- 0.96” SSD1306 OLED
- Bread Board
- Jumper Wires
- Micro USB Cable
Software Components
- Arduino IDE
Step 2: Connections
Create an instance for the SSD1306 OLED display in SPI mode.
Connection scheme:
1. CS - D1
2. DC - D2
3. Reset - D0
4. SDA - D4
5. SCL - D3
6. VDD - 3.3v
7. GND - GND
Check the schematic and pin configuration to make connections.
Step 3: Library Download
Before you download library you need Arduino IDE to get started.
To download Arduino IDE and for NodeMCU setup, you can check my previous instructacle.
Interface Servo Motor with NodeMCU
Here’s the library you need for this project:
OLED can be easily coded with a library file called Ug8lib.
Ug8lib is a graphics library with support for many different monochrome displays.
The library file can be downloaded by following steps
- Go to Sketch
- Include Library
- Manage Library
- Download U8glib library file.
Step 4: Time to Play With OLED
CODE
#include <U8glib.h> U8GLIB_SSD1306_128X64 u8g(5, 4, 16, 2, 0);
void setup() { /* nothing to do here */ }
void loop() { u8g.firstPage(); /* Keep looping until finished drawing screen */ do { u8g.setFont(u8g_font_osb18); u8g.drawStr(30, 20, "Hello"); //(horizontal spacing,vertical spacing,"string") u8g.drawStr(20, 50, "Makers!"); } while(u8g.nextPage()); }
Download the "OLED 5: Output
Yipeeee!!
That's all makers!
I hope you found this instructable most useful. You have successfully completed one more NodeMCU Instructable.
Stay Tuned for more Projects!
You can contact me by leaving a comment. If you like this instructable probably you might like my next ones.
Discussions
1 year ago
Great work ...
what if OLED done not have CS pin ? these OLED displays are common with six pins
gnd , vcc , clk , data_in , rst , DC | https://www.instructables.com/id/OLED-Interfaced-to-NodeMCU/ | CC-MAIN-2018-47 | en | refinedweb |
In this great age of open standards, virtually every company is tempted to re-create the benefits of closed and proprietary systems through private extension of the open standards. Even the Internet's domain naming system is not immune. Numerous private companies have sought to capitalize on ICANN's inability to quickly agree on an expansion of the Internet's namespace.
No credit card required | https://www.oreilly.com/library/view/ip-addressing-fundamentals/1587050676/1587050676_ch08lev1sec3.html | CC-MAIN-2018-47 | en | refinedweb |
14.21 VALUATION OF COCA-COLA USING MARKET MULTIPLES. The Coca-Cola Company is a global soft-drink beverage company (ticker symbol = KO) that is a primary and direct competitor with PepsiCo. The data in Chapter 12’s Exhibits 12.13–12.15 include the actual amounts for 2008 and projected amounts for Year +1 to Year +6 for the income statements, balance sheets, and statements of cash flows for Coca- Cola (in millions).
The market equity beta for Coca-Cola at the end of 2008 is 0.61. Assume that the risk-free interest rate is 4.0 percent and the market risk premium is 6.0 percent. Coca-Cola has 2,312 million shares outstanding at the end of 2008, when Coca-Cola’s share price was $44.42.
In this problem, we use these actual and projected financial statement data to apply the techniques in Chapter 14 to compute Coca-Cola’s required rate of return on equity and share value based on the value-to-book valuation model. We also compare our value-to-book ratio
estimate to Coca-Cola’s market-to-book ratio at the end of 2008 to determine an invest- ment recommendation. In addition, we compute the value-earnings and price-earnings ratios and the price differential and we reverse-engineer Coca-Cola’s share price as of the end of 2008.
a. Use the CAPM to compute the required rate of return on common equity capital for Coca-Cola.
b. Using the projected financial statements in Chapter 12’s Exhibits 12.13–12.15, derive the projected residual ROCE (return on common shareholders’ equity) for Coca- Cola for Years +1 through +5.
c. Assume that the steady-state long-run growth rate will be 3 percent in Year +6 and beyond. Project that the Year +5 income statement and balance sheet amounts will grow by 3 percent in Year +6; then derive the projected residual ROCE for Year +6 for Coca-Cola.
d. Using the required rate of return on common equity from Part a as a discount rate, compute the sum of the present value of residual ROCE for Coca-Cola for Years +1 through +5.
e. Using the required rate of return on common equity from Part a as a discount rate and the long-run growth rate from Part c, compute the continuing value of Coca- Cola as of the start of Year +6 based on Coca-Cola’s continuing residual ROCE in Year +6 and beyond. After computing continuing value as of the start of Year +6, discount it to present value at the start of Year +1.
f. Compute Coca-Cola’s value-to-book ratio as of the end of 2008 with the following three steps: (1) Compute the total sum of the present value of all future residual ROCE (from Parts d and e). (2) To the total from (1), add 1 (representing the book value of equity as of the beginning of the valuation as of the end of 2008). (3) Adjust the total sum from (2) using the midyear discounting adjustment factor.
g. Compute Coca-Cola’s market-to-book ratio as of the end of 2008. Compare the value-to-book ratio to the market-to-book ratio. What investment decision does the comparison suggest? What does the comparison suggest regarding the pricing of Coca-Cola shares in the market: underpriced, overpriced, or fairly priced?
h. Use the value-to-book ratio to project the value of a share of common equity in Coca-Cola.
i. If you computed Coca-Cola’s common equity share value using the free cash flows to common equity valuation approach in Problem 12.16 in Chapter 12 and/or the residual income valuation approach in Problem 13.19 in Chapter 13, compare the value estimate you obtained in those problems with the estimate you obtained in this case. You should obtain the same value estimates under all three approaches. If you have not yet worked those problems, you would benefit from doing so now.
Earnings Ratio, Price Differentials, and Reverse Engineering
j. Use the forecast data for Year +1 to project Year +1 earnings per share. To do so, divide the projection of Coca-Cola’s comprehensive income available for common shareholders in Year +1 by the number of common shares outstanding at the end of 2008. Using this Year +1 earnings-per-share forecast and using the share value com- puted in Part h, compute Coca-Cola’s value-earnings ratio.
k. Using the Year +1 earnings-per-share forecast from Part j and using the share price at the end of 2008, compute Coca-Cola’s price-earnings ratio. Compare Coca-Cola’s value-earnings ratio with its price-earnings ratio. What investment decision does the comparison suggest? What does the comparison suggest regarding the pricing of Coca-Cola shares in the market: underpriced, overpriced, or fairly priced? Does this comparison lead to the same conclusions you reached when comparing value-to- book ratios with market-to-book ratios in Part g?
l. Compute Coca-Cola’s price differential at the end of 2008. Compute Coca-Cola’s price differential as a percentage of Coca-Cola’s risk-neutral value. What dollar amount and what percentage amount has the market discounted Coca-Cola shares for risk?
m. Reverse-engineer Coca-Cola’s share price at the end of 2008 to solve for the implied expected rate of return. First, assume that value equals price and that the earnings and growth forecasts through Year +6 and beyond are reliable proxies for the mar- ket’s expectations for Coca-Cola. Then solve for the implied expected rate of return (the discount rate) the market has impounded in Coca-Cola’s share price. (Hint: Begin with the forecast and valuation spreadsheet you developed to value Coca-Cola shares. Vary the discount rate until you solve for the discount rate that makes your value estimate exactly equal the end of 2008 market price of $44.42 per share.)
n. Reverse-engineer Coca-Cola’s share price at the end of 2008 to solve for the implied expected long-run growth. First, assume that value equals price and that the earn- ings forecasts through Year +5 are reliable proxies for the market’s expectations for Coca-Cola. Also assume that the discount rate implied by the CAPM (computed in Part a) is a reliable proxy for the market’s expected rate of return. Then solve for the implied expected long-run growth rate the market has impounded in Coca-Cola’s share price. (Hint: Begin with the forecast and valuation spreadsheet you developed to value Coca-Cola shares and use the CAPM discount rate. Set the long-run growth parameter initially to zero. Increase the long-run growth rate until you solve for the growth rate that makes your value estimate exactly equal the end of 2008 market price of $44.42 per share.)
Get this Question solved
Get this solution now
Get notified immediately when an answer to this question is available.
You might
want to try the quicker alternative (needs Monthly Subscription)
No, not that urgent
affecting financial performance. Our Company manages income taxes and financial costs, such as interest income and expense, on a global basis within the Corporate operating segment. We evalu- ate segment performance based on income or loss before income taxes. Below are selected segment data for......
) | https://www.transtutors.com/questions/14-21-valuation-of-coca-cola-using-market-multiples-the-coca-cola-company-is-a-globa-1367587.htm | CC-MAIN-2018-47 | en | refinedweb |
Face hole työt
Saya membutuhkan website baru Membuat dan mendesainnya Toko online Saya menjual produk healthy care, skin care, face and decorative care
Kinect Game worked like hole-in-wall. Player controller character model to pose like the game character. System would access if the pose is passed or not
I would like you to morph a photo of the real face and doll face in order not to produce any morphing artifacts.
We are looking for Ethical Hackers, who can assess the constraint or issue ..
...the router configuration. So You have to forward packets between the VPN server and the clients through a relay server. Some things to consider in this project are TCP/UDP hole punching and the TUN/TAP virtual network interface....
..
.. standard one Calibri or Arial I neeed a bill of course Best regards
I need someone to enhance images, even out face color. Have around 10 images to enhance....
.. to clearly visualize the hole(see attached).
I need a logo design for my company called Black Hole Tv. I have a couple of ideas from some images attached. my idea is to blend the black hole design in the "O" in hole
I need a Object Recognition expert for my current project. If you have knowledge please bid. Details will be shared in message with the selected freelancers.
...system user and the container environment (Linux SE) 3. use of a vulnerability scanner on the IP address of [kirjaudu nähdäksesi URL:n] 4. checking the malware detection (Watering Hole / HIDS) 5. segmentation of the application according to the DMZ scheme per namespace 6. implement multi-factor authentication when accessing the servers 7. to provide us with
Apple Android mobile app in augmented reality for the face
...image called [kirjaudu nähdäksesi URL:n]). - The lid (preferably also made out of glass) should be embedded, with a silicon seal. ( see image called: [kirjaudu nähdäksesi URL:n] and [kirjaudu nähdäksesi URL:n]) - The hole must be able to fit a straw and be semicircular in shape, with the flat edge against the edge of the lid (see attached image called [ki...
...>>apache directry listing is on which cause the other users to list files on server via browser like lisabettwä[kirjaudu nähdäksesi URL:n] domain can list the directories on server which is a security hole so, that need to turned off. Mail relaed :- >>I checked for dns records for few domain like [kirjaudu nähdäksesi URL:n] & [kirjaudu nähd...
Need real time multiple face recognition in python with good accuracy using webcam. Should be able to recognize unknown person too.
Please work as written in the attachment, I think it is describing everything properly and more than enough but still if you need any assistance please contact me
1 image only. Remove baby hairs from face and patch the background beside her elbow.
Face recognition for Pose variation based on the attached sheet.
...the selected freelancer. The delivery period is one month from the beginning. PLEASE NOTE THAT MY BUDGET IS FIXED AND BID ONLY IF YOU ACCEPT TO WORK WITH IT. Please read the hole description before biding and start your proposal by “I read and accept to work with your budget” Thank you very much....,
I need White Hat SEO ONLY! Industry : Landscape Country : Malaysia I would like to rank my client's website on the first page of google search. Only white hat methods. ...start of with 3 months and if its working well we will continue working together. Use the code to start your intro and differentiate yourself from spammers: Operation Rabbit Hole. want good face paint ar result by javascript library that must be cover entire face. Candidate must provide one sample. Thanks.
I need a 2D drawing with both autocad and pdf format output. The drawing is a simple one with a rectangle containing rows of elongated holes. This is depicted in the attached photo. All dimensions need to be depicted on the drawing.
You would create a website for us (shop in the China). We are not publishing the hole outline for the website here. We would like to use Django as framwork. We will provide the theme to you. If you are intrested in the project please provide the following informations: I want to discuss with you in detail. | https://www.fi.freelancer.com/job-search/face-hole/ | CC-MAIN-2018-47 | en | refinedweb |
Forest Ecology and Management
- Austin Kennedy
- 3 years ago
- Views:
Transcription
1 Forest Ecology and Management 258 (2009) Contents lists available at ScienceDirect Forest Ecology and Management journal homepage: Selective logging of lowland evergreen rainforests in Chiloé Island, Chile: Effects of changing tree species composition on soil nitrogen transformations Cecilia A. Pérez a, *, Martín R. Carmona b, José M. Fariña a, Juan J. Armesto a,b a Center for Advanced Studies in Ecology & Biodiversity, Departamento de Ecología, Pontificia Universidad Católica de Chile, Alameda 340, Santiago, Chile b Instituto de Ecología y Biodiversidad, Casilla 653, Santiago, Chile ARTICLE INFO ABSTRACT Article history: Received 28 February 2009 Received in revised form 8 July 2009 Accepted 10 July 2009 Keywords: Non-symbiotic nitrogen fixation Net N mineralization Denitrification Laureliopsis philippiana C/N ratios Lowland evergreen rainforests in southern Chile growing on highly productive soils and accessible sites have been subjected to traditional and industrial logging of valuable timber trees. Old-growth rain forests in this area are characterized by highly conservative N cycles, which results in an efficient N use of ecosystems. We hypothesize that different logging practices, by changing forest structure and species composition, can alter the quantity and quality (i.e. C/N ratio) of litterfall and soil organic matter and soil microbial processes that determine N storage and availability. To test this hypothesis we investigated chemical properties, microbial N transformations, N fluxes and N storage in soils of lowland evergreen rainforests of Chiloé Island after 10 years since industrial selective logging (ISL) and in stands subjected to traditional selective logging (TSL) by landowners in small properties. We compared them to reference unlogged old-growth stands (OG) in the same area. Tree basal area was more reduced in the stands subjected to ISL than to TSL. Litterfall inputs were similar in both logging treatments as in OG stands. This was due to greater biomass of understory species after logging. In TSL understory tree species determined a higher litterfall C/N ratio than ISL. We found higher soil N availability and content of base cations in surface soils of logged forests than in OG. The litter horizon of OG forest had significantly higher rates of non-symbiotic N fixation than logged forests. In the ISL treatment there was a trend toward increasing soil denitrification and significantly higher NO 3 N/N t ratio in spring waters, which led to a stronger d 15 N signal in surface and deep soils. We conclude that massive understory occupation by the shade-intolerant native bamboo Chusquea quila in ISL led to enhanced litter quality (lower C/N ratios) relaxing the tightness of the N cycle, which increased soil N availability leading to a higher proportion of nitrate in spring waters and higher gaseous N losses. In contrast, under TSL a higher litterfall C/N ratio slowed decomposition and net N mineralization rates thus reducing the chances for N losses, and enhancing C and N storage in soil. We suggest that sustainable logging practices in these rain forests should be based on lower rates of canopy removal to enhance colonization of the understory by shadetolerant trees, which are associated with a more efficient N cycle. ß 2009 Elsevier B.V. All rights reserved. 1. Introduction Old-growth temperate rainforests of southern South America are strongly nitrogen-limited. The nutrients are efficiently retained in soils and above ground biomass reflecting that theses ecosystems are very conservative in nutrient use (Hedin et al., 1995; Pérez et al., 1998; Vann et al., 2002; Perakis and Hedin, 2001; Satti et al., 2003; Diehl et al., 2008). Limited nitrogen (N) inputs to these southern forest ecosystems derive primarily from nonsymbiotic nitrogen fixation in forest soils, at the same time low rates of internal nitrogen cycling and denitrification have also been reported (Pérez et al., 2003a). * Corresponding author. address: (C.A. Pérez). Logging practices that alter forest structure and tree species composition can be detrimental to ecosystem functions, especially those that depend on the chemical quality (e.g. C/N ratio) and quantity of organic matter entering the soil. Processes that are highly sensitive to the chemical quality and quantity of litter are those controlled by heterotrophic soil bacteria, responsible for N inputs and transformations in soils; e.g. non-symbiotic N fixation, denitrification, and net N mineralization and nitrification. In fact, more than 99.9% of total nitrification came from soil organic matter in a Chilean Andisol (Rütting et al., 2008). In northern temperate and boreal forests, logging often increases net N mineralization rates, soil N availability (Reynolds et al., 2000; Thibodeau et al., 2000; Hope et al., 2003; Lindo and Visser, 2003; Inagaki et al., 2008) and litter decomposition rates (Prescott, 1997; Brais et al., 2002). Such effects are generally associated with increases in soil temperature and enhanced soil moisture as a result of tree /$ see front matter ß 2009 Elsevier B.V. All rights reserved. doi: /j.foreco
2 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) removal (Reynolds et al., 2000; Thibodeau et al., 2000; Heithecker and Halpern, 2006). However, other studies have found limited or no effects of selective logging on soil N transformations and litter decomposition (Berg and Edmonds, 1999; Brais et al., 2002; Kranabetter and Coates, 2004; Westbrook et al., 2006; Idol et al., 2006; Jerabkova et al., 2006) and no effects of logging on soil carbon and nitrogen storage (Johnson and Curtis, 2001). These conflictive results may be due to differences in site factors among the studied ecosystems, such as climate, vegetation, time since logging disturbance, type of machinery used in the selective logging and land use history. Changes in N availability following disturbance can alter other microbial processes in ecosystems such as non-symbiotic N fixation and denitrification, but there is less documentation of the effect of logging on these ecosystems processes (but see Shaffer et al., 2000; Ballard, 2000; Griffiths and Swanson, 2001). Most studies of biogeochemistry in southern Chilean temperate forests have been conducted in montane rain forests. Lowland primary rain forests developed on highly productive, glacial soils are disappearing much faster than higher elevation forests, however, due to logging, fire and land use changes, especially in the last decades (Wilson and Armesto, 1996; Echeverría et al., 2007). Nowadays lowland primary rain forests in Chiloé Island occupy less than one-third of its original distribution. Chile belongs to the group of countries that have increased its overall deforestation rate from 1.02% during 1980s to 1.76% during the 1990s (Jha and Bawa, 2006), which has greatly altered the landscape of this temperate forest region. It has been estimated that only 5% of logging of native forests is based on controlled silvicultural practices (Lara, 1996). Depending on logging intensity, selective logging scenarios can substantially alter forest structure and tree species composition (Rüger et al., 2007), mainly because of the broad diversity of light requirements of different timber species (Donoso et al., 1999; Gutierrez et al., 2004; Figueroa and Lusk, 2001). The main hypothesis of this work was that the removal of tree biomass by logging practices, generally consisting of selectively removing valuable timber species, should alter N transformations mediated by soil heterotrophic bacteria, which are highly dependent of organic matter quality in unpolluted forests. To test this hypothesis, we compared nitrogen cycling in a forest stand affected by industrial selective logging of trees for timber production (65% of the canopy removed 10 years ago), with a stand subjected to traditional selective logging by local people (continuously harvested at lower rate) and a reference unlogged, old-growth stand under similar climate and soils. For this purpose, we measured the following responses to logging treatments: (1) tree species composition and cover, (2) chemical quality (e.g. C/N ratio) and quantity of litterfall, (3) N return via litterfall from vegetation to soil, (4) chemical properties of soils and spring waters, (5) soil microbial N transformations (e.g. non-symbiotic nitrogen fixation, net N mineralization and denitrification), (6) litter decomposition rates, and (7) microclimate. 2. Materials and methods 2.1. Study sites Study sites were located in Melleico ( S, W), 12 km west from Chonchi, Isla Grande de Chiloé, Chile (Fig. 1). Forest type in the study area is evergreen Valdivian rain forest dominated by broad-leaved tree species, such as Laureliopsis philippiana (Monimiaceae) and different Myrtaceae species (Armesto and Figueroa, 1987). Prevailing climate is wet-temperate with a strong oceanic influence (Di Castri and Hajek, 1976). Meteorological records (4 years) at Senda Darwin Biological Station, located about 100 km north of the study site, indicate an annual rainfall of 2090 mm and a mean annual temperature of 12 8C. Maximum monthly temperatures (January) are 16 8C and minimum monthly temperatures (July August) are 5 8C. Rainfall occurs throughout the year, but 64% of the precipitation falls between April (austral fall) and September (austral spring). The forests studied are situated at the foothills of the Coastal Range at ca m above sea level. A full description of the flora, vegetation structure and dynamics is provided by Gutierrez et al. (2009) and Pérez et al. (in press). Within an area of 2 km 2 (Fig. 1), relatively homogeneous regarding topography and soils, we selected forest stands subjected to two logging treatments: (1) forests continuously logged by small landowners for limited timber extraction and Fig. 1. Map of the study area in central Chiloé Island. Three reference spring waters in the OG forest: bottom left, three reference spring waters in ISL: bottom right, and three reference spring waters and plots in the TSL in the top. The orthophoto was taken in year 1993, so the ISL was still not applied.
3 1662 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) harvest of firewood, the so-called traditional selective logging (TSL) and (2) forests industrially logged 10 years ago to extract timber for the production of wooden panels; known as industrial selective logging (ISL). In the second treatment, individuals >50 cm diameter at breast height (dbh) of the valuable timber species L. philippiana were extracted, leaving behind about 35% of the original mixed species canopy cover. An unlogged stand of oldgrowth forest (OG) ca. 300 years old was sampled as control, characterized by a fairly continuous canopy and regeneration dynamics associated with small tree-fall gaps. Estimates of canopy cover, with a spherical densitometer placed 1.30 m above the ground, indicated a 95.2% ground cover for the control forest, 94.4% for the stand subjected to ISL and 94% for the forest with TSL. Similar cover values among silvicultural treatments obtained with this method is due to the abundant colonization of the understory by species reaching more than 1.30 m height, such as the native bamboo Chusquea quila in ISL and abundant regeneration of shadetolerant Myrtaceae tree species in the forest under TSL after opening of the canopy Vegetation, soil and litter sampling Six permanent plots (50 m 20 m) were set up in the study area; two in the OG forest, one in the ISL and three in the TSL. Plots were located at the centre of each forest stand (at least 200 m from forest margins) to eliminate the edge effect. Between year 2002 and 2005, all trees with a trunk diameter of >5 cm at breast height (1.3 m) and rooted within the plots were identified, tagged and their diameter at breast height (dbh) measured to the nearest centimetre. Within the OG and logged forested watersheds, three spring waters were selected for sampling water chemistry and as references for soil sampling. Springs were located about m from each other. In each spring, a first sample point was located 12 m away and perpendicular to the watercourse; the second sample point was located 12 m in the opposite direction of the first point. In total there were six soil-sampling points in each silvicultural treatment. Because the impacts of TSL on forests were more heterogeneous, three small watersheds were selected for sampling (Fig. 1), each of them had a spring crossing the forest. In each watershed a transect line was laid down across the mid point of the 0.1 ha permanent plot, selecting six sampling points, separated by about 12 m each other. In total there were 18 sampling points in the TSL. Litterfall was collected in 0.1 m 2 surface buckets covered with a nylon net, mesh size of 2 mm, one trap per sampling point. Litter samples were retrieved periodically and taken to the lab to obtain dry weights. The litter of three out of six traps per watershed was sorted by tree species and other components (leaves and fine woody debris) in each season and dry weighted. Litter collected was ground for the determination of total C and N. N return to the forest floor via litterfall was obtained by multiplying litter biomass input per season by their total N concentration. Each of four seasons (spring, summer, fall and winter), from April 2005 to February 2008, mineral soil (A h : 0 10 cm, B v : cm) and litter samples (O l horizon), were collected from the sample points in each forest stand or watershed and used for in situ experiments to assess N fluxes, and soil chemical properties as described below Litter decomposition experiment The litterbag approach (Singh and Gupta, 1977) was used to estimate mass losses of fresh leaf litter through time in each plot. A 2-mm mesh size was used to allow the access of most of the mesofauna and all of the microfauna of decomposers. In August 2005, decomposition bags were filled with 5 g of dry material from the O l horizon consisting in recently fallen leaves and fine woody debris collected in July Eight litter decomposition bags were deposited on the forest floor under closed canopy in four sampling points per forest. Litter bags were removed 61, 94,152, 276, 450 and 822 days after the initiation of the experiment, taken to the lab, dried for 48 h at 70 8C, and then weighted to estimate mass loss. To obtain the decay constant (k) a negative exponential model was fitted to the mass loss trend over time (Olsen, 1963) Spring water sampling Each season, samples of running spring water were taken in acid washed 60 ml plastic bottles. Samples of running water were filtered on site using 0.45 mm pore size filter paper and sent for chemical analysis to the Institute of Ecosystem Studies, USA and to the Universität Trier, Germany. Spring water samples were analyzed to determine ammonium (colorimetric), nitrate (by ion chromatography) and total N (N t ; by catalytic combustion), including both dissolved organic and inorganic nitrogen concentrations Total C and N in soil and litterfall Dry surface soil samples corresponding to the initial time for assessment of net N mineralization and litter collected from traps were ground for the determination of total N and C by means of flash combustion using a NA2500 Carlo Erba Element Analyzer at the Biogeochemistry Laboratory, Pontificia Universidad Católica de Chile (PUC). The C/N mass ratios of plant and soil samples were obtained in order to assess the chemical quality of organic matter. Litterfall and soil samples obtained in January 2007 were analyzed for the determination of natural abundances of d 15 N(%) ina Thermo Delta V Advantage Isotope Ratio mass spectrometer, at the Universität Trier, Germany. These stable isotope ratios ( 15 N/ 14 N) indicate the amount of heavier in relation to lighter isotope. This type of measurement helps in the final interpretation of nitrogen dynamics in soils, in the sense that a higher amount of the heavier isotope in soils indicates faster turnover rates Exchangeable base cations, soil reaction, and water content of surface soils The exchangeable base cations; calcium, magnesium potassium and sodium in soil, were extracted with in a 1 M ammonium acetate solution (1:10) and determined in Perkin Elmer 2380 AAS at the biogeochemistry laboratory, PUC. Soil reaction was determined with a ph electrode in a 1:5 soil:water suspension. Water content was determined gravimetrically In situ non-symbiotic N fixation rates The acetylene reduction activity (ARA, Myrold et al., 1999) was used to estimate in situ non-symbiotic N fixation rates for different soil components: fine leaf litter (O l horizon), fine woody debris (FWD; twigs and branches < 5 cm diameter) and mineral soil (A h horizon). In each sampling point and for all forest stands, samples of O l horizon (composed mainly of leaves) and samples of FWD were taken and deposited inside 500 ml glass jars. Intact soil cores were taken from each sampling point using a 100 cm 3 steel cylinder. The soil core was carefully removed from the cylinder and deposited inside a glass jar. Immediately afterwards, the jars were hermetically closed. The samples were incubated in a mixture of air and acetylene at 10% (v/v) for up to 2 days. An additional sample per substrate type was incubated without acetylene as a control. Ethylene gas concentration at times 0, 1 and 2 days of incubation
4 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) was determined with a Schimadzu GC-8A gas chromatograph equipped with a FID detector, at the biogeochemistry laboratory, PUC. More details about methods are given by Pérez et al. (2004) In situ net N mineralization rates Soil samples from the surface (A h ) and deep (B v ) horizons were obtained and sieved in the field from each sampling point in control and logged stands. Each sample was divided in two subsamples, one of them was taken to the laboratory to determine the initial content of ammonium and nitrate in the soil solution (initial sample) and the second subsample was deposited inside polyethylene zip lock bags and returned to the soil at the same point; i.e. final sample (Eno, 1960). After days of field incubation, final soil samples were recovered and taken to the laboratory for extraction of dissolved ammonium and nitrate in a mol/l KAl(SO 4 ) 2 solution (1:4) and determination of concentration by fractionated steam distillation. More details about methods are given by Pérez et al. (1998) Potential denitrification rates Denitrification rates were determined by the acetylene inhibition assay in intact soil cores (Groffman et al., 1999) taking into account that soil nitrate concentration was relatively high and therefore the effect of nitrate reductase inhibition by acetylene would be minimal. In each sampling point in logged and control stands one sample was taken with a 100 cm 3 steel cylinder and stored for up to 6 h before incubation at room temperature and field water content. Soil samples were placed inside 500 ml hermetic glass jars and incubated for 6 h at room temperature under a 10% (v/v) acetylene atmosphere. Gas samples were taken at 0, 2 and 6 h during the incubation and stored in 3 ml Venojets. Samples were frozen until analyzed. The N 2 O concentration of gas samples was determined with a Shimadzu GC-8A gas chromatographer, equipped with a Porapack column Q 80/100 and electron capture detector, at the biogeochemistry laboratory, PUC. Calibration curves were prepared from dilutions of a standard gas of 1 ppm nitrous oxide balance in nitrogen of Scotty analyzed gases. The N 2 O concentrations in the gas samples were determined from the lineal fit of the calibration curve. Denitrification rates were estimated from N 2 O N concentration difference estimated from an area basis between incubation times 6 and 2. basal area of stands, as only one stand of ISL was considered for vegetation analysis. Differences in decomposition rates were assessed with a Tukey s test for multiple comparisons of slopes (Zar, 1996). In cases when the assumption of variance homogeneity was not fulfilled, the data were either ranked or logtransformed. Weighted average of C/N ratio from dominant leaf litter components was estimated, in order to compare these values with the C/N ratio of the bulk leaf litter obtained from the litter traps. In order to test the effect of natural variation of soil moisture on net nitrogen mineralization rates using the buried bag method, a three factors ANOVA (silvicultural treatment, soil depth and time of incubation; i.e. initial and final samples) for repeated measurements (11 months) of soil water content was used. 3. Results 3.1. Vegetation structure and species contribution to litterfall Total tree basal area tended to be lower in ISL than in TSL or in OG stands (Table 1). The canopy of unlogged forests was dominated by the evergreen, broad-leaved species L. philippiana (Fig. 2A). Basal are of this species decreased to less than one-third in both types of logged stands. Amomyrtus luma and Caldlcuvia paniculata increased their overall importance under TSL and under ISL, respectively (Fig. 2A). In TSL stands other tree species such as Nothofagus nitida and Drimys winteri also increased their relative importance. We found no statistical differences in the total input of fine litter and litter-associated N return to the forest floor in both logged forests compared to the reference OG forest (Table 1). In both OG and TSL litter flux was co-dominated by L. philippiana and several Myrtaceae species, but in the stand subjected to ISL the understory became dominated by the shade-intolerant native C. quila (Fig. 2B). The woody liana Hydrangea serratifolia was an important component of litterfall in both logged and control stands. One of the dominant tree species in the stand subjected to TSL, A. luma, had a significantly higher litterfall C/N ratio (i.e. lower litter quality) than the bamboo C. quila which became overly abundant in the litterfall of ISL stands and also higher than the litterfall C/N ratio of L. philippiana, the canopy dominant in OG forests (Fig. 3) Microclimate In August 2005 three Hobo data loggers per silvicultural treatment and control stands were installed within the forests, located about 1.3 m above the ground and set to monitor hourly air temperature and humidity. Data were averaged monthly during the study period Statistical analyses In order to assess the effect of the silvicultural treatments (3 levels; ISL, TSL, and control) and substrates (2 3 levels; O l,a h and B v soil horizons) on N transformations and availability in forest soil, the data collected during the entire study period (April 2005 January 2008) was analyzed by two-way analysis of variance (ANOVA) and a-posteriori Tukey s tests. For d 15 N analysis only the sampling period of January 2007, was used. One-way ANOVA and a-posteriori Tukey s tests was applied to test for the effect of ISL, TSL and control treatments on annual rates of N transformations, N return to the forest floor, litterfall, air temperature and humidity (i.e. microclimate), and chemical properties of surface soils and spring water. None statistics were performed in relation to total Fig. 2. Tree species contribution (%) to (A) stand basal area in unlogged old growth (OG), traditional selective logging (TSL) and industrial selective logging (ISL) and (B) litterfall biomass, in lowland evergreen rainforests of Chiloé Island.
5 1664 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) Fig. 3. Average C/N ratio of senescent leaves based on 3-year seasonal data of dominant species in OG, TSL and ISL in lowland evergreen rainforests of Chiloé Island (average SE, n = 6 for OG and ISL and n = 18 for TSL). Different letters indicate significant differences among species, according to Tukey tests (P < 0.05) Decomposition, litterfall, soil and spring water chemistry The lowest decomposition rate, associated with significantly higher C/N ratios (i.e. lower litter quality) of both litterfall and surface soil, was measured in the stands subjected to TSL compared to OG forest and ISL stands (Table 1). Weighted average of C/N ratios are the following; OG = 53.0, ISL = 42.9, TSL = 56.8, which closely resemble the values obtained in the bulk litter (Table 1). Significantly lower C and N storage in surface soil were recorded in ISL stands compared to TSL treatment or OG forest. Litterfall was always depleted in d 15 N (%) (i.e. negative values) and none significant differences were detected among silvicultural treatments (Table 1). A significantly higher NO 3 N/N t ratio in the spring water was observed in forest under ISL than in the unlogged control forests (Table 1). The total concentrations of soil exchangeable base cations were significantly higher in both logged forests (TSL and ISL) than in OG forest. Soil ph was significantly lower under TSL than under ISL (Table 1). Fig. 4. Average rates based on 3 years of seasonal data of (A) annual rates of nonsymbiotic nitrogen fixation in the O l and A h horizons and fine woody debris (FWD), (B) N availability in A h and B v horizons, (C) annual rates of net N mineralization in A h and B v horizons and (D) annual rates of N 2 O N accumulation in surface soil in OG, TSL and ISL in lowland rainforests of Chiloé Island. Different letters indicate significant differences among silvicultural treatments and substrates, according to Tukey tests (P < 0.05) (average SE, n = 6 for OG and ISL and n = 18 for TSL) Soil nitrogen transformations and natural abundance of d 15 N There was a significant effect of both silvicultural treatment and substrate on non-symbiotic N fixation (Fig. 4A and Table 2). OG Table 1 Tree basal area of permanent plots, decomposition rate and averages (SE) of 3 years seasonal data of; litterfall flux, N return, net nitrification (nitrif.), and chemical properties of litterfall, surface soil and spring water, and carbon and nitrogen storage in surface soils of OG, ISL and TSL in lowland rainforests of Chiloé Island. F and P values belong to one-way ANOVAs. Different letters indicate significant differences among silvicultural treatments according to Tukey tests (P < 0.05) (n = 6 for OG and ISL and n = 18 for TSL). OG ISL TSL F P Basal area (m 2 /0.1 ha) a 10 (4) 4 7 (3) Litterfall (ton/ha/year) 5.5a (0.5) 4.8a (0.8) 5.4a (0.4) N return (kg/ha/year) 57.8a (9.2) 63.7a (10.3) 51.4a (0.4) Nitrif. (NO 3 N/ha/year) 110.0a (58.2) 77.3a (34.1) 41.7a (9.1) k (year 1 ) 0.3a 0.3a 0.1b 9.5 b <0.001 C/N litterfall 44.6a (3.1) 39.1a (2.8) 53.6b (2.7) C/N soil 15.0a (0.4) 16.5a (0.9) 19.6b (0.6) 10.7 <0.001 C pool (kg/ha) 7328ab (294) 6429a (287) 7358b (250) N pool (kg/ha) 105.4ab (9) 99.1a (7) 123.6b (7.1) d 15 N(%) litterfall c 2.1a (0.8) 2.0a (0.5) 2.6a (0.5) NO 3 N/N t spring water 0.1ac (0.0) 0.4b (0.2) 0.2bc (0.1) Base cations (cmol/kg DW) 10.4a (1.8) 23.4b (4.9) 20.1b (5.9) ph soil (H 2 O) 4.8ab (0.1) 4.9b (0.1) 4.4a (0.2) a n = 2 for OG, n = 1 for ISL, and n = 3 for TSL. b Multiple slopes Tukey test. c n = 3 for OG and ISL and n = 9 for TSL.
6 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) Table 2 Two-way ANOVAs of N transformations: ARA (acetylene reduction activity), N availability (Ndis), natural abundance of d 15 N, Nmin (net N mineralization), nitrification and denitrification in soils of OG forests, TSL and ISL in lowland rainforests of Chiloé Island. ARA (nmol C 2 H 4 /g DW/day) Silvicultural treatment Substrate <0.001 Silvicultural treatment substrate 6.4 <0.001 Ndis (mg N/kg DW) Silvicultural treatment 15.0 <0.001 Substrate 17.3 <0.001 Silvicultural treatment substrate d 15 N(%) Silvicultural treatment 16.0 <0.001 Substrate Silvicultural treatment substrate Nmin (kg/ha/year) Silvicultural treatment Substrate Silvicultural treatment substrate Nitrification (mg NO 3 N/kg DW/month) Silvicultural treatment Substrate Silvicultural treatment substrate Denitrification (mg N 2 O N/m 2 /day) Silvicultural treatment forest presented significantly higher non-symbiotic N-fixation rate in the O l horizon than soils of both logging treatments, and the upper soil horizon was significantly more active than samples from the A h horizon or FWD, except in ISL treatment where FWD presented similar rates to the O l horizon (Fig. 4A). There was a significant effect of both silvicultural treatment and substrate on soil nitrogen availability (Fig. 4B and Table 2). The concentration of available nitrogen in surface soils was significantly higher in both logging treatments than in the unlogged old-growth forest (Fig. 4B). Under ISL the concentration of available N was significantly higher in the A h mineral horizon than in the B v.in the other forests we found no significant differences in available N among soil horizons (Fig. 4B). There was a significant effect of both silvicultural treatment and substrate on the natural abundance of d 15 N(Fig. 5 and Table 2). The natural abundance of d 15 N was always positive in soils and significantly higher in both soil horizons in the ISL than in both other forests. In TSL deeper soil horizon presented a significantly higher d 15 N than surface soil (Fig. 5). The significant interactive effect in net N mineralization Fig. 5. Natural abundance of d 15 N in surface (A h horizon) and deep soils (B v horizon) in OG, TSL and ISL in lowland rainforests of Chiloé Island. Different letters indicate significant differences among silvicultural treatments and substrates, according to Tukey tests (P < 0.05) (average SE, n = 3 for OG and ISL and n = 9 for TSL). F P rates (Table 2) indicate that only in the surface soil there are significant differences among silvicultural treatments as is the case of the reported higher rates in OG than in TSL (Fig. 4C). The water content of the in situ incubated soil samples within the buried bags did not differ from the initial samples (F 1,81 = 0.51; P = 0.47). Net nitrification remained similar between logged and control forests and at both soil depths (Tables 1 and 2). There was a trend toward higher denitrification rates in the mineral soil horizon A h of the ISL, however this difference was not statistically significant (Fig. 4D and Table 2) Seasonal trends Mean monthly air temperature and air humidity did not differ (F 2,87 = 1.234; P = and F 2,87 = 0.558; P = 0.574, respectively) between stands subjected to different logging practices and did not differ from unlogged old-growth stands (Fig. 6A and B). The average annual temperature over 2 years of study was 9.9 8C in OG unlogged forest, 9.5 8C in ISL and 8.7 8C in TSL stands. Average air humidity remained most of the time near 100% in all forest stands compared. Soil moisture was significantly lower by about 10% in the OG forest than in both logging treatments during the study period (F 2,27 = ; P < ) (Fig. 6B). Non-symbiotic N fixation, measured by ARA, decreased significantly during the warmer and drier austral summer (Fig. 6C). Net N mineralization (Fig. 6D) and denitrification rates in forest soils (Fig. 6E) showed no seasonal trends. Net N mineralization rates declined during 2007, except for summer N return via litterfall was similar in logged and unlogged forests (Fig. 6F and Table 1), with lower fluxes during austral spring (October). 4. Discussion 4.1. The effect of logging on vegetation and soil According to remaining basal area, tree removal in the TSL stand was less intense than in forests subjected to ISL. The fact that the reduction in basal area with ISL was not reflected in a lower litter input in this forest can be explained by the additional contribution of the fast growing understory species of C. quila, which follows an intense canopy opening in ISL, by the reduction to 35% from the original canopy cover. Differences in understory species composition between logging treatments are due to the higher light requirement of the shade-intolerant bamboo C. quila compared to the shade-tolerant A. luma, which presents abundant advanced regeneration under the forest canopy (González et al., 2002; Lusk and Kelly, 2002). Ten years after the application of industrial selective logging, shade-intolerant species such as C. paniculata dominate the basal area in the ISL stands with an understory of C. quila that dominates litter flux. Under TSL, other shade-intolerant species co-dominate basal area, such as D. winteri and N. nitida, whereas C. quila is not an important component of the understory. Silvicultural practices did not affect N return via litterfall to the forest floor, but significantly changed the C/N ratio of falling litter. The higher litterfall C/N ratio of TSL stands is due to a shift in canopy dominance from Laureliopsis to other shade-tolerant canopy species such as A. luma and Myrceugenia planipes, which have a higher C/N ratio than the bamboo species C. quila in ISL and L. philippiana in the canopy of OG stands. Other co-dominant species in TSL stands such as N. nitida and D. winteri also have a higher litterfall C/N ratio (Pérez et al., 2003b), which collectively contribute to the higher C/N ratio of bulk litterfall in these forests compared to those subjected to ISL. The similar values of C/N ratios obtained in the bulk litter and the weighted average from leaf litter of dominant tree species confirms the fact that they are the main determinants to the bulk C/N ratio at ecosystem level.
7 1666 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) Fig. 6. Temporal variation of (A) air temperature, (B) air relative humidity (open symbols) and water content of soil (filled symbols), (C) acetylene reduction activity, (D) net N mineralization, (E) denitrification and (F) N return in the litterfall in OG, ISL and TSL in lowland rainforest of Chiloé Island. Shaded areas in the graph indicate the length of austral summer (average SE, n = 6 for OG and ISL and n = 18 for TSL). Carbon and nitrogen storage and soil C/N ratio were differentially affected by logging type, so ISL had a lower carbon and nitrogen storage and lower soil C/N ratio than TSL. Increases in the content of exchangeable of base cations in soils with logging found in this study have also been reported in northern temperate forests due to a possible decreased vegetation uptake after the removal of tree biomass (Brais et al., 2004) Seasonal trends and the effect of logging on microclimate Logging did not significantly change microclimatic conditions within the forest stand about a decade after tree harvesting. Air temperature and humidity did not differ from OG, unlogged forest, however the lower soil moisture found in the OG forest than in logged stands may result from higher evapotranspiration associated with higher canopy leaf area in unlogged forests. Díaz et al. (2007) have shown that water table may rise in forest cleared stands of Chiloé Island due to reduced canopy interception and transpiration of rainfall. The observed seasonal trends lead us to conclude that nonsymbiotic N fixation by soil bacteria will be inhibited and litter N flux enhanced during the warmer and drier summers of the study area. Such conditions may vary among years, presumably in direct relationship with climate variability driven by El Niño Southern Oscillation (ENSO) in the Pacific coast of southern South America (Montecinos and Aceituno, 2003) The effect of logging on decomposition and N transformations A higher litterfall C/N ratio in TSL, together with lower decomposition and net N mineralization rates in surface soils compared to OG forest resulted in greater soil storage of C and N. In contrast, lower C/N ratio of litterfall in ISL stands, together with enhanced denitrification and decomposition rates decreased soil C and N storage. Forest management had similar effects on litter C/N ratios and storage of these elements in organic soils of northern temperate forests (Klemedtsson et al., 2005). In both logged forests, observed increases of soil N availability were not associated with higher rates of soil N mineralization, but it is probably due to a decreased root sorption capacity associated to a decreased basal area in ISL. In other terms, N consumption by aboveground biomass would decrease in this logged forest,
8 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) increasing N availability on soils. In TSL the increase in N availability may be due to lower N requirements by the aboveground biomass as it is suggested by the high C/N ratio of litterfall. Higher soil N availability could enhance a higher NO 3 N/N t ratio in the spring waters of ISL. N losses are manifested in the enhanced rates of denitrification measured in the ISL stands. High losses of N due to denitrification in soils are supported by the more positive d 15 N signal in stands subjected to ISL, because denitrification causes one of the highest fractionation of soil N (Robinson, 2001) by which light fraction of N gaseous products leave the ecosystems, leaving behind 15 N enriched soils. Higher denitrification rates, associated with increased soil N availability, have been measured in young stands of post-logging chronosequences in Douglas-fir forests from north western USA (Griffiths and Swanson, 2001). The fact that both logging practices significantly increase N availability in soils, but denitrification was higher only under ISL is related to the fact that C/N ratio was higher in TSL, suggesting a decrease in carbon lability for heterotrophic denitrifiers in the latter. Higher litterfall C/N ratio was also associated to lower net N mineralization in surface soils of TSL stands than in OG forest, suggesting an increase in N immobilization by soil bacteria under TSL. The marked decrease in non-symbiotic, bacterial N fixation in the surface soil of both logged forests with respect to the unlogged stand was associated to decreases of net N mineralization in TSL. Studies in northern temperate forests have shown that clear cutting changed the species composition of leaf litter and, at the same time, the community composition of diazotrophic bacteria (Shaffer et al., 2000), which may alter microbial activity. Lower soil N mineralization, associated with reductions in soil microbial biomass and mesofauna, has also been documented in logged northern temperate forests (Lindo and Visser, 2003; Thibodeau et al., 2000). Hence, our results support the argument that logging may alter the rates of processes controlling N fluxes and recycling in forest stands. Denitrification rates reported here for Chilean unlogged, oldgrowth forests are similar to values of 1.9 kg N/ha/year reported for northern temperate forest soils (Barton et al., 1999). Because the denitrification rates measured in the unlogged evergreen rain forests studied in Chiloé Island are very similar to non-symbiotic N fixation rates per unit area of soil, it can be suggested that both gaseous inputs and outputs of N are closely balanced. A study of soil microbial diversity using molecular markers in a similar oldgrowth, lowland rain forest of Chiloé Island reported abundant representation of different clones of N-fixing Rhizobiales, Azospirillum and Flavobacterium (Guevara, 2007). These diazotrophs are also capable of denitrification (Paul and Clark, 1989). The particular bacterial assemblage present in OG forest soils, which perform two opposite ecosystem functions, may explain the tight annual balance between N-fixation and denitrification reported in this paper. 5. Conclusions The higher C/N ratio of litterfall and soil found in TSL is due to the dominance of shade-tolerant tree species with higher C/N ratio. This affected in lower decomposition rates, higher carbon and nitrogen pools and lower soil ph under TSL. Exchangeable base cations and nitrogen availability was higher in both logged forests than in OG. A higher NO 3 N/N t ratio in spring waters and a trend toward higher denitrification rates was found in ISL than in OG forests. Non-symbiotic N fixation was drastically reduced in both logging treatments. In summary ISL showed a more leaky N cycle than OG and TSL as it is reflected in the more positive 15 N signal in its soils. The fact that TSL evidenced a tighter N cycle than ISL can be attributed to the less intensive canopy removal under TSL, which allowed the rapid colonization of the understory by shadetolerant tree species, such as several species of Myrtaceae. This response contrasts with the massive occupation of the understory of the ISL stand by the shade-intolerant bamboo C. quila. This invasive native bamboo has lower C/N ratio of litterfall, i.e. a better litter quality and dominates post-logging litter flux in ISL. Therefore, sustainable logging practices in lowland southern temperate forest should consider leaving behind a higher proportion of basal area, such as to decrease the area of canopy openings that favours bamboo invasion. Smaller openings would favour regeneration by intermediate tolerance and shade-intolerant tree species, which make the majority of the successional tree species assemblage in these rain forests (Figueroa and Lusk, 2001), hence maintaining the tightness of the nitrogen cycle. Acknowledgements Support for this study was provided by Fondecyt (2005), Fondecy-Fondap to CASEB, Pontificia Universidad Católica de Chile, and Iniciativa Científica Milenio, MIDEPLAN grant P We thank the following people for allowing the access to study sites: Javier Bruna, Arturo Gallardo, Elemías Gomez. This study is part of the research program of Senda Darwin Biological Station. References Armesto, J.J., Figueroa, J., Stand structure and dynamics in the temperate rain forests of Chiloé Archipielago, Chile. J. Biogeogr. 14, Ballard, T.M., Impacts of forests management on northern forest soils. Forest Ecol. Manage. 133, Barton, L., McLay, C.D., Schipper, L.A., Smith, C.T., Annual denitrification rates in agricultural and forest soils: a review. Aust. J. Soil Res. 37, Berg, A.K., Edmonds, R.L., Influence of partial cutting on site microclimate, soil nitrogen dynamics, and microbial biomass in Douglas-fir stands in western Washington. Can. J. Forest Res. 29, Brais, S., Paré, D., Camiré, C., Rochon, P., Vasseur, C., Nitrogen net mineralization and dynamics following whole-tree harvesting and winter windrowing on clayed sites on northwestern Quebec. Forest Ecol. Manage. 157, Brais, S., Harvey, B.H., Bergeron, Y., Messier, C., Green, D., Belleau, A., Paré, D., Testing forest ecosystem management in boreal mixedwoods of northwestern Quebec: initial response of aspen stands to different levels of harvesting. Can. J. Forest Res. 34, Díaz, F., Bigelow, S., Armesto, J.J., Alteration of the hydrologic cycle due to forest clearing and its consequences for rainforest succession. Forest Ecol. Manage. 244, Di Castri, F., Hajek, E.R., Bioclimatología de Chile. Vicerrectoría de Comunicaciones. Universidad Católica de Chile, Santiago, pp Diehl, P., Mazzarino, M.J., Fontela, S., Plant limiting nutrients in Andean Patagonian woody species: effects of interannual rainfall variation, soil fertility and mycorrhizal infection. Forest Ecol. Manage. 255, Donoso, C., Donoso, P., González, M., Sandoval, V., Los bosques siempreverdes. In: Donoso, C., Lara, A. (Eds.), Silvicultura de los bosques nativos de Chile. Editorial Universitaria, Santiago, pp Echeverría, C., Newton, A.C., Lara, A., Benayas, J.M.R., Coomes, D.A., Impacts of forest fragmentation on species composition and forest structure in the temperate landscape of southern Chile. Global Ecol. Biogeogr. 16, Eno, C.F., Nitrate production in the field by incubating the soil in polyethylene bags. Soil Sci. Soc. Am. Proc. 24, Figueroa, J.A., Lusk, C.H., Germination requirements and seedling shade tolerance are not correlated in a Chilean temperate rain forest. New Phytol. 152, González, M.E., Veblen, T.T., Donoso, C., Valeria, L., Tree regeneration responses in a lowland Nothofagus-dominated forest after bamboo dieback in South-Central Chile. Plant Ecol. 161, Griffiths, R.P., Swanson, A.K., Forest soil characteristics in a chronosequence of harvested Douglas-fir forests. Can. J. Forest Res. 31, Groffman, P.M., Holland, E.A., Myrold, D.D., Robertson, G.P., Zou, X., Denitrification. In: Robertson, G.P., Coleman, D.C., Bledsoe, C.S., Sollins, P. (Eds.), Standard Soil Methods for Long Term Ecological Research. Oxford University Press, New York, pp Guevara, R., Diversidad genética y funcional de grupos bacterianos en suelos con diferente tipo de cobertura vegetal: Efectos biogeográficos y perturbación humana. PhD Thesis. Universidad de Chile, Santiago, pp Gutierrez, A., Armesto, J.J., Aravena, J.C., Disturbance and regeneration dynamics of an old-growth Nord-Patagonian rainforest in Chiloé Island, Chile. J. Ecol. 92,
9 1668 C.A. Pérez et al. / Forest Ecology and Management 258 (2009) Gutierrez, A., Armesto, J.J., Aravena, J.C., Carmona, M., Carrasco, N., Christie, D., Peña, M.P., Pérez, C., Huth, A., Structural and environmental characterization of old-growth temperate rainforests of northern Chilé Island, Chile: regional and global relevance. Forest Ecol. Manage. 258, Hedin, L.O., Armesto, J.J., Johnson, A.H., Patterns of nutrient loss from unpolluted old-growth temperate forests: evaluation of biogeochemical theory. Ecology 76, Heithecker, T.D., Halpern, C.B., Variation in microclimate associated with dispersed-retention harvests in coniferous forests of western Washington. Forest Ecol. Manage. 226, Hope, G.D., Prescott, C.E., Blevins, L.L., Responses of available soil nitrogen and litter decomposition to openings of different sizes in dry interior Douglas-fir forests in British Columbia. Forest Ecol. Manage. 186, Idol, T.W., Pope, P.E., Ponder Jr., F., N mineralization, nitrification, and N uptake across a 100-year chronosequence of upland hardwood forests. Forest Ecol. Manage. 176, Inagaki, Y., Kuramoto, S., Torii, A., Shinomiya, Y., Fukata, H., Effects of thinning on leaf-fall and leaf-litter nitrogen concentration in hinoki cypress (Chamaecyparis obtusa Endlicher) plantations stands in Japan. Forest Ecol. Manage. 255, Jerabkova, L., Prescott, C., Kishchuk, B.E., Effect of variable-retention harvesting on soil nitrogen availability in boreal mixed forests. Can. J. Forest Res. 36, Jha, S., Bawa, K.S., Population growth, human development, and deforestation in biodiversity hotspots. Conserv. Biol. 20, Johnson, D.W., Curtis, P.S., Effects of forest management on soil C and N storage: meta analysis. Forest Ecol. Manage. 140, Klemedtsson, L., Von Arnold, K., Weslien, P., Gundersen, P., Soil C/N ratio as a scalar parameter to predict nitrous oxide emissions. Global Change Biol. 11, Kranabetter, J.M., Coates, K.D., Ten-year postharvest effects of silviculture systems on soil-resource availability and conifer nutrition in a northern temperate forest. Can. J. Forest Res. 34, Lara, A., Una propuesta general de silvicultural para Chile. Ambiente y Desarrollo 12, Lindo, Z., Visser, S., Microbial biomass, nitrogen and phosphorus mineralization, and mesofauna in boreal conifer and deciduous forest floors following partial and clear-cut harvesting. Can. J. Forest Res. 33, Lusk, C.H., Kelly, C.K., Interspecific variation in seed size and safe sites in a temperate rain forest. New Phytol. 158, Montecinos, A., Aceituno, P., Seasonality of the ENSO-related rainfall variability in central Chile and associated circulation anomalies. J. Climate 16, Myrold, D.D., Ruess, R.R., Klug, M.J., Dinitrogen fixation. In: Robertson, G.P., Coleman, D.C., Bledsoe, C.S., Sollins, P. (Eds.), Standard Soil Methods for Long Term Ecological Research. Oxford University Press, New York, pp Olsen, J.S., Energy storage and the balance of producers and decomposers in ecological systems. Ecology 44, Paul, E.A., Clark, F.E., Soil Microbiology and Biochemistry. Academic Press, USA, pp Perakis, S., Hedin, L., Fluxes and fates of nitrogen in soil of an unpolluted oldgrowth temperate forest, southern Chile. Ecology 82, Pérez, C.A., Hedin, L.O., Armesto, J.J., Nitrogen mineralization in two unpolluted old-growth forests of contrasting biodiversity and dynamics. Ecosystems 1, Pérez, C.A., Carmona, M.R., Armesto, J.J., 2003a. Non-symbiotic nitrogen fixation, net nitrogen mineralization, and denitrification in evergreen forest of Chiloé Island, Chile: a comparison with other temperate forests. Gayana 60, Pérez, C.A., Armesto, J.J., Torrealba, C., Carmona, M.R., 2003b. Litterfall dynamics and nutrient use efficiency in two evergreen temperate rain forests of southern Chile. Aust. Ecol. 28, Pérez, C.A., Carmona, M.R., Aravena, J.C., Armesto, J.J., Successional changes in soil nitrogen availability, non symbiotic nitrogen fixation and C/N ratios in southern chilean forest ecosystems. Oecologia 140, Pérez, C.A., Carmona, M.R., Aravena, J.C., Fariña, J.M., Armesto, J.J. Land Cover Change from Primary to Secondary Lowland Forests: Effects on Tree Species Composition and C/N Ratio of Litter and Soil. Academia Press, Belgium, in press. Prescott, C., Effects of clearcutting and alternative silvicultural systems on rates of decomposition and nitrogen mineralization in a coastal montane coniferous forest. Forest Ecol. Manage. 95, Reynolds, P.E., Thevathasan, N.V., Simpson, J., Gordon, A., Lautenschlager, A.M., Bell, R.A., Gresch, W.F., Buckley, D.A., Alternative forest release treatments affect microclimate and soil nitrogen mineralization. Forest Ecol. Manage. 133, Robinson, D., d15n as an integrator of the nitrogen cycle. Trends Ecol. Evol. 16, Rüger, N., Gutiérrez, A.G., Kissling, W.D., Armesto, J.J., Huth, A., Ecological impacts of different harvesting scenarios for temperate evergreen rain forest in southern Chile: a simulation experiment. Forest Ecol. Manage. 252, Rütting, T., Huygens, D., Müller, C., Van Cleemput, O., Godoy, R., Boeckx, P., Functional role of DNRA and nitrite reduction in a pristine south Chilean Nothofagus forest. Biogeochemistry 90, Satti, P., Mazzarino, M.J., Gobbi, M., Funes, F., Roselli, L., Fernandez, H., Soil N dynamics in relation to leaf litter quality and soil fertility in north-western Patagonian forests. J. Ecol. 91, Shaffer, B.T., Widmer, F., Porteus, L.A., Seidler, R.J., Temporal and spatial distribution of the nifh gene of N 2 fixing bacteria in forests and clearcluts in western Oregon. Microb. Ecol. 39, Singh, A.P., Gupta, S.R., Plant decomposition and soil respiration in terrestrial ecosystems. Bot. Rev. 43, Thibodeau, L., Raymond, P., Camiré, C., Munson, A.D., Impact of precomercial thinning in balsam fir stands on soil nitrogen dynamics, microbial biomass, decomposition and foliar nutrition. Can. J. Forest Res. 30, Vann, D.R., Joshi, A., Pérez, C., Johnson, A.H., Frizano, J., Zarin, D.J., Armesto, J.J., Distribution and cycling of C, N, Ca, Mg, K and P in three pristine, old-growth forests in the Cordillera de Piuchué, Chile. Biogeochemistry 60, Westbrook, C., Devito, K.J., Allan, C.J., Soil N cycling in harvested and pristine boreal forests and peatlands. Forest Ecol. Manage. 234, Wilson, M. F., Armesto, J. J., The natural history of Chiloé: on Darwin s trail. Revista Chilena de Historia Natural 69, Zar, J.H., Biostatistical Analysis, third edition. Prentice Hall, Upper Saddle River, pp. 622.
EFFECTS OF NITRATE AND LABILE CARBON ON DENITRIFICATION OF SOUTHERN TEMPERATE FOREST SOILS
RESEARCH 251 EFFECTS OF NITRATE AND LABILE CARBON ON DENITRIFICATION OF SOUTHERN TEMPERATE FOREST SOILS Cecilia A. Pérez 1*, Martín R. Carmona 2, José M. Fariña 1, and Juan J. Armesto 1,2 ABSTRACT The
Environmental impacts of harvesting biomass from the Nordic forests. Nicholas Clarke Norwegian Forest and Landscape Institute
1 Environmental impacts of harvesting biomass from the Nordic forests Nicholas Clarke Norwegian Forest and Landscape Institute Background 2 Increased use of forest biomass for energy might lead to conflict
Key Words Forest Ecosystem, Carbon Dynamics, Boreal Forests, Tropical Forests, Plots Network
1 - i Global Environment Research Account for National Institutes Advancement of East Asia Forest Dynamics Plots Network -Monitoring forest carbon cycling for the development of climate change adaptation-(abstract
REVIEW UNIT 10: ECOLOGY SAMPLE QUESTIONS
Period Date REVIEW UNIT 10: ECOLOGY SAMPLE QUESTIONS A. Sample Multiple Choice Questions Complete the multiple choice questions to review this unit. 1. All of the following are density-dependent factors
Ecosystems. The two main ecosystem processes: Energy flow and Chemical cycling
Ecosystems THE REALM OF ECOLOGY Biosphere An island ecosystem A desert spring ecosystem Biosphere Ecosystem Ecology: Interactions between the species in a given habitat and their physical environment.
Policy & Management Applications of Blue Carbon. fact SHEET
Policy & Management Applications of Blue Carbon fact SHEET Policy & Management Applications of Blue Carbon Coastal Blue Carbon - An Important Wetland Ecosystem Service Coastal Blue Carbon refers to the
Effect of canopy openness on growth, specific leaf area, and survival of tree seedlings in a temperate rainforest of Chiloé Island, Chile
Chacón & Armesto Canopy openness effects on growth and survival of seedlings 71 New Zealand Journal of Botany, 2005, Vol. 43: 71 81 0028 825X/05/4301 0071 The Royal Society of New Zealand 2005 Effect of
Vorstellung des Fachgebiets Umweltchemie
Vorstellung des Fachgebiets Umweltchemie Dr. Mirjam Helfrich 04.06.08 Department of Environmental Chemistry Dynamics of nutrients and pollutants in the atmosphere, hydrosphere, pedosphere, and biosphere
Belowground Ecology in a Restoration Context
Objectives: How can the foundations of and theory in soil science restoration ecology ecological restoration? Topographic heterogeneity Overview of soil physical, chemical and biological properties 1 Heterogeneity
Ecosystem Responses to High-severity Wildfire
Ecosystem Responses to High-severity Wildfire August 22, 2005 Final Report submitted by Steven Overby, RMRS, Flagstaff, AZ. Vegetation Analysis PI: John D. Bailey, Professor of Ecosystem Ecology, NAU Pre-treatment
EFFECT OF ORGANIC MATTER ON NITROGEN MINERALIZATION IN FLOODED AND DRY SOIL
VOL. 7, NO. 8, AUGUST 212 ISSN 1996145 26212 Asian Research Publishing Network (ARPN). All rights reserved. EFFECT OF ORGANIC MATTER ON NITROGEN MINERALIZATION IN FLOODED AND DRY SOIL Linca Anggria, A. organisms, | https://docplayer.net/540162-Forest-ecology-and-management.html | CC-MAIN-2018-47 | en | refinedweb |
The Cloud Natural Language API lets you extract entities from text, perform sentiment and syntactic analysis, and classify text into categories. In this lab, we'll focus on text classification. Using a database of 700+ categories, this API feature makes it easy to classify a large dataset of>
Using the Natural Language API's classifyText method, we can sort our text data into categories with a single API call. This method returns a list of content categories that apply to a text document. These categories range in specificity, from broad categories like /Computers & Electronics to highly specific categories such as /Computers & Electronics/Programming/Java (Programming Language). A full list of the 700+ possible categories can be found here.
We'll start by classifying a single article, and then we'll see how we can use this method to make sense of a large news corpus. To start, let's take this headline and description from a New York Times article in the food section:
A Smoky Lobster Salad With a Tapa Twist. This spin on the Spanish pulpo a la gallega skips the octopus, but keeps the sea salt, olive oil, pimentón and boiled potatoes.
In your Cloud Shell environment, create a
request.json file with the code below. You can either create the file using one of your preferred command line editors (nano, vim, emacs) or use the built-in Orion editor in Cloud Shell:
{ "document":{ "type":"PLAIN_TEXT", "content":"A Smoky Lobster Salad With a Tapa Twist. This spin on the Spanish pulpo a la gallega skips the octopus, but keeps the sea salt, olive oil, pimentón and boiled potatoes." } }
Now we can send this text to the NL API's classifyText method with the following curl command:
curl "{API_KEY}" \ -s -X POST -H "Content-Type: application/json" --data-binary @request.json
Let's take a look at the response:
{ categories: [ { name: '/Food & Drink/Cooking & Recipes', confidence: 0.85 }, { name: '/Food & Drink/Food/Meat & Seafood', confidence: 0.63 } ] }
The API returned 2 categories for this text: /Food & Drink/Cooking & Recipes and /Food & Drink/Food/Meat & Seafood. The text doesn't explicitly mention that this is a recipe or even that it includes seafood, but the API is able to categorize it for us. Classifying a single article is cool, but to really see the power of this feature we should classify lots of text data.
To see how the classifyText method can help us understand a dataset with lots of text, we'll use this public dataset of BBC news articles. The dataset consists of 2,225 articles in five topic areas (business, entertainment, politics, sport, tech) from 2004 - 2005. We've put a subset of these articles into a public Google Cloud Storage bucket. Each of the articles is in a .txt file.
To examine the data and send it to the NL API, we'll write a Python script to read each text file from Cloud Storage, send it to the classifyText endpoint, and store the results in a BigQuery table. BigQuery is Google Cloud's big data warehouse tool - it lets us easily store and analyze large datasets.
To see the type of text we'll be working with, run the following command to view one article (gsutil provides a command line interface for Cloud Storage):
gsutil cat gs://text-classification-codelab/bbc_dataset/entertainment/001.txt
Next we'll create a BigQuery table for our data.
Before we send the text to the Natural Language API, we need a place to store the text and category for each article - enter BigQuery! Navigate to the BigQuery web UI in your console:
Then click on the dropdown arrow next to your project name and select Create new dataset:
Name your dataset news_classification_dataset. You can leave the defaults in the Data location and Data expiration fields:
Click on the dropdown arrow next to your dataset name and select Create new table. Under Source Data, select "Create empty table". Then name your table article_data and give it the following 3 fields in the schema:
After creating the table you should see the following:
Our table is empty right now. In the next step we'll read the articles from Cloud Storage, send them to the NL API for classification, and store the result in BigQuery.
Before we write a script to send the news data to the NL API, we need to create a service account. We'll use this to authenticate to the NL API and BigQuery from our Python script. First, export the name of your Cloud project as an environment variable:
export PROJECT=<your_project_name>
Then run the following commands from Cloud Shell to create a service account:
gcloud iam service-accounts create my-account --display-name my-account gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:my-account@$PROJECT.iam.gserviceaccount.com --role=roles/bigquery.admin gcloud iam service-accounts keys create key.json --iam-account=my-account@$PROJECT.iam.gserviceaccount.com export GOOGLE_APPLICATION_CREDENTIALS=key.json
Now we're ready to send the text data to the NL API. To do that we'll write a Python script using the Python module for Google Cloud (note that you could accomplish the same thing from any language, there are many different cloud client libraries). Create a file called classify-text.py and copy the following into it, making sure to replace YOUR_PROJECT with the name of your project.
from google.cloud import storage, language, bigquery # Set up our GCS, NL, and BigQuery clients storage_client = storage.Client() nl_client = language.LanguageServiceClient() # TODO: replace YOUR_PROJECT with your project name below bq_client = bigquery.Client(project='YOUR_PROJECT') dataset_ref = bq_client.dataset('news_classification') dataset = bigquery.Dataset(dataset_ref) table_ref = dataset.table('article_data') table = bq_client.get_table(table_ref) # Send article text to the NL API's classifyText method def classify_text(article): response = nl_client.classify_text( document=language.types.Document( content=article, type=language.enums.Document.Type.PLAIN_TEXT ) ) return response rows_for_bq = [] files = storage_client.bucket('text-classification-codelab').list_blobs() print("Got article files from GCS, sending them to the NL API (this will take ~2 minutes)...") # Send files to the NL API and save the result to send to BigQuery for file in files: if file.name.endswith('txt'): article_text = file.download_as_string() nl_response = classify_text(article_text) if len(nl_response.categories) > 0: rows_for_bq.append((article_text, nl_response.categories[0].name, nl_response.categories[0].confidence)) print("Writing NL API article data to BigQuery...") # Write article text + category data to BQ errors = bq_client.create_rows(table, rows_for_bq) assert errors == []
We're ready to start classifying articles and importing them to BigQuery. The script takes about two minutes to complete, so while it's running we'll discuss what's happening. Run the script with the following:
python classify-text.py
We're using the google-cloud Python client library to access Cloud Storage, the NL API, and BigQuery. First we create a client for each service we'll be using, and then we create references to our BigQuery table. files is a reference to each of the BBC dataset files in the public bucket. We iterate through these files, download the articles as strings, and send each one to the NL API's in our classify_text function. For all articles where the NL API returns a category, we save the article and its category data to a rows_for_bq list. When we're done classifying each article, we insert our data into BigQuery using create_rows().
When your script has finished running, it's time to verify that the article data was saved to BigQuery. Navigate to your article_data table in the BigQuery web UI and click Query Table:
Enter the following query in the compose query box, replacing YOUR_PROJECT with your project name:
#standardSQL SELECT * FROM `YOUR_PROJECT.news_classification.article_data`
You should see your data when the query completes. The category column has the name of the first category the NL API returned for our article, and confidence is a value between 0 and 1 indicating how confident the API is that it categorized the article correctly. We'll learn how to perform more complex queries on the data in the next step.
First, let's see which categories were most common in our dataset. Enter the following query, replacing YOUR_PROJECT and YOUR_DATASET with your project and dataset names:
#standardSQL SELECT category, COUNT(*) c FROM `YOUR_PROJECT.news_classification.article_data` GROUP BY category ORDER BY c DESC
You should see something like this in the query results:
Let's say we wanted to find the article returned for a more obscure category like /Arts & Entertainment/Music & Audio/Classical Music. We could write the following query:
#standardSQL SELECT * FROM `YOUR_PROJECT.YOUR_DATASET.article_data` WHERE category = "/Arts & Entertainment/Music & Audio/Classical Music"
Or, we could get only the articles where the NL API returned a confidence score greater than 90%:
#standardSQL SELECT article_text, category FROM `YOUR_PROJECT.YOUR_DATASET.article_data` WHERE cast(confidence as float64) > 0.9
To perform more queries on your data, explore the BigQuery documentation. BigQuery also integrates with a number of visualization tools. To create visualizations of your categorized news data, check out the Data Studio quickstart for BigQuery. Here's an example of a Data Studio chart we could create for the query above:
You've learned how to use the Natural Language API's text classification method to classify news articles. You started by classifying one article, and then learned how to classify and analyze a large news dataset using the NL API with BigQuery. | https://codelabs.developers.google.com/codelabs/cloud-nl-text-classification/index.html?index=..%2F..%2Findex | CC-MAIN-2018-47 | en | refinedweb |
When I typed the code below in
app/controllers/application_controller.rb:
class ApplicationController < ActionController::Base
protect_from_forgery with: :exception
def hello
render html: "hello, world!"
end
end
and the code below in
config/routes.rb:
Rails.application.routes.draw do
root 'application#hello'
end
The root route still returns the default Rails page while I was expecting it would return "hello, world!". Please help me with this small issue.
I just built a test app and used your code above...it worked great! I went to and successfully got the "hello, world!" message.
What version of rails are you using?
What happens when you run
rake routes (or
rails routes if you're on 5+) from the command line? Mine looks like this:
$ rake routes Prefix Verb URI Pattern Controller#Action root GET / application#hello
Note - you might need to restart your rails server depending on how you have everything set up but if your server is running on your laptop, the reboot should have handled that. | http://m.dlxedu.com/m/askdetail/3/629406036e54032947a36d013eb8af4f.html | CC-MAIN-2018-47 | en | refinedweb |
Gentoo Wiki:Meetings/2010-05-10
From Gentoo Wiki
Contents
- 1 Meeting Time
- 2 Participants
- 3 Agenda
- 4 Logs
- 5 Summary
- 5.1 <hwoarang> infra services
- 5.2 <a3li> mission statement
- 5.3 <a3li> content
- 5.3.1 <quantumsummers> draft for guide-xml
- 5.3.2 other documentations, tipps & tricks, special cases
- 5.3.3 <spatz> writing agendas for meetings, writing summaries
- 5.3.4 <spatz> drafting news items
- 5.3.5 <quantumsummers> event planning
- 5.3.6 <fekepp> needed categories
- 5.3.7 <amoskvin> multiple package versions
- 5.3.8 <winterheart> How about i18n then?
- 5.3.9 <quantumsummers> QA on the articles
- 5.4 <a3li> policies
- 5.5 Open questions
- 6 Comments
Meeting Time
- 2010-05-10 20:00 UTC
- #gentoo-wiki on freenode
Participants
- hwoarang (developer, moderator)
- a3li (developer, now project lead)
- Mousee
- fekepp
- quantumsummers
- idl0r
- ali_bush
- opcode0x90
- willikins
- ni1s_eee
- amoskvin
- Monkeh
- yporti
- spatz (developer)
- darkside_ (infra guy)
- Poly-C_atwork
- dansan
- rafaelmartins
- crimer
- slep
- lk4d4
Agenda
none
Logs
Summary
(The questions and answers are just grouped)
<hwoarang> infra services
- <hwoarang> robbat2 (infra guy) asked, what we need.
- <hwoarang> work on a3li wiki page and then sent the configs file to robbat
- <darkside_> use a git repo with configuration files
<a3li> mission statement
- <Mousee> we've a "draft mission statement"
- <hwoarang> the mission plan is to host an official wiki on our infra machine
- <hwoarang> wiki could be a nice place for devs+users to cooperate
- <hwoarang> using bugs+cvs for docs like we do now is not an option
<a3li> content
<quantumsummers> draft for guide-xml
- <quantumsummers> would like to think of this as a collab space for docs in progress
- <quantumsummers> then the wiki doc gets guildexml-ified
- <hwoarang> quantumsummers: that was a huge conversion on month ago
- <hwoarang> if we need to write everything on guide-xml
- <quantumsummers> I would be happy to assist with writing the converter using mwlib
- <spatz> there's no intention to convert everything to guidexml, the wiki is supposed to be separate from official documentation
- <spatz> both, just look at the content of g-w.com. not everything should be official documentation, and there's no way it can be maintained properly as official documentation
other documentations, tipps & tricks, special cases
<spatz> writing agendas for meetings, writing summaries
<spatz> drafting news items
<quantumsummers> event planning
- <quantumsummers> there is an events plugin, so meeting planning
- <quantumsummers> calendar in general, conferences
<fekepp> needed categories
- <quantumsummers> we could base it off of the existing doc structure
- <quantumsummers> perhaps off of system profiles
- <a3li> we basically can do one level of proper subclassing with namespaces
- <fekepp> as a wiki evolves also the structure evolves, but a wise choosen structure at the start which can be extended later could be useful
- <fekepp> beside categories there are indexes too
- <quantumsummers> we could have system, desktop, server, multimedia, security, hardware portals
- <ni1s_eee> yes, I would rather have that than ToC templates
- <quantumsummers> perhaps system, should be toolchain && system set
- <quantumsummers> that can be further dissected for hardened & non-hardened
- <Mousee> How are specific architectures handled then, in such a portal layout?
- <quantumsummers> each portal has outlines for the arches
- <quantumsummers> profiles still seem to me a good way of organizing thing as well
- <ni1s_eee> another idea is to have them per purpose or endeavour, i.e system, web server, spam bot, etc.
- <quantumsummers> seems too large a set, to use a per-endeavor schema
- <quantumsummers> the various projects could have portals, yes. or we have a portal for all projects, which are then represented in outlines
- <ni1s_eee> perhaps, i suspect there's going to be overlaps all over the place regardless of the portal 'mode'
- <Mousee> quantumsummers: the later idea sounds more "user friendly", in terms of navigation, at least
- <winterheart> Portals is maybe some replacement for existing projects on g.org
<amoskvin> multiple package versions
- <amoskvin> as in, when would information for old packages be cleaned off?
- <hwoarang> i guess this is up to the editor to keep it up to date
- <a3li> when the packages are no longer in the tree
- <ni1s_eee> on g-w.com its often killed when it leaves the tree and diffrent version are talked bout under diffrent headings
- <hwoarang> I wonder if we should delete the articles or just mark them as "old" "depcreated" or what. some ppl might have very old systems
- <Mousee> We could always duplicate a page and throw it into an "archive" of sorts. You'd still have to create a policy on when to flush the archives out though. It'd add a bit of extra work.
- <a3li> there's the revision history. people needing old info can see old revisions
- <fekepp> a3li: if there is a change for old versions you are not able to edit the historic version. i would suggest to write some hints at the end of an article if there are only small differences, or use an own wiki page only for the old version if there are big differences. in that case it must be explicited marked as old and maybe linked to the new version article. something like unmaintained of course possible too
- <a3li> fekepp: true, but old stuff shouldn't change. if it is old enough to be no longer included in the latest revision, there should be nothing to change.
- <fekepp> yes of course, i would suggest to keep only text about in-tree-versions
- <ni1s_eee> a page with with really old stuff could just have a warning at top
- <amoskvin> maybe add links, like "This article is for kernel 2.6.XX. For this kernel version 2.6.OLD, use this revision"
- <quantumsummers> at some point, it will need an "unmaintained" mark
- <hwoarang> deprecate or mark something as unmaintained it is more preferable than deleting an article
- <ni1s_eee> i've found that its easier to seperate bits after portage seperation of stable and testing, and not specific versioning
<winterheart> How about i18n then?
- <winterheart> fr.wiki.gentoo.org?
- <hwoarang> or wiki.gentoo.org/fr
- <quantumsummers> or wiki.gentoo.org/LANG
- <winterheart> then there must be wiki.gentoo.org/en first
- <Mousee> We could probably setup a "dropdown" list of sorts on the main (english) page so you could set your language that way, first, and then later set it in your profile. But I haven't ever dealth with mediawiki's i18n setup before, so not sure if that would work well even.
- <fekepp> for me it sounds good, english should be default, but wikis for other languages should be configured as soon as there are user willed to write in that language
- <fekepp> and pages cross-linked
<quantumsummers> QA on the articles
- <Mousee> Just need others to review/critique it etc
<a3li> policies
- <Mousee> I'll be happy to write up some draft policies if we need a push in that direction
<hwoarang> When are articles locked down?
- <a3li> edit wars.
- <a3li> or teams request pages to be locked (x11 if my memory serves)
- <fekepp> which leads to the question: who has the right to moderate, and how are they become moderators (and maybe somehow "controlled")
- <hwoarang> pages which describe core packages should be handled with care and restrict edit on them
- <hwoarang> i was thinking about a core team actually
- <hwoarang> consisted from 4-5 people. Like QA team or something
- <hwoarang> like you do have modearators and super moderators
- <hwoarang> this team will try to coordinate editors<->moderators<->gentoo projects
- <winterheart> it is admins and bureaucrats technically
Open questions
- <a3li> the project is lead-less atm
- <a3li> differ from g-w.com?
- <quantumsummers> Who gets write access?
- <quantumsummers> Who are the editors?
- <ni1s_eee> artsy stuff, design and layout, templete style?
- <ni1s_eee> a IRC commit bot like g-w.com | https://wiki.gentoo.org/wiki/Gentoo_Wiki:Meetings/2010-05-10 | CC-MAIN-2018-47 | en | refinedweb |
The class Filtered_exact<CT,ET> is a wrapper type for the number
type CT, with the difference
that all predicates are specialized such that they are guaranteed to be exact.
Speed is achieved via a filtering scheme using interval arithmetic (see
Section
). Here are the necessary requirements:
#include <CGAL/Filtered_exact.h>
The following member functions are used to access the numerical value for the different number types:
This type actually has additional parameters for experimental features. They will be documented when they will be considered stable, in a next release.
You might use at the beginning of your program a typedef as follows:
#include<CGAL/Filtered_exact.h> #include<CGAL/leda_real.h> #include<CGAL/double.h> typedef Filtered_exact<double, leda_real> NT;
Or if you are sure that the predicates involved do not use divisions nor square roots:
#include<CGAL/Filtered_exact.h> #include<CGAL/Gmpz.h> #include<CGAL/int.h> typedef Filtered_exact<int, Gmpz> NT;
And if you know that the double variables contain integer values, you can use:
#include<CGAL/Filtered_exact.h> #include<CGAL/Gmpz.h> #include<CGAL/double.h> typedef Filtered_exact<double, Gmpz> NT;
As a general rule, we advise the use of Filtered_exact<double, leda_real>.
The template definition of the low level predicates of CGAL are overloaded for the type Filtered_exact<CT,ET>.
For each predicate file, the overloaded code is generated automatically by the PERL script (scripts/filtered_predicates_generator.pl) that you can use for your own predicates. This script parses the template functions and generates the overloaded code the following way:
The low level template predicates of CGAL are in files named CGAL/predicates/kernel_ftC2.h (resp. ftC3), the script is used to produce the files CGAL/Arithmetic_filter/predicates/kernel_ftC2.h (resp. ftC3).
At the moment, only the predicates of the Cartesian and Simple_cartesian kernels are supported, as well as the power tests used by the regular triangulations. | http://www.cgal.org/Manual/3.1/doc_html/cgal_manual/NumberTypeSupport_ref/Class_Filtered_exact.html | crawl-001 | en | refinedweb |
#include <CGAL/Object.h>
Objects of type Object are normally created via the global function make_object.
Assignment of an object of type Object to an object of type T is done using assign.
There is also a member function to check whether an object of type Object contains an object.
{ Point_2< Cartesian<double> > point; Segment_2< Cartesian<double> > segment, segment_1, segment_2; std::cin >> segment_1 >> segment_2; Object obj = intersection(segment_1, segment_2); if (assign(point, obj)) { /* do something with point */ } else if ((assign(segment,(); } | http://www.cgal.org/Manual/3.1/doc_html/cgal_manual/Kernel_23_ref/Class_Object.html | crawl-001 | en | refinedweb |
See:
Description
A promise interface.
The ref_send API provides a language for expressing eventual control flow, where operations are only scheduled to happen later, instead of being executed immediately, as is the case with the normal flow of control in Java. To support eventual control flow, the ref_send API uses a different kind of reference, called a promise. A promise is a reference to an object which has yet to be determined. It's this flexibility that enables coding of an algorithm that manipulates values which won't be calculated until later, as is done in eventual control flow.
One way to think about promises is as a generalization of floating point
numbers. Floating point arithmetic has a special way of dealing with error
conditions, different from that used in integer arithmetic. For example, the
expression "
0 / 0" will throw an
ArithmeticException, which aborts the current flow of
control. In contrast, the expression "
0.0 / 0.0" does not throw
an exception, instead returning a special value called a
NaN and
allowing the current flow of control to continue.
Like a floating point number, a promise can represent either a normal
value or an error condition. A
Fulfilled promise
is a wrapper containing a normal Java reference. A
Rejected promise is a wrapper containing an
Exception specifying the details of the error condition.
Instead of throwing an exception, thus terminating the current flow of
control, an expression that returns a
Promise
can return a
Rejected promise to indicate an
error condition, or a
Fulfilled promise for a
normal result. Most often, an expression will only need to be coded to return
a
Fulfilled promise, so some syntactic sugar is
provided to facilitate construction. For example:
import static org.ref_send.promise.Fulfilled.ref; … private int balance; … public Promise<Integer> getBalance() { return ref(balance); } …
The static
ref() function takes
a normal Java reference and returns a corresponding
Promise.
Constructing a
Rejected promise is a little
more verbose. For example:
import static org.ref_send.promise.Fulfilled.ref; … private int balance; … public Promise<Integer> getBalance() { if (balance < 0) { return new Rejected<Integer>(new Overdraft()); } return ref(balance); } …
In floating point arithmetic, the
NaN value is contagious,
meaning that any other expression that uses it also returns
NaN.
For example, the expression "
0.0 / 0.0 + 1.0" also returns
NaN. An algorithm that uses floating point numbers can thus be
coded such that it always runs to completion and the error condition is
propagated through to the return value. A
Rejected promise can be used in a similar way by
turning it into a proxy, which
can then be used like any other object implementing the specified interface.
For example:
public interface Account { public Promise<Integer> getBalance(); } public class Customer { … public Account getSavings() { if (frozen) { return new Rejected<Account>(new Frozen())._(Account.class); } return savings; } } … final Customer client = … final Promise<Integer> current = client.getSavings().getBalance(); …
In the above code, the
current balance will be a
Rejected promise, with
reason
Frozen,
if the customer's savings account has been frozen by the bank. The
Rejected promise was originally produced by the
getSavings() method, but propagated through the
getBalance() invocation to the
current balance.
In addition to representing a normal or error condition, a promise is most
useful for representing a value which is yet to be determined. Such a promise
may be used to refer to a value which will be calculated later, based on
inputs which are not yet known. The
Eventual class supports creating this
kind of
deferred promise,
as well as doing
conditional
operations on promises.
Copyright 1998-2007 Waterken Inc. under the terms of the MIT X license. | http://waterken.sourceforge.net/javadoc/org/ref_send/promise/package-summary.html | crawl-001 | en | refinedweb |
Microblog Headlines
Aha! Okay,.
This is based on the User Control sample that goes with the video that hasn't yet been posted (you don't mind that, do you?) but will be in a couple days. I'll strip it down so as not to get hung up in the parts we don't care about.
First, let's look at the effects. When the application begins there is just a single button marked "Create".
Clicking on that button creates two text blocks and two user controls,
In the UserControl video the User Controls are quite nicer looking but here we're interested in their ability to self-destruct; hence the close button.
When you click the close button, not only does the User Control remove itself from its parent panel's children collection, it raises an event to which the page can subscribe so that it can clean up anything else that might be lingering about; in this case the text block (which is not part of the control). Thus, if I close the upper control, I want also to remove the "Event Address" prompt.
Here's how it all works. I assume we have the custom control already and I add the button to it. The key is to give that user control its own EventArgs type (to hold its unique ID ) and thus also give it a delegate and an event.
public partial class AddressUserControl : UserControl
{
public class AddressEventArgs : RoutedEventArgs
{
public object Tag { get; private set; }
public AddressEventArgs(object theTag)
{
this.Tag = theTag;
}
}
Notice both that AddressEventArgs is derived from RoutedEventArgs adn that it is nested within AddressUserControl (my User Control). It has a constructor and a public property called Tag (to parallel the idea that the control itself has a Tag of type object).
We now give the control a delegate and an event
public delegate void AddressEventHandler(object o, AddressEventArgs e);
public event AddressEventHandler Closed;
The closed event is what the page will subscribe to, in order to be alerted when the control is closed. This event is fired as part of the control's handling of the button's click event,
public AddressUserControl()
{
InitializeComponent();
Close.Click += new RoutedEventHandler(Close_Click);
}
void Close_Click(object sender, RoutedEventArgs e)
{
Panel parent = this.Parent as Panel;
if (parent != null)
{
parent.Children.Remove(this);
if (Closed != null && this.Tag != null)
{
Closed(this, new AddressEventArgs(this.Tag));
}
}
}
In the constructor we wire up the Close.Click; our internal handler for when the button is pressed. That handler does two things; it first makes sure we're in a panel, and if so, it removes us from the panel. It then checks to see if anyone has registered with our Closed event and that our Tag is not null; if so then it fires the Closed event to anyone who is interested.
All of the above falls into place when you see the control created dynamically. The XAML has nothing but the stack panel to hold the dynamically created controls,
Page.xaml
<UserControl x:Class="UserControlDemo.Page"
xmlns=""
xmlns:x=""
xmlns:
<StackPanel x:
<Button x:
</StackPanel>
</UserControl>
Here's how the control is created in Page.xaml.cs:
void Create_Click(object sender, RoutedEventArgs e)
{
TextBlock tb = new TextBlock();
tb.Text = "Event Address";
tb.FontFamily = new FontFamily("Verdana");
tb.FontSize = 24;
tb.HorizontalAlignment = HorizontalAlignment.Left;
tb.Margin = new Thickness(15, 0, 0, 0);
tb.Tag = "1";
MasterContainer.Children.Add(tb);
AddressUserControl auc = new AddressUserControl();
auc.Tag = "1";
auc.Closed += new AddressUserControl.AddressEventHandler(auc_Closed);
MasterContainer.Children.Add(auc);
tb = new TextBlock();
tb.Text = "Billing Address";
tb.FontFamily = new FontFamily("Verdana");
tb.FontSize = 24;
tb.HorizontalAlignment = HorizontalAlignment.Left;
tb.Margin = new Thickness(15, 0, 0, 0);
tb.Tag = "2";
MasterContainer.Children.Add(tb);
auc = new AddressUserControl();
auc.Tag = "2";
auc.Closed += new AddressUserControl.AddressEventHandler(auc_Closed);
MasterContainer.Children.Add(auc);
}
Unpacking this, we start by dynamically creating a textblock and adding it to the stack panel. We then create an AddressUserControl and assign it the same tag as the TextBlock and then we register with the user control's closed event, passing in the name of the method to be invoked when that event is raised (auc_closed). Finally, we add the user control to the stack panel.
This is repeated for the second text block and the second user control.
When the user clicks on the button, the user control takes care of removing itself from the stack panel, but it also fires the Closed event, which the page has now registered for. Per the registration, the method auc_Closed is called,
void auc_Closed(object o, AddressUserControl.AddressEventArgs e)
{
foreach (UIElement uie in MasterContainer.Children)
{
TextBlock tb = uie as TextBlock;
if (tb != null)
{
if (tb.Tag.ToString().Equals(e.Tag.ToString()))
{
MasterContainer.Children.Remove(uie);
break;
}
}
}
}
Auc_Closed iterates through the stack panel's children collection looking for textBlocks. If it finds one it checks the Tag against the tag in the AddressEventArgs (put there when the event was fired) and if they match, then it removes that text block from the stack panel as well.
Sweet.
I've put the entire source code Here
Pingback from » Dynamically Creating User Controls That Fire Events Back To You Available Domains:
>>Sweet<<
You took the words out of my mouth...
I'm stunned! In such a short time since my response, you cooked up all these codes and explanations. My hat off to you Jesse.
Let me first thank Nick and yourself for going over board to answer my question. I'm a bit embarrassed and didn't ask for all this.
Secondly, I have a question which I also discussed it with Nick;
Here in this code which is part of the Page, you register the auc.Closed Handler:
auc.Closed += new AddressUserControl.AddressEventHandler(auc_Closed);
And here in the UserControl:
parent.Children.Remove(this);
You remove the UserControl from it's parent.children.
My question is, just by removing the child from the parent, is no indication to GC to permanently "destroy" the UserControl (auc). The only way for GC to see that, is if auc=null. However, for GC to do that it has to make sure there is no reference to that object or it's internal. So shouldn't we also do the following as part of the closing?
auc.Closed -= new AddressUserControl.AddressEventHandler(auc_Closed);
Personally, I would remove the UserControl (auc) and setting it to null, in the parent Page when responding to Closed.
>> The only way for GC to see that, is if auc=null <<.
I don't know if having the registration on the event handler matters; I suppose it must because the page has to be able to get to the event even though the control has been removed from its children collection. I'd have to look into the order of operations, but it would be a simple matter to deregister it while cleaning up.
In any case, no problem; it was a fun project and a good blog topic. I'll come back to it with more time and make it into a video at some point.
>.<<
So, you're saying when we create a User Control and add the reference to Grid's collection the User Control gets created and then when we remove the reference from from the collection, the actual User Control gets destroyed.
And if I add that reference back to the collection, the user control object gets created again?
Oh wait, let me take back my previous question.
When we add a User Control to a collection (even though as part of the constructor, we pass the reference to that object to the collection (auc)), but from that point on, the collection now maintains (as you said) a new internal reference to the User Control object and then when we remove the object from collection, the internal collection reference and the User Control object are destroyed. Even though I may still have my original Reference auc hanging around, but that doesn't mean it's pointing to the old object any more. And if it is not, then GC will eventually remove the auc too.
Did I get right?
Pingback from Dew Drop - May 8, 2008 | Alvin Ashcraft's Morning Dew
>>Even though I may still have my original Reference auc hanging around, but that doesn't mean it's pointing to the old object any more. And if it is not, then GC will eventually remove the auc too.<<
You certainly did. Now, the question is whether I have the order of operations correct and what happens if there are no other references to that object... that is, if the GC cleans up the object is there a chance it won't fire its event, or is the fact that the page is registered with the event handler enough to keep it alive. For this, I need to make inquiries.
>>or is the fact that the page is registered with the event handler enough to keep it alive. For this, I need to make inquiries.<<
This is something that's been in my head. So I figured, just to be on the safe side, I'd do two extra step to ensure the object and it's original reference has been destroyed, one by deregetering the event and secondly setting the reference (auc) to null. Because if the user clicks on the create button again, a new object is created and is hooked up to reference and then gets added to children.
When I used to write in Delphi for Win32 (which didn't have it's own GC), we had to be very careful about leaving objects around.
Thanks to you, this blog has been very valuable [to me at least].
Pingback from Mind Gravy » Blog Archive » links for 2008-05-10
Pingback from Mind Gravy ?? Blog Archive ?? links for 2008-05-10 | My Geek Solutions
Ben,
I double checked, you do not have to take extra steps either to ensure that the parent page will receive the notification, nor to ensure that the control will be destroyed; simply fire the event and remove the control from the parent's collection and it will work as it should. | http://silverlight.net/blogs/jesseliberty/archive/2008/05/07/dynamically-creating-user-controls-that-fire-events-back-to-you.aspx | crawl-001 | en | refinedweb |
By Doug Tidwell
Price: $39.95 USD
£28.50 GBP
Cover | Table of Contents | Colophon
<td>12304</td> download the latest stable build of the code. (If you're feeling brave, feel free to download last night's build instead.)
CLASSPATH. The three files include the .jar file for the Xerces parser, the .jar file for the Xalan stylesheet engine itself, and the .jar file for the Bean Scripting Framework. As of this writing, the .jar files are named xerces.jar, xalan.jar, and bsf.jar.
java org.apache.xalan.xslt.Process
java org.apache.xalan.xslt.Process =xslproc options: -IN inputXMLURL [-XSL XSLTransformationURL] [-OUT outputURL] [-LXCIN compiledStylesheetFileNameIn] [-LXCOUT compiledStylesheetFileNameOutOut]
<>
java org.apache.xalan.xslt.Process -in greeting.xml -xsl greeting.xsl -out greeting.html
<html> <body> <h1> Hello, World! </h1> </body> </html>
<xsl:output>element that specifies HTML as the output format and two
<xsl:template>elements that specify how parts of our XML document should be transformed.
<xsl:stylesheet>element is typically the root element of an XSLT stylesheet.
<xsl:stylesheet xmlns:
<xsl:stylesheet>element defines the version of XSLT we're using, along with a definition of the
xslnamespace. To be compliant with the XSLT specification, your stylesheet should always begin with this element, coded exactly as shown here. Some stylesheet processors, notably Xalan, issue a warning message if your
<xsl:stylesheet>element doesn't have these two attributes with these two values. For all examples in this book, we'll start the stylesheet with this exact element, defining other namespaces as needed.
xml,
html, and
text. We're creating an HTML document, so HTML is the output method we want to use. In addition to these three methods, an XSLT processor is free to define its own output methods, so check your XSLT processor's documentation to see if you have any other options.
<xsl:output
method="xml", you can use
doctype-publicand
doctype-systemto define the public and system identifiers to be used in the the document type declaration. If you're using
method="xml"or
method="html", you can use the
indentattribute to control whether or not the output document is indented. The discussion of the
<xsl:output>element in Appendix A has all the details.
"/", the XPath expression for the document's root element.
<?xml version="1.0"?> <xsl:stylesheet <xsl:output <xsl:template <svg width="8cm" height="4cm"> <g> <defs> <radialGradient id="MyGradient" cx="4cm" cy="2cm" r="3cm" fx="4cm" fy="2cm"> <stop offset="0%" style="stop-color:red"/> <stop offset="50%" style="stop-color:blue"/> <stop offset="100%" style="stop-color:red"/> </radialGradient> </defs> <rect style="fill:url(#MyGradient); stroke:black" x="1cm" y="1cm" width="6cm" height="2cm"/> <text x="4cm" y="2.2cm" text- <xsl:apply-templates </text> </g> </svg> </xsl:template> <xsl:template <xsl:value-of </xsl:template> </xsl:stylesheet>
<greeting>elements similarly. We've gone over the basics of what stylesheets are and how they work..
<para>element, the
quantityattribute of the
<part-number>element, all
<first-name>elements that contain the text
"Joe", and many other variations. An XSLT stylesheet uses XPath expressions in the
matchand
selectattributes of various elements to indicate how a document should be transformed. In this chapter, we'll discuss XPath in all its glory.
$x*6) and Unix-like path expressions (such as
/sonnet/author/last-name). In addition to the basic syntax, XPath provides a set of useful functions that allow you to find out various things about the document.
<?xml version="1.0"?> <?xml-stylesheet <!ELEMENT auth:author (last-name,first-name,nationality, year-of-birth?,year-of-death?)> <!ELEMENT last-name (#PCDATA)> <!ELEMENT first-name (#PCDATA)> <!ELEMENT nationality (#PCDATA)> <!ELEMENT year-of-birth (#PCDATA)> <!ELEMENT year-of-death (#PCDATA)> <!ELEMENT title (#PCDATA)> <!ELEMENT lines (line,line,line,line, line,line,line,line, line,line,line,line, line,line)> <!ELEMENT line (#PCDATA)> ]> <!-- Default sonnet type is Shakespearean, the other allowable --> <!-- type is "Petrarchan." --> <sonnet type="Shakespearean"> <auth:author xmlns: <last-name>Shakespeare</last-name> <first-name>William</first-name> <nationality>British</nationality> <year-of-birth>1564</year-of-birth> <year-of-death>1616</year-of-death> </auth:author> <!-- Is there an official title for this sonnet? They're sometimes named after the first line. --> <title>Sonnet 130</title> <lines> <line>My mistress' eyes are nothing like the sun,</line> <line>Coral is far more red than her lips red.</line> <line>If snow be white, why then her breasts are dun,</line> <line>If hairs be wires, black wires grow on her head.</line> <line>I have seen roses damasked, red and white,</line> <line>But no such roses see I in her cheeks.</line> <line>And in some perfumes is there more delight</line> <line>Than in the breath that from my mistress reeks.</line> <line>I love to hear her speak, yet well I know</line> <line>That music hath a far more pleasing sound.</line> <line>I grant I never saw a goddess go,</line> <line>My mistress when she walks, treads on the ground.</line> <line>And yet, by Heaven, I think my love as rare</line> <line>As any she belied with false compare.</line> </lines> </sonnet> <!-- The title of Sting's 1987 album "Nothing like the sun" is --> <!-- from line 1 of this sonnet. -->
matchand
selectattributes of various XSLT elements. Those location paths described the parts of the XML document we wanted to work with. Most of the XPath expressions you'll use are location paths, and most of them are pretty simple. Before we dive in to the wonders of XPath, we need to discuss the context.
sonnetis a directory at the root level of the filesystem. The
sonnetdirectory would, in turn, contain directories named
auth:author,
title, and
lines. In this example, the context would be the current directory. If I go to a command line and execute a particular command (such as
dir *.js), the results I get vary depending on the current directory. Similarly, the results of evaluating an XPath expression will probably vary based on the context.
<li>elements in a given document. The context size refers to the number of
<li>items selected by that expression, and the context position refers to the position of the
<table>element like this:
<table border="{@size}"/>
@sizeis evaluated, and its value, whatever that happens to be, is inserted into the output tree as the value of the
borderattribute. Attribute value templates can be used in any literal result elements in your stylesheet (for HTML elements and other things that aren't part of the XSLT namespace, for example). You can also use attribute value templates in the following XSLT attributes:
nameand
namespaceattributes of the
<xsl:attribute>element
nameand
namespaceattributes of the
<xsl:element>element
format,
lang,
letter-value,
grouping-separator, and
grouping-sizeattributes of the
<xsl:number>element
nameattribute of the
<xsl:processing-instruction>element
lang,
data-type,
order, and
case-orderattributes of the
<xsl:sort>element
node-set
boolean
trueor
false. Be aware that the
trueor
falsestrings have no special meaning or value in XPath; see Section 4.2.1.2 in Chapter 4 for a more detailed discussion of boolean values.
number
integer(or
int) datatype does not exist in XPath and XSLT. Specifically, all numbers are implemented as IEEE 754 floating-point numbers, the same standard used by the Java
floatand
doubleprimitive types. In addition to ordinary numbers, there are five special values for numbers: positive and negative infinity, positive and negative zero, and
NaN, the special symbol for anything that is not a number.
string
namespacenodes.
<sonnet>element. The
<sonnet>element, in turn, contains two attributes and an
<auth:author>element. The
<auth:author>element contains a namespace node and an element. Be aware that this stylesheet has its limitations; if you throw a very large XML document at it, it will generate an HTML file with many levels of nested tables—probably more levels than your browser can handle.
<xsl:template <html> <head> <title>XPath view of your document</title> <style type="text/css"> <xsl:comment> span.literal { font-family: Courier, monospace; } </xsl:comment> </style> </head> <body> <h1>XPath view of your document</h1> <p>The structure of your document (as defined by the XPath standard) is outlined below.</p> <table cellspacing="5" cellpadding="2" border="0"> <tr> <td colspan="7"> <b>Node types:</b> </td> </tr> <tr> <td bgcolor="#99CCCC"><b>root</b></td> <td bgcolor="#CCCC99"><b>element</b></td> <td bgcolor="#FFFF99"><b>attribute</b></td> <td bgcolor="#FFCC99"><b>text</b></td> <td bgcolor="#CCCCFF"><b>comment</b></td> <td bgcolor="#99FF99"><b>processing instruction</b></td> <td bgcolor="#CC99CC"><b>namespace</b></td> </tr> </table> <br />
testevaluates to
false, then the contents of the
<xsl:if>element are ignored. (If you want to implement an if-then-else statement, check out the
<xsl:choose>element described in the next section.)
>instead of
>in the attribute value. You're always safe using
>here, although some XSLT processors process the greater-than sign correctly if you use
>instead. If you need to use the less-than operator (
<), you'll have to use the
<entity. The same holds true for the less-than-or-equal operator (
<=) and the greater-than-or-equal (
>=) operators. See Section B.4.2 for more information on this topic.
<xsl:if>element is pretty simple, but it's the first time we've had to deal with boolean values. These values will come up later, so we might as well discuss them here. Attributes like the
testattribute of the
<xsl:if>element convert whatever their values happen to be into a boolean value. If that boolean value is
<xsl:apply-templates>element to invoke other templates. You can think of this as a limited form of polymorphism; a single instruction is invoked a number of times, and the XSLT processor uses each node in the node-set to determine which
<xsl:template>to invoke. Most of the time, this is what we want. However, sometimes we want to invoke a particular template. XSLT allows us to do this with the
<xsl:call-template>element.
name.
<xsl:call-template>element to invoke the named template.
<xsl:template <!-- interesting stuff that generates the masthead goes here --> </xsl:template> ... <xsl:template <html> <head> <title><xsl:value-of</title> </head> <body> <xsl:call-template ...
<xsl:call-template>to invoke those templates and create the look and feel you want.
<xsl:import>or
<xsl:include>), you can create a set of stylesheets that generate the look and feel of the web site you want. If you decide to redesign your web site, redesign the stylesheets that define the common graphical and layout elements. Change those stylesheets, regenerate your web site, and voila! You will see an instantly updated web site. (See Chapter 9 for an example.)
<xsl:param>and
<xsl:with-param>elements allow you to pass parameters to a template. You can pass templates with either the
<call-template>element or the
<apply-templates>element; we'll discuss the details in this section.
<xsl:param>element. Here's an example of a template that defines two parameters:
<xsl:template <xsl:param <xsl:param <xsl:value-of </xsl:template>
widthand
height, and outputs their product.
selectattribute on the
<xsl:param>element:
<template name="addTableCell"> <xsl:param <xsl:param <xsl:param <td width="{$width}" bgcolor="{$bgColor}"> <xsl:apply-templates </td> </template>
bgColorand
widthare
'blue'and
150, respectively. If we invoke this template without specifying values for these parameters, the default values are used. Also notice that we generated the values of the
widthand
bgcolorattributes of the HTML
<td>tag with attribute value templates, the values in curly braces. For more information, see Section 3.3 in Chapter 3.
blue, but we didn't do it around the value
150. Without the single quotes around
blue, the XSLT processor assumes we want to select all the
<blue>elements in the current context, which is probably not what we want. The XSLT processor is clever enough to realize that the value
<xsl:variable>element, which allows you to store a value and associate it with a name.
<xsl:variable>element can be used in three ways. The simplest form of the element creates a new variable whose value is an empty string (
""). Here's how it looks:
<xsl:variable
x, whose value is an empty string. (Please hold your applause until the end of the section.)
selectattribute to the
<xsl:variable>element:
<xsl:variable
blueis used as the value of the variable. If we had left out the single quotes, this would mean the value of the variable is that of all the
<blue>elements in the current context, which definitely isn't what we want here.
35, Xalan, XT, and Saxon all assume that I mean
35as a literal value, not as an element name. Although this works with many XSLT processors, you're safer to put the single quotes around the numeric values anyway. A further aside: the value here is the string "35", although it can be converted to a number easily.
<xsl:variable>element is to put content inside it. Here's a brief example:
<xsl:variable <xsl:choose> <xsl:when <xsl:text>13</xsl:text> </xsl:when> <xsl:otherwise> <xsl:text>15</xsl:text> </xsl:otherwise> </xsl:choose> </xsl:variable>
(^)in front of all ampersands
(&)
mkdir xslt & chdir xslt | http://www.oreilly.com/catalog/9780596000530/toc.html | crawl-001 | en | refinedweb |
Cover |>), Visual Slick Edit, and others are low-cost windowed editors (primarily for MS-Windows) that have some amount of Java recognition built in, and the ability to compile from within the editor. TextPad has quite a number of file types that it recognizes, including batch files and shell scripts, C, C++, Java, JSP (see Section 18.7), JavaScript (a client-side web technology), MS-Windows and Unix platforms; see),. the open source version from. You will get two files. First is the source code, in a file called javacooksrc.jar, which you should unzip someplace convenient or wherever you like to keep source code. Second is a file called com-darwinsys-util.jar, which you need to set in your CLASSPATH (see Section 2.6) or JDKHOME/jre/lib/ext directory. The files are roughly organized in per-chapter directories, but there is a lot of overlap and cross-referencing. Because of this, I have prepared a cross-reference file named index-bychapter.html. There is also a mechanically generated file called index-byname.html, which you can use if you know the name of the file you want (and remember that Java source files almost always have the same name as the public class they contain). The canonical index file, index.html, links to both these files.
.Of course, not everybody likes typing those commands, so there is a makefile for the make utility. make is standard on Unix and readily available for MS-Windows from, for example, the GNUwin32 project (see). There is also a top-level makefile that visits the subdirectories and runs make in each of them. These makefiles have been tested with gmake (GNU make 3.79.1), BSD make (OpenBSD 2.8), and they should work with almost any reasonably modern make program or equivalent.
HelloWorldprogram from the current directory. The compiler is actually reading a source file, while the java command is running a class, a class that might be located someplace in your CLASSPATH (see Section 2.6). It is common for JDK users to use a batch script or command file to automate this. Mine is called jr, for Java compile and Run. The Unix version is jr, a shell script:
javac $1.java && java $*
$*gets expanded to include
$1and any other arguments. The MS-Windows version is jr.bat :
javac %1.java if errorlevel 1 goto norun java %1 %2 %3 %4 %5 %6 :norun
all: javac HelloWorld.java
#) are comments for the reader and are ignored by make :
# Makefile for Acme FlutterBox program. # Uncomment one of these compiler definitions: #JAVAC= javac JAVAC= jikes +E compile: $(JAVAC) *.java clean: @rm -f *.class. Like make, Ant uses a file or files -- written in XML -- listing what to do and, if necessary, how to do it. These rules are intended to be platform-independent, though you can of course write platform-specific recipes if necessary.
="../com-darwinsys-util, and in doing so inherits the functionality it needs to be viewable inside a web page in a Java-enabled web browser..
java.util.Dateclass had some serious limitations with regard to internationalization. Accordingly, many of the
Dateclass methods and constructors are marked "deprecated." To deprecate something means, according to my Concise Oxford Dictionary of Current English, to "express wish against or disapproval of." Java's developers are therefore expressing a wish that you no longer do things the old way.for will occur:
> at the end of this chapter, in Section 1.19. My
Debugclass also provides the string "debug". as part of also advocate running your tests almost every time you compile. This group of extremists has some very well-known leaders, including Gamma and Beck of Design Patterns fame. While I am not yet ready to unconditionally endorse all aspects of Extreme Programming, I certainly"); } }
^H^@Z^C^@^@^@P^H^@[^H^@n^H^@o^H^@p^H^@q^H^@r^H^@s^H^@t^H^@v^H^@y^H ^@z^H^@{^H^@}^H^@Ç^H^@ä^H^@à^H^@á^H^@ª^H^@º^G^@ç^G^@Æ^G^@ô^G^@ö^G^@ò^G^@Û^G^@ù^G ^@ÿ^G^@...^G^@Ü^G^@¢^G^@£^G^@¥ ^@^V^@@ ^@^\^@@ ^@!^@A ^@^Y^@B ^@^[^@C
obfuscator. An obfuscator takes your program and tries to make it obscure, so that decompilation either will not work or will not be useful.
catchclause that matches it. If none is found, the Java interpreter program catches it and prints a stack traceback showing all the method calls that got from the top of the program to the place where the exception was thrown. You can print this traceback yourself in any catch clause: the
Throwableclass has several methods called
printStackTrace( ).
C:\> set JAVA_COMPILER=NONE # DOS, MS-Windows setenv JAVA_COMPILER NONE # UNIX Csh export JAVA_COMPILER=NONE # UNIX Ksh, modern sh). What I didn't tell you, but what you might have realized by extension, is that the source examples from all the O'Reilly Java books are available there too: the Java Examples in a Nutshell book; the Java Swing book;, MS-Windows, MacOS, Palm, BeOS, or whatever. The JDK source kit includes the source of all this stuff.
Debugutility mentioned in Section 1.12.'ll discuss more fully in Section 7.8. wouldand
java.class.path,; } }
Class.forName("javax.swing.JButton");
System.properties.
-). And the PATH separator (
:) was also used as a "drive letter" delimiter, as in C: or A:. So we now have commands like this:
java -classpath \c:\ian\classes MyProg
trace,
strace,
truss,
ktrace) you would probably see the
Javaprogram
open(or
stat, or
access) the following files:
sun.boot.class.path = C:\JDK1.2\JRE\lib\rt.jar;C:\JDK1.2\JRE\lib\i18n.jar;C:\ JDK1.2\JRE\classes | http://www.oreilly.com/catalog/9780596001704/toc.html | crawl-001 | en | refinedweb |
Don't ever let anyone tell you that STL is nice and abstract and all that, but it just doesn't perform well. I poke holes in that myth in several places in this book (Extended STL, Volume 1: Collections and Iterators, Sections 17.1.1, 19.1.1, 27.9, 36.2.2, and 38.2.1), but this chapter blows it clean out of the water.
The subject matter of this chapter is Scatter/Gather I/O (also known as Scatter I/O), which means the exchange of data between application code and (usually kernel) I/O routines in the form of multiple blocks per action. Its primary intent is to allow client code to manipulate application-level packet headers separately from the payloads, but it can also be turned to other cunning ends, and can yield considerable performance benefits.
The cost is that it complicates the manipulation of data when logically contiguous data is spread over physically discontiguous blocks. To aid in handling such cases, we may use classes that (re)linearize the data back to an acceptably manipulable abstraction.
Since this is a book about extending STL, the abstraction
we will seek is that of an STL collection (Section 2.2) and its
associated iterator ranges. But as we will see, there is a cost in such
abstractions, so we will go further and examine how we might optimize
the transfer of information without diluting the power of the
abstraction. This will lead us into looking into the rules regarding
overriding functions in the
std namespace and how we may accommodate one
with the other.
Scatter/Gather I/O involves the exchange of information between an I/O API
and client code in a physically discontiguous form. In all cases I've come
across, this discontiguous form involves a number of separate memory blocks.
For example, the UNIX
readv() and
writev() functions
act like their
read() and
write() siblings, but,
rather than a pointer to a single area of memory and its size, they are passed
an array of
iovec structures:
struct iovec { void* iov_base; size_t iov_len; }; ssize_t readv(int fd, const struct iovec* vector, int count); ssize_t writev(int fd, const struct iovec* vector, int count);
The Windows Sockets API has an analogous structure and corresponding functions:
struct WSABUF { u_long len; char* buf; }; int WSARecv(SOCKET s , WSABUF* lpBuffers , DWORD dwBufferCount , . . . // And 4 more parameters); int WSASend(SOCKET s , WSABUF* lpBuffers , DWORD dwBufferCount , . . . // And 4 more parameters); int WSARecvFrom(SOCKET s , WSABUF* lpBuffers , DWORD dwBufferCount , . . . // And 6 more parameters); int WSASendTo( SOCKET s , WSABUF* lpBuffers , DWORD dwBufferCount , . . . // And 6 more parameters);
You might wonder why people would want to perform I/O in such a fashion, given the obvious complication to client code. Well, if your file or network data has a fixed format, you can read one or more records/packets in or out without any need to move, reformat, or coalesce them. This can be quite a convenience. Similarly, if your records/packets have variable format but a fixed-size header, you can read/write the header directly to/from a matching structure and treat the rest as an opaque variable-size blob. And there's a third reason: performance. I once created a network server architecture using Scatter/Gather I/O that used a multithreaded nonlocking memory allocation scheme. (Suffice to say, it was rather nippy.)
But however much Scatter/Gather I/O may help in terms of performance, when dealing with variable-length records/packets, or those whose payloads contain elements that are variable-length, the client code is complicated, usually low on transparency, and bug-prone. An efficient abstraction is needed.
The challenge with Scatter/Gather I/O is that using memory scattered over multiple blocks is not a trivial matter. On projects (on Windows platforms) in the 1990s, I tended to use a custom COM stream implementation from my company's proprietary libraries, which was implemented for a different task some years previously. Permit me to talk about the COM stream architecture for a moment. (I know I promised in the last chapter there would be no more COM, but there is a point to this, even for UNIX diehards. Trust me, I'm a doctor!)
A COM stream is an abstraction over an underlying storage medium having
much in common with the file abstractions we're used to. Essentially, it
has access to the underlying medium and defines a current point within
its logical extent. A stream object exhibits the
IStream interface
(shown in abbreviated form in Listing 31.1), which contains a number of
methods, including
Seek(),
SetSize(),
Stat(),
and
Clone().
There are also methods for acquiring exclusive access to regions of the
underlying medium. The
IStream interface derives from
ISequentialStream (also shown in Listing 31.1), which defines the
two methods
Read() and
Write().You can implement a
stream for a particular underlying medium directly by deriving from
IStream and
providing suitable definitions for its methods.
Listing 31.1 Definition of the
ISequentialStream and
IStream Interfaces
interface ISequentialStream : public IUnknown { virtual HRESULT Read(void* p, ULONG n, ULONG* numRead) = 0; virtual HRESULT Write(void const* p, ULONG n, ULONG* numWritten) = 0; }; interface IStream : public ISequentialStream { virtual HRESULT Seek(. . .) = 0; virtual HRESULT SetSize(. . .) = 0; virtual HRESULT CopyTo(. . .) = 0; virtual HRESULT Commit(. . .) = 0; virtual HRESULT Revert(. . .) = 0; virtual HRESULT LockRegion(. . .) = 0; virtual HRESULT UnlockRegion(. . .) = 0; virtual HRESULT Stat(. . .) = 0; virtual HRESULT Clone(. . .) = 0; };
COM defines another stream-related abstraction, in the form of the
ILockBytes interface (shown in abbreviated form in Listing 31.2). It
abstracts arbitrary underlying mediums as a logically contiguous array
of bytes. It does not maintain any positional state. Hence, it has
ReadAt() and
WriteAt() methods rather than
Read() and
Write().
Listing 31.2 Definition of the
ILockBytes Interface
interface ILockBytes : public IUnknown { virtual HRESULT ReadAt( ULARGE_INTEGER pos, void* p , ULONG n, ULONG* numRead) = 0; virtual HRESULT WriteAt(ULARGE_INTEGER pos, void const* p , ULONG n, ULONG* numWritten) = 0; virtual HRESULT Flush() = 0; virtual HRESULT SetSize(. . .) = 0; virtual HRESULT LockRegion(. . .) = 0; virtual HRESULT UnlockRegion(. . .) = 0; virtual HRESULT Stat(. . .) = 0; };
It is a relatively simple matter to implement a COM stream in terms
of (an object that exhibits) the
ILockBytes interface. All that's
required is an
ILockBytes* and a position. My company has just such an
entity, accessible via the
CreateStreamOnLockBytes() function:
HRESULT CreateStreamOnLockBytes(ILockBytes* plb, unsigned flags , IStream** ppstm);
Obviously, the next question is, "How do we get hold of an
ILockBytes
object?" Again, there's a function for that,
CreateLockBytesOnMemory():
HRESULT CreateLockBytesOnMemory(void* pv , size_t si , unsigned flags , void* arena , ILockBytes** pplb);
This supports a whole host of memory scenarios, including using a fixed
buffer, using Windows "global" memory, using a COM allocator
(
IAllocator), and so on. One of the many flags is
SYCLBOMF_FIXED_ARRAY,
which indicates that
pv points to an array of
MemLockBytesBlock
structures:
struct MemLockBytesBlock { size_t cb; void* pv; };
I'm not going to bang on about this much more, as hindsight is a harsh
judge of things such as opaque pointers whose meanings are moderated by
flags. The point I want to get across about this stuff is that I was
able to take a set of memory blocks containing the scattered packet
contents and get back an
IStream pointer from which the packet
information can be extracted in a logical and linear manner. Such code
takes the following simple and reasonably transparent form. (Error
handling is elided for brevity.) The
ref_ptr instances are used to
ensure that the reference counts are managed irrespective of any early
returns and/or exceptions.
std::vector<WSABUF> blocks = . . . size_t payloadSize = . . . ILockBytes* plb; IStream* pstm; SynesisCom::CreateLockBytesOnMemory(&blocks[1], payloadSize , SYCLBOMF_FIXED_ARRAY | . . ., NULL, &plb); stlsoft::ref_ptr<ILockBytes> lb(pbl, false); // false "eats" the ref SynesisCom::CreateStreamOnLockBytes(plb, 0, &pstm); stlsoft::ref_ptr<IStream> stm(pstm, false); // false "eats" the ref . . . // Pass off stm to higher-layer processing
The stream can then be wrapped by a byte-order-aware Instance Adaptor class that works in partnership with a message object Factory, to complete the mechanism for efficient translation from TCP packet stream segments to instances of higher-level protocol (C++) objects. The high efficiencies obtainable by such a scheme result from there being no allocations of, and no copying into, memory that does not constitute part of the final translated message object instances.
This is a powerful basis for a communications server model, one that I've used several times, albeit in different guises. In the case described earlier, a number of characteristics of the approach might incline you to search, as I have done, for better, less technology-specific solutions.
First, the major downside of the described mechanism is that, being COM, the server code is effectively Windows-specific. Second, many developers (incorrectly) consider COM, as they (equally incorrectly) do C++ and STL, to be intrinsically inefficient, and it can be hard to disabuse them of that notion even with hard facts. Finally, add in the type-unsafe opaque pointers and the fact that the stream and lock-bytes classes were hidden proprietary implementations, and it all leaves something to be desired.
platformstl::scatter_slice_sequence—A Teaser Trailer
An alternate representation is to be found in a new, and still
evolving, component in the PlatformSTL subproject:
scatter_slice_sequence. This Facade class template maintains an array of
slice structures describing a set of I/O buffers and provides methods
for invoking native read/write functions on the set of buffers, in
addition to providing STL collection access (in the form of
begin() and
end() methods). The class works with both
iovec and WSABUF by
abstracting their features with attribute shims (Section 9.2.1)
get_scatter_slice_size,
get_scatter_slice_ptr, and
get_scatter_slice_size_member_ptr, shown in Listing 31.3.
Listing 31.3 Attribute Shims for the
iovec and WSABUF Structures
#if defined(PLATFORMSTL_OS_IS_UNIX) inline void* const get_scatter_slice_ptr(struct iovec const& ss) { return ss.iov_base; } inline void*& get_scatter_slice_ptr(struct iovec& ss); inline size_t get_scatter_slice_size(struct iovec const& ss) { return static_cast<size_t>(ss.iov_len); } inline size_t& get_scatter_slice_size(struct iovec& ss); inline size_t iovec::* get_scatter_slice_size_member_ptr(struct iovec const*) { return &iovec::iov_len; } #elif defined(PLATFORMSTL_OS_IS_WIN32) inline void const* get_scatter_slice_ptr(WSABUF const& ss) { return ss.buf; } inline void*& get_scatter_slice_ptr(WSABUF& ss); inline size_t get_scatter_slice_size(WSABUF const& ss) { return static_cast<size_t>(ss.len); } inline size_t& get_scatter_slice_size(WSABUF& ss); inline u_long WSABUF::* get_scatter_slice_size_member_ptr(WSABUF const*) { return &WSABUF::len; } #endif /* operating system */
scatter_slice_sequence currently provides for
readv()/
writev() on UNIX and
WSARecv()/
WSASend() and
WSARecvFrom()/
WSASendTo() on Windows. Listing 31.4 shows an
example that uses an
iovec specialization of the class template to read the
contents from one file descriptor into a number of buffers, processes the
content in an STL kind of way, and then writes the converted contents to
another file descriptor.
Listing 31.4 Example Use of
scatter_slice_sequence with
readv() and
writev()
int fs = . . . // Opened for read int fd = . . . // Opened for write for(;;) { const size_t BUFF_SIZE = 100; const size_t MAX_BUFFS = 10; char buffers[MAX_BUFFS][BUFF_SIZE]; const size_t numBuffers = rand() % MAX_BUFFS; // Declare an instance with arity of numBuffers platformstl::scatter_slice_sequence<iovec> sss(numBuffers); // Set up each slice in the sequence, which may be of // different sizes in reality { for(size_t i = 0; i < numBuffers; ++i) { sss.set_slice(i, &buffers[i][0], sizeof(buffers[i])); }} if(0 != numBuffers) // In real scenario, might get 0 buffers { size_t n = sss.read(::readv, fs); // Read from fs using ::readv() if(0 == n) { break; } // "Process" the contents std::transform( sss.payload().begin(), sss.payload().begin() + n , sss.payload().begin(), ::toupper); sss.write(::writev, fd, n); // Write n to fd using ::writev() } }
Obviously this example is very stripped down, but I trust your abilities to
imagine that
fs and
fd might represent sockets, that
the buffers shown here would be obtained from a shared memory arena (which may
not have any to spare at a given time), and that the "processing" would be
something less trivial than setting the contents to uppercase before
(re)transmission.
The sequence's payload (available via
payload()) provides random
access iterators over the contents of its memory blocks. Just as with
std::deque, it's important to realize that these iterators are not
contiguous (Section 2.3.6)! Pointer arithmetic on the iterators is a
constant-time operation, but iterating the range is not a linear-time
operation. The
scatter_slice_sequence is still a work in progress, and
its interface might evolve further before it's released into the
PlatformSTL subproject proper. (It is on the CD.) But what it clearly
provides is the ability to represent a given set of data blocks as an
STL sequence (Section 2.2), along with adaptor methods
read() and
write() that take a file/socket handle and a Scatter/Gather I/O function
and apply them to the blocks. This is the logical equivalent of the COM
stream object created via
CreateLockBytesOnMemory() +
SYCLBOMF_FIXED_ARRAY and
CreateStreamOnLockBytes(). The one apparent
disadvantage is that its contents have to be traversed one element at a
time, something that may have performance costs. (Hint: This is a clue
about something interesting to follow. . . .)
Adapting ACE_Message_Queue
The main subject of this chapter covers my efforts to adapt the
memory queues of the Adaptive Communications Environment (ACE) to the
STL collection concept, to serve the requirements of one of my recent
commercial networking projects, a middleware routing service. To use the
ACE Reactor framework, you derive event handler classes from
ACE_Event_Handler (overriding the requisite I/O event handler methods)
and register instances of them with the program's reactor singleton.
When the reactor encounters an I/O event of a type for which an instance
is registered, it invokes the appropriate callback method on the
handler. When used with TCP, the Internet's stream-oriented transport
protocol, the common idiom is to handle received data into instances of
ACE_Message_Block and queue them in an instance of (a specialization of)
the class template
ACE_Message_Queue, as shown (with error handling
omitted for brevity) in Listing 31.5.
Listing 31.5 A Simple Event Handler for the ACE Reactor Framework
class SimpleTCPReceiver : public ACE_Event_Handler { . . . virtual int handle_input(ACE_HANDLE h) { const size_t BLOCK_SIZE = 1024; ACE_Message_Block* mb = new ACE_Message_Block(BLOCK_SIZE); ssize_t n = m_peer.recv(mb->base(), mb->size()); mb->wr_ptr(n); m_mq.enqueue_tail(mb); return 0; } . . . private: // Member Variables ACE_SOCK_Stream m_peer; // Connection socket ACE_Message_Queue<ACE_SYNCH_USE> m_mq; // Message queue };
The
ACE_Message_Queue class acts as an ordered repository for all
blocks, thereby faithfully representing the data stream. But
ACE_Message_Queue is strictly a container of blocks; it does not attempt
to provide any kind of abstracted access to the contents of the blocks.
To access the contents of a message queue, you can use the associated
class template,
ACE_Message_Queue_Iterator, to iterate the blocks, as
shown in Listing 31.6. The
ACE_Message_Queue_Iterator::next() method
returns a nonzero result and sets the given pointer reference to the
block if a next block is available; otherwise, it returns 0. The
advance() method moves the current enumeration point to the next block
(if any).
Listing 31.6 Example Code That Uses
ACE_Message_Queue_Iterator
void SimpleTCPReceiver::ProcessQueue() { ACE_Message_Queue_Iterator<ACE_NULL_SYNCH> mqi(m_mq); ACE_Message_Block* mb; for(; mqi.next(mb); mqi.advance()) { { for(size_t i = 0; i < mb->length(); ++i) { printf("%c", i[mb->rd_ptr()]; }} mb->rd_ptr(mb->length()); // Advance read ptr to "exhaust" block } }
Obviously, if you want to process a set of blocks as a logically contiguous single block, it's going to be a bit messy. We need a sequence to flatten the stream for STL manipulation.
acestl::message_queue_sequence, Version 1
The ACESTL subproject contains a number of components for adapting ACE
to STL (and for making ACE components easier to use).
acestl::message_queue_sequence is a class template that acts as an
Instance Adaptor for the
ACE_Message_Queue. Since this component's got
quite a kick, I'm going to play my usual author's dirty trick of
presenting you with a progression of implementations. Thankfully, unlike
some material covered in other chapters, the changes between the
versions are entirely additive, which should help keep me under 40 pages
for this topic. Listing 31.7 shows the definition of the first version.
Listing 31.7 Definition of
message_queue_sequence
// In namespace acestl template <ACE_SYNCH_DECL> class message_queue_sequence { public: // Member Types typedef char value_type; typedef ACE_Message_Queue<ACE_SYNCH_USE> sequence_type; typedef message_queue_sequence<ACE_SYNCH_USE> class_type; typedef size_t size_type; class iterator; public: // Construction explicit message_queue_sequence(sequence_type> mq); public: // Iteration iterator begin(); iterator end(); public: // Attributes size_type size() const; bool empty() const; private: // Member Variables sequence_type& m_mq; private: // Not to be implemented message_queue_sequence(class_type const&); class_type& operator =(class_type const&); };
Given what we've seen with previous sequences, there's little here that
needs to be remarked on; the interesting stuff will be in the iterator class.
Note that the value type is
char, meaning that
size() returns the
number of bytes in the queue, and [
begin(),
end())
defines the range of bytes. No methods pertain to message blocks.
31.4.2
acestl::message_queue_sequence::iterator
Listing 31.8 shows the definition of the
acestl::message_queue_sequence::iterator class. Again, a lot here should
be familiar based on prior experience. (I hope by now you're building a
familiarity with these techniques, recognizing their similarities and
identifying the differences between the different cases of their
application. Naturally, it's my hope that this stands you in great stead
for writing your own STL extensions.)
The iterator category is input
iterator (Section 1.3.1). The element reference category (Section 3.3)
is transient or higher; in fact, it's fixed, with the caveat that no
other code, within or without the defining thread, changes the contents
of the underlying message queue or its blocks (in which case it would be
invalidatable). The iterator is implemented in terms of a
shared_handle,
discussed shortly. I've not shown the canonical manipulation of the
shared_handle in the construction methods since we've seen it before in
other sequences (Sections 19.3 and 20.5).
Listing 31.8 Definition of
message_queue_sequence::iterator
class message_queue_sequence<. . .>::iterator : public std::iterator>std::input_iterator_tag , char, ptrdiff_t , char*, char& > { private: // Member Types friend class message_queue_sequence<ACE_SYNCH_USE>; typedef ACE_Message_Queue_Iterator<ACE_SYNCH_USE> mq_iterator_type; struct shared_handle; public: typedef iterator class_type; typedef char value_type; private: // Construction iterator(sequence_type& mq) : m_handle(new shared_handle(mq)) {} public: iterator() : m_handle(NULL) {} iterator(class_type const& rhs); // Share handle via AddRef() (+) ~iterator() throw(); // Call Release() (-) if non-NULL class_type& operator =(class_type const& rhs); // (+) new; (-) old public: // Input Iteration class_type& operator ++() { ACESTL_ASSERT(NULL != m_handle); if(!m_handle->advance()) { m_handle->Release(); m_handle = NULL; } return *this; } class_type operator ++(int); // Canonical implementation value_type& operator *() { ACESTL_ASSERT(NULL != m_handle); return m_handle->current(); } value_type operator *() const { ACESTL_ASSERT(NULL != m_handle); return m_handle->current(); } bool equal(class_type const& rhs) const { return lhs.is_end_point() == rhs.is_end_point(); } private: // Implementation bool is_end_point() const { return NULL == m_handle || m_handle->is_end_point(); } private: // Member Variables shared_handle* m_handle; };
The iteration methods are implemented in terms of the methods of
shared_handle. The endpoint state is identified by either a NULL handle
or a handle that identifies itself as being at the endpoint. The
preincrement operator advances by calling
shared_handle::advance() and
releases the handle when
advance() returns false. The dereference
operator overloads are implemented in terms of the
current() overloads
of
shared_handle. Note that the mutating (non-const) overload returns a
mutating reference, whereas the nonmutating (const) overload returns a
char by value.
The main action lies in the
shared_handle. Listing 31.9 shows its
implementation. I'm going to invoke another low author tactic now and not
explain the fine detail of the algorithm. I'll leave it as an exercise for you
to figure out. To be fair, though, I will note that it skips empty
ACE_Message_Block instances, which is how its endpoint condition can be so
simple.
Listing 31.9 Definition of
shared_handle
struct message_queue_sequence<. . .>::iterator::shared_handle { public: // Member Types typedef shared_handle class_type; public: // Member Variables mq_iterator_type m_mqi; ACE_Message_Block* m_entry; size_t m_entryLength; size_t m_entryIndex; private: sint32_t m_refCount; public: // Construction explicit shared_handle(sequence_type& mq) : m_mqi(mq) , m_entry(NULL) , m_entryLength(0) , m_entryIndex(0) , m_refCount(1) { if(m_mqi.next(m_entry)) { for(;;) { if(0 != (m_entryLength = m_entry->length())) { break; } else if(NULL == (m_entry = nextEntry())) { break; } } } } private: ~shared_handle() throw() { ACESTL_MESSAGE_ASSERT("Shared handle destroyed with outstanding references!", 0 == m_refCount); } public: sint32_t AddRef(); // Canonical implementation sint32_t Release(); // Canonical implementation public: // Iteration Methods bool is_end_point() const { return m_entryIndex == m_entryLength; } char& current() { ACESTL_ASSERT(NULL != m_entry); ACESTL_ASSERT(m_entryIndex != m_entryLength); return m_entryIndex[m_entry->rd_ptr()]; } char current() const { ACESTL_ASSERT(NULL != m_entry); ACESTL_ASSERT(m_entryIndex != m_entryLength); return m_entryIndex[m_entry->rd_ptr()]; } bool advance() { ACESTL_MESSAGE_ASSERT("Invalid index", m_entryIndex < m_entryLength); if(++m_entryIndex == m_entryLength) { m_entryIndex = 0; for(;;) { if(NULL == (m_entry = nextEntry())) { return false; } else if(0 != (m_entryLength = m_entry->length())) { break; } } } return true; } private: // Implementation ACE_Message_Block* nextEntry() { ACE_Message_Block* entry = NULL; return m_mqi.advance() ? (m_mqi.next(entry), entry) : NULL; } private: // Not to be implemented shared_handle(class_type const&); class_type& operator =(class_type const&); };
I hope you'll agree that being able to treat a set of
ACE_Message_Block communications stream fragments as a logically
contiguous stream eases considerably the task of dealing with such
streams. In our middleware project, this enabled us to unmarshal the
high-level protocol messages as objects, by the combination of another
Instance Adaptor and a message Factory.
Permit me a small digression about the protocol. Standards Australia defines a protocol for the exchange of electronic payment implementation, called AS2805. This is a very flexible protocol, with a significant drawback. The messages do not contain any message-size information in their fixed format header, and each message can contain a variable number of fields, some of which are of variable size. This means that you can't know whether all of a message has been received from the peer until the message has been fully parsed. Consequently, being able to easily and efficiently deconstruct a message is critical.
This was achieved by applying another Instance Adaptor to the
acestl::message_queue_sequence instance to make it be treated as a
streamable object, similar to how the logically contiguous
ILockBytes
instance was turned into a streamable object with
CreateStreamFromLockBytes(). The streamable object is used by the
message Factory, which understands how to read the message type from the
packet header and then uses that type to dispatch the appropriate
unmarshaling function to read the remainder of the message contents and
create a corresponding message instance. If insufficient data is
available, the Factory invocation fails benignly, and the queue contents
remain unchanged until the next I/O event. Only when a full message is
retrieved is the requisite portion of the head of the queue removed and
released back to the memory cache. If the message parsing fails for bad
contents, the peer has sent bad data, and the connection is torn down.
So, we've got some nice abstractions going—some of which have seen
service in large-scale deployments, which is never to be sniffed at—but
you may have been receiving portents from your skeptical
subconsciousness. We're manipulating a number of blocks of contiguous
memory, but the nature of the
acestl::message_queue_sequence::iterator
means that each byte in those blocks must be processed one at a time.
This has to have an efficiency cost. And it does.
Before we proceed, I want to trot out the received wisdom on performance,
namely, to avoid premature optimization. More precisely, although it's
something of a simplification, you've usually got only one bottleneck in a
system at a time. Inefficiencies usually make themselves felt only when a
greater inefficiency has been resolved. In none of the applications in which
I've used
message_queue_sequence has it been associated with the bottleneck.
However, I tend to be a little efficiency obsessed—What's that? You've
noticed?—and since STLSoft is an open-source publicly available library, the
message_queue_sequence component might find itself being a bottleneck in
someone else's project, and that would never do. So, I want to show you how to
have your cake and eat it too, that is, how to linearize data block contents in
an STL sequence and yet have block-like efficiency.
acestl::message_queue_sequence, Version 2
First, we need to identify when a block transfer is valid. The ACE
libraries define opaque memory in terms of
char, that is, the pointers are
either
char const* or
char*, presumably to make pointer arithmetic
straightforward. I don't hold with this strategy, but that's irrelevant; it is
what it is. When transferring contents between STL iterators of type
char* or
char const* and
acestl::message_queue_sequence::iterator, we want the
sequence's contents to be block transferred. In other words, the following code
should result in 2 calls to
memcpy(), rather than 120 calls to
shared_handle::advance():
ACE_Message_Queue<ACE_NULL_SYNCH> mq; // 2 message blocks, 120 bytes char results[120]; acestl::message_queue_sequence<ACE_NULL_SYNCH> mqs(mq); std::copy(mqs.begin(), mqs.end(), &results[120]);
We want the same efficiency when transferring from contiguous memory into the message queue sequence, as in the following:
std::copy(&results[0], &results[0] + STLSOFT_NUM_ELEMENTS(results) , mqs.begin());
The first thing we need for this is to define block copy operations for
message_queue_sequence. Listing 31.10 shows the definition of two new
static methods for the sequence class, overloads named
fast_copy().
Listing 31.10 Definition of the
message_queue_sequence Algorithm Worker Methods
template <ACE_SYNCH_DECL> class message_queue_sequence { . . . static char* fast_copy(iterator from, iterator to, char* o) { #if defined(ACESTL_MQS_NO_FAST_COPY_TO) for(; from != to; ++from, ++o) { *o = *from; } #else /* ? ACESTL_MQS_NO_FAST_COPY_TO */ from.fast_copy(to, o); #endif /* ACESTL_MQS_NO_FAST_COPY_TO */ return o; } static iterator fast_copy(char const* from, char const* to , iterator o) { #if defined(ACESTL_MQS_NO_FAST_COPY_FROM) for(;from != to; ++from, ++o) { *o = *from; } #else /* ? ACESTL_MQS_NO_FAST_COPY_FROM */ o.fast_copy(from, to); #endif /* ACESTL_MQS_NO_FAST_COPY_FROM */ return o; } . . .
I've deliberately left in the
#defines that suppress the block operations,
just to illustrate in code what the alternative, default, behavior is. These
#defines also facilitate tests with and without block copying enabled. (Anyone
sniff a performance test in the near future?) The block mode code uses new
iterator::fast_copy() methods, shown in Listing 31.11.
Listing 31.11 Definition of the iterator Algorithm Worker Methods
class message_queue_sequence<. . .>::iterator { . . . void fast_copy(char const* from, char const* to) { if(from != to) { ACESTL_ASSERT(NULL != m_handle); m_handle->fast_copy(from, to, static_cast<size_type>(to - from)); } } void fast_copy(class_type const& to, char* o) { if(*this != to) { ACESTL_ASSERT(NULL != m_handle); m_handle->fast_copy(to.m_handle, o); } }
Tantalizingly, these do very little beyond invoking the same-named new
methods of the
shared_handle class, shown in Listing 31.12. For both in and out
transfers, these methods calculate the appropriate portion of each block to be
read/written and effect the transfer with
memcpy().
Listing 31.12 Definition of the
shared_handle Algorithm Worker Methods
struct message_queue_sequence<. . .>::iterator::shared_handle { . . . void fast_copy(char const* from, char const* to, size_type n) { ACESTL_ASSERT(0 != n); ACESTL_ASSERT(from != to); if(0 != n) { size_type n1 = m_entryLength - m_entryIndex; if(n <= n1) { ::memcpy(&m_entryIndex[m_entry->rd_ptr()], from, n); } else { ::memcpy(&m_entryIndex[m_entry->rd_ptr()], from, n1); from += n1; m_entry = nextEntry(); ACESTL_ASSERT(NULL != m_entry); fast_copy(from, to, n - n1); } } } void fast_copy(class_type const* to, char* o) { size_type n1 = m_entryLength - m_entryIndex; if( NULL != to && m_entry == to ->m_entry) { ::memcpy(o, &m_entryIndex[m_entry->rd_ptr()], n1); } else { ::memcpy(o, &m_entryIndex[m_entry->rd_ptr()], n1); o += n1; m_entry = nextEntry(); if(NULL != m_entry) { fast_copy(to, o); } } } . . .
So far so good, but no one wants to write client code such as the following:
ACE_Message_Queue<ACE_NULL_SYNCH> mq; // 2 locks; total 120 bytes char results[120]; acestl::message_queue_sequence<ACE_NULL_SYNCH> mqs(mq); acestl::message_queue_sequence<ACE_NULL_SYNCH>::fast_copy(mqs.begin() , mqs.end(), &results[120]);
We want the invocation
std::copy() to pick up our fast version
automatically when the other iterator type is
char (const)*. For this we need
to specialize
std::copy().
For a number of reasons, defining partial template specializations in the
std namespace is prohibited. This proves inconvenient in two ways. First, and
most importantly, because
message_queue_sequence is a template, we want to
cater to all its specializations and so would want to do something like that
shown in Listing 31.13. (For brevity I'm omitting the namespace qualification
acestl from each
message_queue_sequence<S>::iterator shown in this
listing and the next.)
Listing 31.13 Illegal Specializations of
std::copy()
// In namespace std template <typename S> char* copy( typename message_queue_sequence<S>::iterator from , typename message_queue_sequence<S>::iterator to, char* o) { return message_queue_sequence<S>::fast_copy(from, to, o); } template <typename S> typename message_queue_sequence<S>::iterator copy(char* from, char* to , typename message_queue_sequence<S>::iterator o) { return message_queue_sequence<S>::fast_copy(from, to, o); }
Since we may not do this, we are forced to anticipate the
specializations of
message_queue_sequence and (fully) specialize
std::copy() accordingly, as in Listing 31.14. Note that separate
char*
and
char const* specializations are required for the
char-pointer-to-iterator block transfer, to ensure that copying from
char* and
char const* uses the optimization.
Listing 31.14 Legal Specializations of
std::copy()
// In namespace std template <> char* copy( typename message_queue_sequence<ACE_NULL_SYNCH>::iterator from , typename message_queue_sequence<ACE_NULL_SYNCH>::iterator to , char* o) { return message_queue_sequence<ACE_NULL_SYNCH>::fast_copy(from, to, o); } . . . // Same as above, but for ACE_MT_SYNCH template <> typename message_queue_sequence<ACE_NULL_SYNCH>::iterator copy(char* from , char* to , typename message_queue_sequence<ACE_NULL_SYNCH>::iterator o) { return message_queue_sequence<ACE_NULL_SYNCH>::fast_copy(from, to, o); } . . . // Same as above, but for ACE_MT_SYNCH template <> typename message_queue_sequence<ACE_NULL_SYNCH>::iterator copy(char const* from , char const* to , typename message_queue_sequence<ACE_NULL_SYNCH>::iterator o) { return message_queue_sequence<ACE_NULL_SYNCH>::fast_copy(from, to, o); } . . . // Same as above, but for ACE_MT_SYNCH
Fortunately, ACE offers only two specializations, in the form of
ACE_NULL_SYNCH (a
#define for
ACE_Null_Mutex,
ACE_Null_Condition) and
ACE_MT_SYNCH (a
#define for
ACE_Thread_Mutex,
ACE_Condition_Thread_Mutex),
yielding only six specializations.
But there's more. If, like me, you avoid like the plague the use of
char as a substitute for C++'s missing byte type, you probably
instead use
signed
char or
unsigned
char, both of which are distinct types from
char when it comes to
overload resolution (and template resolution). Passing these to an invocation
of
std::copy() will not succeed in invoking the optimized transfer
methods. So, with heads bowed low, we need to provide another six
specializations for
signed char and six for
unsigned char, yielding a total of
eighteen specializations, for what we'd like to have been two, or at most
three, were we able to partially specialize in the
std namespace.
Thankfully, all this effort is worth the payoff. Before we look at
that, I just want to answer one question you might be pondering: Why
only
std::copy()? In principle there is no reason to not specialize all
possible standard algorithms. The reason I've not done so is twofold.
First, the sheer effort in doing so would be onerous, to say the least;
to avoid a lot of manual plugging around we'd be pragmatically bound to
use macros, and who likes macros? The second reason is more, well,
reasoned. The whole reason for this optimization is to facilitate
high-speed interpretation of data in its original memory block and
high-speed exchange of data into new storage. In my experience, both of
these involve
std::copy(). I should admit one exception to this in our
middleware project that required
copy_n(). The
copy_n() algorithm was
overlooked for incorporation into the C++-98 standard (but will be
included in the next version) and so appears in STLSoft. There are
specializations of it, this time in the
stlsoft namespace, in the same
fashion as for
std::copy(). Hence, there are a total of 36 function
specializations in the <acestl/collections/message_queue_sequence.hpp>
header file.
Now that we've examined the optimization mechanism, we'd better make
sure it's worth the not inconsiderable effort. In order to demonstrate
the differences in performance between the optimized block copying
version and the original version, I used a test program that creates an
ACE_Message_Queue instance to which it adds a number of blocks, copies
the contents from a
char array using
std::copy(),copies them back to
another
char array (again with
std::copy()), and verifies that the
contents of the two arrays are identical. The number of blocks ranged
from 1 to 10. The block size ranged from 10 to 10,000. The times for
copying from the source char array to the sequence, and from the
sequence to the destination
char array, were taken separately, using the
PlatformSTL component
performance_counter. Each copying operation was
repeated 20,000 times, in order to obtain measurement resolution in
milliseconds. The code is shown in the extra material for this chapter
on the CD.
Table 31.1 shows a representative sample of the results, expressed as percentages of the time (in milliseconds) taken by the equivalent nonoptimized version. As we might expect, with the very small block size of 10, the difference is negligible. For a buffer size of 100, there's an advantage with the optimized form, but it's not stunning. However, when we get to the more realistic buffer sizes of 1,000 and 10,000, there's no competition-the optimized form is 40-50 times faster.
This chapter has looked at the features of Scatter/Gather I/O, whose APIs
present considerable challenges to STL adaptation. We've examined an
adaptation, in the form of the
scatter_slice_sequence component, and have seen
that such sequences must have genuinely random access iterators (i.e., not
contiguous iterators), for which the identity
&*it + 2 == &*(it + 2) does not
hold (see Section 2.3.6). Notwithstanding, we've seen how we can take advantage
of their partial contiguity in order to effect significant performance
improvements, something that is particularly important given their use in file
and/or socket I/O. With minimal sacrifice of the Principle of Transparency,
we've made big gains in the Principle of Composition (and also served the
Principle of Diversity along the way).
Have an opinion about scatter/gather I/O?Discuss this article in the Articles Forum topic, Gathering Scattered I/O in C++.
STLSoft
Pantheios
Extended STL
Imperfect C++
ACE | http://www.artima.com/cppsource/scattered_ioP.html | crawl-001 | en | refinedweb |
Related link:
No sales pitch here, but I just have to announce the launch of something I’ve spent over two years making:
the HostBaby Wizard
Related link:
No sales pitch here, but I just have to announce the launch of something I’ve spent over two years making:
the HostBaby Wizard
Related link:
I’ve been reading about viral marketing recently, so I tried to fit the phenomemon in with trends in social networks and Web Services.
Here’s a short script to remove multiple ip6fw rules sharing the same number.
for a in `ip6fw list $1 | cut –d ' ' –f 1` do ip6fw delete $1; echo "ip6fw: deleted rule $1"; done
Usage:
# sh rmrules.sh 123
Today’s post is a one-liner that saves me a few lines of PHP every time I do a people query.
I used to always select my clients’ or customers’ names from the database, then use PHP/Ruby/something to grab just the first word of the name.
From now on I’m doing it in the database directly, though it took a while to figure out how.
SELECT name, INITCAP(SPLIT_PART(name, ‘ ‘, 1)) AS firstname FROM clients
(Ok so it only took like 3 minutes to figure out but I’m posting it here so I don’t lose it.)
:-)
Related link:
This Bugtraq post questions if a 16-bit counter overflow was the cause of Comair’s recent computer failure that caused the cancellation of 1,100 flights on Christmas day. Quote from the post:
“…”.
And while we are at it, can someone please explain to me what a “worker keystroke error” is?! Here’s a quote from the article:
“A worker keystroke error grounded or delayed some American and US Airways flights for several hours in August.”
Give me a break! Sounds more like an “input validation” flaw in the software.
IPFW, IPFW2, and IP6FW allow the ruleset writer to add more than one rule with the same number. Here’s a command that lists rules with the same numbers:
IPFW/IPFW2:
$ ipfw show | cut -f 1 -d ’ ’ | uniq –d 00100 00243
IP6FW:
$ ip6fw show | cut -f 1 -d ’ ’ | uniq –d 00100 00243
In recent years, Open Source has become a relevant and strangely addictive force in IT. As the Internet age has dominated businesses and consumers with the same well oiled, yet clunky machine, Open Source has crept out of the dimly lit bedrooms occupied by toiling hackers and into the network rooms and ‘enterprise centric strategies’ of todays businesses. Open Source has not just become more acceptable, it has become more relevant.
Despite the over-production of rubber penguins and increasing technorati blogging about Open Source across the net, Open Source still has a long way to go in achieving the kind of acceptance and use we as enthusiasts optimistically predict. As bods familiar with the workings of an Open Source community, it makes reverent sense why one should use the software; freedom, stability, security, quality etc. The real challenge that faces us is that although you may have a clear view of why Open Source may be the right solution, expressing and communicating this message can be difficult at best. Advocacy is a concept riddled with theories, beliefs and opinions on how it should be practiced - how can you best advocate Open Source software?
I am going to be writing a series of new articles about advocacy. In these features, I am going to write about the different issues involved in advocating Open Source software, and attempt to beat a path towards best practice. For quite some time, I have been advocating how Open Source can be right for some people. This evangelism was first expressed through my efforts creating Linux UK (one of the first UK editorial sites about Linux), on through my work as a journalist and resulting in my current full time position at OpenAdvantage () as a professional Open Source evangelist. Although there is still much to learn, the body of experience I have collated myself and from other people can act as a useful map when choosing which path to head down. The goal in this game is to be as productive as possible with your advocacy; anyone can randomly advocate, but here you want results.
Before you set forth and explore the different avenues of advocacy, you need to step back and evaluate exactly what you are aiming to do. Sure, this nugget of advice sounds like one of those self-help audio cassette courses, but it is particularly important in the context of Open Source. The reason for this is that Open Source is first and foremost a culture, and like any other culture, it can be perceived and understood in inherently different ways. As an example, some people are inspired by Open Source due to the ethical and philosophical concepts, some are inspired by the technical benefits and some are plainly in it financial benefits. Even within these three loose groups, there are variations in tone and colour. If you support Open Source due to its ethical nature, you may approve of certain rights (such as access to source code and freedom of innovation), but not approve of other rights (such as selling Open Source, or including closed source components). What do you feel about free media - do you believe Open Source should be applied to sound/video? What is your take on software patents? Do you feel that Open Source should be advocated to businesses who will use it for closed source solutions? Are you happy running closed source software on Linux? Each of these questions has a variety of distinct answers that vary among members of the same Open Source community.
In marketing parlors there is a general rule that you should know your product inside out. This common-held view preaches that if you don’t understand part of your product, you won’t be able to answer *every* question and query about it. Open Source advocacy has a similar, albeit, less critical rule of thumb; “to help Open Source, you need to know Open Source”. With such a range in views of Open Source, you need to firmly understand your own position, and more importantly, understand how flexible you are in promoting Open Source in areas that you are not personally familiar with. Throughout your experiences advocating, it is likely that you will face challenges that will test your ethical and technical views, and it is advisable to set yourself a policy regarding how far you can bend on these issues. With the policy set early, you will gain more confidence in pushing forward.
While you are sat back, re-evaluating your perspective on Open Source, you should also re-evaluate your perspective on facts. Although we can be safe in knowledge that the O’Reilly Network does not resort to ridiculous headlines that mis-represent a story, it is likely that you will hear stories that may be perceived one way, but are entirely wrong in how the story was actually played out. As an example, a while back I was at a conference in London designed to help public sector organisations and schools understand what Open Source is, and a number of councils and schools gave a presentation at the event. One such school, Orwell High School, a specialist school for technology, made the leap over to Open Source. With over 1000 students, the school found upgrade costs quite prohibitive due to expensive hardware requirements for Windows XP, as well as high administrative costs. The school made the move to a LTSP thin-client setup, and this resulted in cheaper hardware, less landfill and a centrally administered system. This solution was by its very nature, a standard cut n’shut case; the need was there, and a solution was proposed. The result of this case was a saving of £13,000 a year in license fees; a worthwhile but not exactly ground shattering figure, given the millions saved in cases such as that at Beaumont Hospital where they saved over EU4million with Open Source. What is interesting with the Orwell High School case is how that saving is relevant to their context. A teacher in the English department stated that each child in the department had a budget of £1 per head. When you are dealing with such low figures, £13,000 can seem like an awful lot of money being saved each year.
The key point here is that information is relative to context, and context is relative to information. Before you even begin discussing how to push the merits of advocacy in different ways, you need to be prepared to sit back and think about the information you receive, and how that information is relevant to the bigger picture. The goal here is not to con people into using Open Source, neither is it to suppress some elements of Open Source. The goal is to be as honest and up front as possible, and to try and dispel some of the large quantities of hype.
The new series of articles will be online soon on the O’Reilly Network.
Thoughts and ideas? Scribe them below…
In most of the articles and forum entries everyone always mentions that installing Mono is greatly simplified by using Red Carpet. Now that I’ve done it, I’d have to agree. Getting there was, however, not as easy as I would have liked.
In this article I’ll provide the step by step instructions for installing Mono on SuSE Linux Professional 9.1 using Red Carpet 2.2.3. After reading, you should have a clear idea of how to get through the installation, and hopefully avoid some of the snags I ran into. In the end, with this article, setting up Red Carpet will be as easy as using it to install Mono.
This article will not provide any explanation about Mono itself. It assumes that the reader knows what Mono is, otherwise, why would you be interested in completing the installation?
Well the first thing to understand is that since Novell’s purchase of Ximian, Red Carpet is no longer available as a separate download. As mentioned on the Mono Project download project, it is now available as the ZENWorks Suite 6.5.
Once you get to the ZENWorks download page from Novell, the file required is ZEN65_LinuxMgmt.iso. This provides management for Linux desktops and servers, but it is essentially Red Carpet Enterprise. Be forewarned, the only way to get the necessary installation RPMs are from a 474.2 MB iso image. In addition, according to the Novell web site, this is a 90-day evaluation license of Red Carpet.
I believe that Novell is well positioned to benefit from the growth of LInux and open source software. However, I question their restrictive licensing of a technology that is necessary to download and install what may become the predominant open source software development platform. I’m really not sure if this license will apply to Red Carpet, so I’ll let you know in 88 days. In addition, why not break up the iso image into the individual components? Why should I have to download 474 MB when I only need 3 files that are less than 2 MB?
Once you have downloaded the iso image and have burned it onto a CD, then mount the CD and look for the redcarpet2 directory. For SuSE Linux Professional 9.1, the RPMs necessary were found in the suse-91-i586 subdirectory. The critical files are:
This is the Red Carpet Daemon. It is absolutely necessary for getting Red Carpet to work correctly as I’ll explain later.
This package contains the graphical user interface for Red Carpet.
This package contains the command graphical user interface for Red Carpet.
Install these packages and we are almost done. First, the GUI application should appear in the KMenu in the System -> Configuration folder as Red Carpet. You can further verify the successful installation by using YaST. Check in the Install and Remove Software -> Package Groups -> System Environment where you will find the newly installed software in the the Applications and Deamons groups.
With the software installed, the next step requires starting the Red Carpet Deamon. To start the deamon from the graphical user interface, you need to return to YaST. Select System -> Runlevel Editor, and then click on the rcd service. Click on the Enable button and the service should change status and show Enabled is now equal to Yes.
To start the Red Carpet Deamon from the command line, use the following quick command. Enter:
sudo /etc/init.d/rcd start
You will need the root password, in order to complete this transaction. Once complete, you should receive the prompt “Starting Red Carpet Daemon.” Now that the deamon is running you are ready to begin the Mono installation.
I’ll assume that you are not currently using the root account, so the first thing to do is be sure and start the Red Carpet GUI application using the root account. Enter:
sudo /usr/bin/red-carpet
If the connect to deamon dialog appears as illustrated in Figure 1, then the red carpet deamon is not currently enable. Return to the previous step and start the rcd deamon. Click on the Available Software tab, then click on the Channel drop down list selection button. This should produce a list of available channels. Select the mono-1.1 channel and then all available packages from the channel should appear. Slect Edit -> Select All or press Ctrl A to select all of the available packages. Select Actions -> Mark for Installation or right click the selected packages and select Mark for Installation. Click on the Run Now icon from the tool bar, select Actions -> Run Now, or press Ctrl X. This will start the installation process.
Issue the following commands as root:
rug refresh
This will download the most current channel information. Next in order to confirm connectivity with the redcarpet servers run:
rug channels
This should present a list of available channels. The list will also present whether you are currently subscribed to them. The output should resemble the following listing:
To complete the installation, then run the following commands:
sudo rug sub mono-1.1
sudo rug update -y
The -y option will permit all actions without confirmations. Since there are approximately 50 different packages required for Mono, including all of the dependencies, this option should prove helpful.
After installing Mono, confirm the installation with a simple test. Using a text editor, enter the simple C# program shown in Lilsting 1.
using System; namespace HelloNameSpace { public class Hello { static void Main() { Console.WriteLine("Hello, from Gurabo, Puerto Rico"); } } }
To compile the program, enter:
mcs hellopr
This should produce a response of “Compilation succeeded.” If the compilation fails, then check whether the system class starts with a capital “S”, the console class starts with a capital “C”, and the write line method uses a capital letter for both write and line. Remember, that in C#, classes and all methods and properties are case sensitive.
Successfully compiling the hellopr program should produce a hellopr.exe executable. To run the program, enter
mono hellopr.exe
This should produce the output “Hello, from Gurabo, Puerto Rico”.
In this article you have seen how to install and verify Mono. This process includes obtaining Red Carpet Enterprise, installing Mono, and finally creating, compiling and running a simple test program. I hope this provides a simple to follow procedure and eases your way on the road to developing Mono applications.
Do any of your development plans include Mono in 2005?
I’ll start with the thanks. I’ve received bundles of blog comments and personal e-mail messages with helpful ideas and very patient tutoring regarding my OS X problems. I’ve always been lucky to roll in congenial ‘Net company. From the early C++ community (before eventual insanity set in) to the early Linux community to the early Python community, I’ve seen the best that brilliant and dedicated interest groups can offer a newbie willing to do some homework. My experiences so far with OS X folks matches up against all the above. Thanks guys.
So I hope you don’t take it the wrong way when I say that as for the issue at hand, I tried as much was was practical from your suggestions with no luck. I had just decided: No mas. I surrender. Time to reinstall. I was in the middle of a glacial job of backing up by tarring to our household backup server before a reinstall. using tar. Then I heard of the OS X 10.3.7 release. I have always suspected 10.3.6. I know many of you run it without problems but the timing for me and for many others I’ve seen on the Net is just too much coincidence for my skeptic taste.
So I upgraded to 10.3.7 and the problem has vanished. Everythign is snapy again. Lori has stopped abusing me for buying her a Mac (easy now, she likes the computer, but she became very frustrated when it became unusable). My opinion is that OS X 10.3.6 had a bug that only affected some users, and that we were among the unlucky, and that they fixed it in 10.3.7. That’s cool and all, but I must say that I wish such breaks and fixes wouldn’t come and go in such mystery. I think Apple has a lot of opening up todo, but that’s the subject of another blog. For now, it’s all love.
Before I put this issue behind me (I hope), here are some brief notes in response to the many thoughtful suggestions:
A lot of people mentioned disk damage as a likely culprit. Some pointed me to this helpful article about running Disk Utility (or fsck). This seemed reasonable, since my one brief reprieve from the problem had come after running repair permissions. I dutifully tried both Disk Utility and fsck, neither found anything wrong, and I was right back to the problem upon full boot. I think Apple tech support also instructed Lori to do this when she first reported the problem, but it was worth my trying personally. One note is that in safe mode everything was nice and fast, but back in regular start-up (regardless of the user) it was dog slow. Perhaps it was a kernel extenson in that case, but I had no idea how to start narrowing down which was the culprit.
Some suggested DiskWarrior, a third-party tool, but one that has received an impressive shelf of accolades. Apparently it can fix some problems that elude all other tools, including those built into OS X. Problem 1 is that it costs $80, but that’s a bargain if I were confident that it would do the trick. Problem 2 is that it is not even really advertized as a performance elixir. It’s advertized more as a wicked sharp disk error recovery tool. A bit more lumber than we need overall, and not an obvious enticement to spend a speculative $80.
Some suggested DNS issues. This doesn’t seem at all likely. The slowdown affects launch of applications that have nothing to do with networking, yet networking applications such as Safari don’t seem in any more distress than any other counterparts. What’s more, if I do basic lookups on the command line, once the command line applet itself has taken ages to start up, network responses are immediate.
Apparently HP printer drivers, have been the source of some reported problems, but these problems are a matter of chewing up CPU. Once again, Activity monitor shows plenty of CPU idle left. CPU utilization is not the problem here (nor is memory).
I checked for third-party extensions and all that. /System/Library/Extensions has tons of “.kext” file (kernel extensions, I presume) but I can’t tell which would be 3rd-party, nor did I feel confident just deleting the lot. In /Library/StartupItems I did find a file “Wacom” which might correspond to an old Wacom tablet which I’d forgotten we’d installed (we hardly used it). Deleting that file didn’t make a difference.
Some thought that the fact that we’d made incremental updates from 10.3.0 to 10.3.6 caused a few gaps and fissures in things and that we should just use the “combo updater” (new concept for me) to go back to 10.3.0 and make the leap back to 10.3.6 in one go. Sounded plausible and all, but if it were the problem wouldnit it have got worse with 10.3.7, rather than better?
Here is a quote from one sugghestion:
[T]he Jan 2005 issue of MacAddict addresses
the “lazy Mac” (p.20: “your Mac isn’t as snappy as it used to be”).
They start by recommending a reboot and then checking Activity Monitor,
but eventually they recommend running the $15 shareware Cocktail,
which would have worked (as “repair permissions” is one of its
options).
Seems like useful into to keep in mind, though I don’t know how Cocktail would compare to DiskWarrior and other similar packages.
Bottom line now: my wife is happy so I’m happy. Thanks to Apple for the fix, and thanks to the OS X community for such solid support.
Related link:
It looks like the first quarter of 2005 is going to be a busy time for your humble weblogger. I will spend most of the January and February working on a book for O’Reilly. Then, the last week of February I’m going to Sheffield to speak about Open Source firewals at the ShefLUG 2005 Seminar (23rd February 2005) and immediately after that I’m off to Fosdem 2005. (I won’t be one of the speakers at Fosdem, just a regular attendee, look for me near the OpenBSD booth.) Then, in March, I’ll be teaching BSD Firewalls classes in Krakow (Cracow), Poland.
Fingers crossed, my health will keep up with my schedule.
A?
Related link:
Rogue Amoeba, the makers of the insanely cool Audio Hijack, will release Slipstream in January 2005. In an email they sent to registered users of their other products, they say:
Now, audio from any program can be played through it. With Slipstream, the AirPort Express isn’t just for iTunes any more.
Very cool: I can’t seem to give these guys money fast enough for all the cool stuff they have. At some point I’d like to have RealPlayer on all the speakers in my house so I can keep listening no matter which room I move to.
Related:
Kudos to all the Thunderbird developers for making such a great app.
Related link:×087h29ub
This week the web analytics software company WebSideStory reported that the Mozilla Firefox browser now has an estimated 4% share of U.S. Browser Usage Share. Most of this new found usage has come at the expense of the Microsoft Internet Explorer, which is now down to 91.8%. I can only imagine that this number will continue to drop.
I’m even now more excited about giving all my relatives copies of the OpenCD for Christmas. I’m mostly just giving it to them just to encourage their use of Firefox, but who knows? Maybe they’ll get curious and give some of the other fine apps a spin.
Looking to impress your friends and relatives, give’m Firefox. They’ll thank you all next year.
Related link:
NetFlix uses RSS to tell me which movies they’ve gotten back from me and which ones they have shipped. It’s been handy. I’ve been curious when they get the movies back, and now I know.
I?
Of all of the companies that are playing the open source card, the one I least understand is Oracle. HP, IBM, and even to some degree Sun Microsystems are easy to understand. Promoting open source helps them sell more hardware, like I said easy. Fundamentally, Oracle’s interest in Linux is the same, by promoting open source it helps them sell more software licenses. However, that’s the kicker. Until Bill Gates’ prediction comes true and hardware is essentially free, there is a big difference between these two positions.
Hardware manufacturers are not on a collision course with open source. Computer hardware and open source are complimentary markets. On the contrary, database management servers, middleware, web servers, finance and accounting software, and customer relationship management software are dead center the target of some of the most aggressive and successful open source companies. A simple example is MySQL AB, that competes, or will compete, directly with Oracle in the database market.
I’ve been confused by this for a while, but this week I was stunned. After a relentless pursuit, Peoplesoft Corporation surrendered to Larry Ellison and Oracle Corporation. This week the Peoplesoft Board of Directors took the advice of the “Transaction Committee” and accepted a cash bid from Oracle of $26.50 per Peoplesoft share, for an estimated total transaction value of $10.3 Billion.
As I’ll break this transaction down later, it is my opinion that this purchase may be a serious mistake by Oracle Corporation. I’ll show through a very brief review of their finances, that with the growing credibility of open source software in the server room, both Oracle and Peoplesoft will be directly threatened by a strong contender in some of their most successful product areas. This threat will drain new software licenses faster than they expect and they will be slow to respond to this threat.
In the end, I commend the Peoplesoft share owners in successfully raising Oracle’s offer price per share from $19 per share to $26.50. If completed, this will be a huge financial windfall for all Peoplesoft share owners. Although their 2003 annual report does not even mention a threat from open source it is curious to note that one of their key ex-executives believes differently. As we have already seen in the attempt from Novell to hire ex- PeopleSoft exec Ram Gupta to replace departed vice-chairman Chris Stone, it seems the executives at Peoplesoft know where the future lies. I predict that some of the biggest Peoplesoft shareholders, those who will reap the most from a successful sale, will head into open source.
More and more to get a sense of how little this makes sense, we only need to examine the numbers. The best place to go for any public company are the federally mandated Security and Exchange Commission reports. Let’s first check out the basic numbers from Oracle Corporation (ORCL). First, according to the Oracle Corporation 2004 10Q, Year End Report they claim that “We are the world’s largest enterprise software company.” They are most definitely somewhere in the top five of all proprietary software companies in the world. In 2004 they generated a total of $10B revenue, with $8B in software licensing (79%) and $2B from services (21%). 44% of those software revenues were from new software sales, while the remainder comes from license renewals and upgrades. Due to the well deserved reputation of their marque product, the Oracle database, they seem rock solid.
It is worth mentioning though, they do have open source on their radar. I guess they believe, as many of us do, that open source is important to watch but it is anyone’s guess when it might actually have any material impact on their revenue. Oracle admits: “We may also face competition in the open source software initiatives, in which companies such as JBoss and MySQL provide software and intellectual property free over the internet.” and “We may also face increasing competition from open source software initiatives, in which competitors may provide software and intellectual property free over the internet. If existing or new competitors gain market share in any of these markets, at our expense, our business and operating results could be adversely affected.” I say that when a risk finally makes it into the management discussion, it is probably too late to do anything to stop that threat from becoming an issue.
So the risk is there, fine. So let’s take a look at Peoplesoft. Reading from Peoplesoft Corporation’s 10Q Quarterly statement for the third quarter, they have booked $1.9B in revenue in the first 3 quarters of 2004. They are on track to surpass the annual numbers from 2003. Digging a little deeper, they have $1.3B in software license related revenue and $0.6B in service revenue. We see less of an emphasis on software licensing, but it is still heavily dependent on software licensing. It is also very important to look at what it would take to see a return on investment for Oracle’s $10.3B.
With $61 Million in net income in the first nine months of 2004, it would take approximately 384 years to earn back the investment based only on profit. Maybe it makes more sense to look at the almost 4 years it will take to have the Peoplesoft annual revenue to cover the sale price. Now I realize this freshman business school student’s analysis of this transaction is way off, but I stand by my analysis that this is a fantastic deal for Peoplesoft’s shareholders and an anchor around the neck of Oracle. Companies are bought and sold for many reasons, and I’m sure we may learn why Larry Elison pushed so hard to get Peoplesoft. In the meantime, let me elaborate on some of the open source projects that I hope, err I mean “may”, represent some of the biggest risks to Oracle and Peoplesoft.
In no particular order, here are some companies and projects that should be giving the Oracle sales staff nightmares before too long:
Anyone, or maybe several, of these companies or projects may rise to be a serious threat to Oracle’s future software revenue. Only time will tell, however, I know that if I had $10 Billion dollars burning a hole in my pocket, I imagine there are better ways to invest it.
To obtain copies of the financial data reviewed for this, please visit:
What do you think Oracle’s purchase of Peoplesoft means for the software industry?
Related link:
That.
Related link:…
This inspiring phrase from Kent Beck’s book Test-Driven Development By Example:
“When we write a test, we imagine the perfect interface for our operation. We are telling ourselves a story about how the operation will look from the outside. Our story won’t always come true, but it’s better to start from the best-possible application program interface (API) and work backward than to make things complicated, ugly, and “realistic” from the get-go.”
The following example test is just a little simple one-method thing, but I realized this same approach could be used to design an entire system!
You can just start typing some pseudocode the way you *wish* you could type *if* your class/system was “just that easy”.
Then, when done, break it down into bits, and try test-driven development to see if you can make it so!
Just a 1-hour-old idea. Feel free to rip it apart….
Related link:
I am pleased to announce that starting next month, the start of the Spring semester at Tufts University, I will be teaching a course entitled Security, Privacy, and Politics in the Computer Age. The course will be offered by the Experimental College at Tufts University. The following is a brief description of my course:
Computer viruses, worms, Trojan Horses, spyware, exploits, poorly designed software, inadequate technology laws, and terrorism: these issues have a profound affect on our daily computing operations and habits. New technological innovations such as file-sharing software and location-based tracking tools also have major political and social implications. Unfortunately, basic knowledge and understanding of the security, political, and social issues concerning the use of technologies is lax, and is a major reason why people are continually affected by computer security breaches and technology misuse. Granted, the problems are only getting worse. Issues including electronic voting, Radio Frequency Identification (RFID) tags, location-based tracking technologies, and the Digital Millennium Copyright Act (DMCA) will be discussed. This course will also delve into reverse engineering of software, understanding exploits (e.g. buffer overflow, Denial of Service, rootkits, spoofing) and intrusion detection, and how to protect yourself from malicious computer activities. Then, the issues will be put into a global context to answer the question: we have dug ourselves into a deep hole; how do we dig out of it?
This course is open to all, regardless of area of study. No software development or computer programming knowledge is required. Basic knowledge on computer technology and concepts is sufficient. The only requirement is that you are curious on the security, privacy, political, and legal issues in computer technology, and why they are important in society.
I do understand that only Tufts students and neighboring community members can enroll in the class (EDIT, 1/5/2005: ON A SPACE AVAILABLE BASIS). However, I endear to make this course accessible to the general public. Therefore, all lecture notes, news, examples, and assignments will be published to the course’s website as soon as they become available. I also envision having a message board for the course, and possibly videos or audio recordings for some lectures –all available to the general public’s use.
I am honored to have the opportunity to teach this course not only because of my passion for this complex matter, but to contribute back to my alma mater and to the Computer Science community. In addition, there is a lack of ownership of the subject matter that I will be presenting, a major reason why there is a lack of basic understanding of the security, legal, political issues in using technology.
For more information, please visit the course’s (tentative) website at.
Once in a while someone asks for a graphical installer for BSD, similar to Fedora Core. Wouldn’t it be cool if we could see the daemon or Puffy in full color?
No, it wouldn’t.
As someone who just finished a chapter on BSD and Linux installation procedures, I must say that I like BSD installers much better than their Linux counterparts and that I don’t see a reason to ask the developers to spend their time on something that’s pure eye candy. This is 2004, soon to be 2005, and the odd graphical Linux installer can still have problems talking to some monitors and video cards.
Related link:
A long time ago in a publishing world far, far away, Brian Richard started Py, an independent Zine for Python developers. I still have several copies of the first issue I picked up at OSCON 2002. That was about the same time I was thinking about going into print with The Perl Review.
Brian and I had chatted over email a couple times, and he had moved on to Linux Magazine. I thought that was the end of Py.
Mark Pratt, the new publisher of PyZine, called me tonight (all the way from Barcelona) and we talked about what each of us is doing and what we have planned. I certainly don’t want to compete with PyZine, so my idea for a Python magazine (I wanted to call it MagPy) is dead.
There might be some ways we can help each other, but mostly we just talked about how we do things and exchanged some ideas.
Now I’ll have to find another idea for a new magazine, but that’s okay. :)
Someone, somewhere on the Internet claimed recently that the hama USB 2.0 Card Reader 9in1 doesn’t work with OpenBSD 3.6. This is not true, the system detects the card reader and the flash cards you plug into the reader..
Related link:
Google just keeps doing amazing things. Google for “War and Peace”. One of the top links should be “Book results for war and peace” with a rainbow colored collection of book spines next to it. Follow that result and you’re in Google Print, which shows you the Penguin edition of the book, and it’s fully searchable!.
I’ve had this dream of creating concordances for my favorite books, but Google Print would virtually do that for me. Go on Google, take it off my to-do list!?
Tonight I had some time to update Busines::ISBN::Data, which is the data pack for Business::ISBN. It exists separately so I can update them separately. The ISBN folks have added several country prefixes and updated the publisher ranges in a lot of the prefixes: space for new publishers is getting short.
The task was much easier this time. Previously, I got the data from the HTML pages on their web site. They’ve done away with that in favor of a PDF file. Things would have been much easier if it was just a text file. There isn’t anything fancy in the PDF: no images, no fancy text effects: just the data.
Oh well. Updates are infrequent, and it was easy to extract the text from the PDF, even if the data was a bit dirty. Still, text wants to be text.
Updated. Thanks to “david_given” for pointing out a typo, which is now corrected.
Much of my current late-night hacking (I should be making a couple of big announcements soon) involves pushing the art of XML/SAX processing in Python by using its ever more powerful functional features. In doing so, I’ve been making more and more use of nested (AKA inner) functions for modularity and some neat approaches to dynamic dispatch. This has also brought me several times into the grey areas of nested scopes. I’ve come to know the PEP pretty well, but in case it saves anyone time spent probing Python legalese, here is an example that illustrates the behavior of nested scopes. The code can be run as is in Python 2.2 or later. For Python 2.1, add
from __future__ import nested_scopes to the top.
g_a = 1 #global scope def f1(): a = 2 def g(): print "f1/g", a #a is 2 return g() def f2(): a = 3 def g(): print "f2/g", a #UnboundLocalError (at runtime) a = 4 return g() def f3(): def g(): global g_a print "f3/g A", g_a #a is 1 g_a = 5 #Modifying the global print "f3/g B", g_a #g_a is 5 return g() print "f3/g C", g_a #g_a still 5 def f4(): a = 6 def g(a=a): #Yuck. Cumbersome print "f4/g A", a #No problem. a is 6 a = 7 print "f5/g B", a #Now a is 7 g() print "f4", a #Back to 6, since int is immutable and def f5(): a = 8 def g(a): print "f5/g A", a #No problem. a is 8 a = 9 print "f5/g B", a #Now a is 9 return a a = g(a) print "f5", a #a is still 9 f1() #f2() #Commented out to avoid Exception f3() f4() f5()
When I first ran into the
UnboundLocalError problem demoed in
f2 I started with the fall-back solution in
f4, but 2 considerations turned me off that solution. The main one was ugliness of the keyword stuffing, especially when I wanted several variables in the shared scope. A more minor issue was a need to mutate such variables within the nested function. This is a minor issue because such mutation is rather inelegant and runs counter to the very functional principles underlying nested scopes. Nevertheless, I did have a couple of cases where I wanted to hack in mutation temporarily for some quick and dirtypurpose. It turns out that the
f5 approach deals with both issues, with a bonus that mutation is replaced by functional transform. I use tuples to pass back multiple values from the inner function, of course.
I did not show in the listing how using
exec or
from foo import * can lead to syntax errors, a situation I ran into once, to my great confusion. See the PEP for a terse listing of syntax gotchas. Andrew Kuchling’s document mentions the
exec gotcha. The usual solution is to use the form
exec cmd in globals(), locals(). See the Python Library Ref exec documentation for details (notice: Python 2.4 relaxes the rules a bit on what can be used with
exec ... in ...).
This slide mentions the
exec gotcha as well as a possible problem with
eval.
Side note: in Andrew Kuchling’s brief on nested scopes he introduces them by saying:?)
I read this bit after I’d already used recursive inner functions in several cases, and had been impressed at their expressive power. Just goes to show that the human imagination is not fit to find limits to the usefulness of recursion.
Overall, nested scopes in Python are not as clean as a language purist might wish, but like so much in Python’s evolution, they find a comfortable niche between cleanliness and practicality.
Related link:
Here are some BSD and firewall sticker designs:
DragonFly BSD — Because There Are Other Alternatives
FreeBSD — Because There Are Other Alternatives
NetBSD — Because There Are Other Alternatives
OpenBSD — Because There Are Other Alternatives
OpenBSD — Painted Puffy
IPFW — Building a Better Firewall
IPFilter — Building a Better Firewall
IPTables — Building a Better Firewall
PF — Building a Better Firewall
Copyright? You are free to print and sell/give away these designs as long as they are used to promote DragonFly BSD, FreeBSD, NetBSD, OpenBSD, IPFW, IPFilter, PF, and IPTables in a positive manner. The Puffy image is copyright Theo de Raadt
Have fun.
I | http://www.oreillynet.com/onlamp/blog/2004/12/ | crawl-001 | en | refinedweb |
Solution for Programmming Exercise 2.2
This page contains a sample solution to one of the exercises from Introduction to Programming Using Java.
Exercise 2.2:
Write a program that simulates rolling a pair of dice. You can simulate rolling one die by choosing one of the integers 1, 2, 3, 4, 5, or 6 at random. The number you pick represents the number on the die after it is rolled. As pointed out in Section 2.5, The expression
(int)(Math.random()*6) + 1
does the computation you need to select a random integer between 1 and 6. You can assign
When designing a program, one of the first things you should ask yourself is, "What values do I need to represent?" The answer helps you decide what variables to declare in the program. This program will need some variables to represent the numbers showing on each die and the total of the two dice. Since these numbers are all integers, we can use three variables of type int. I'll call the variables die1, die2, and roll. The program begins by declaring the variables:
int die1; int die2; int roll;
In the actual program, of course, I've added a comment to explain the purpose of each variable. The values of die1 and die2 can be computed using the expression given in the exercise:
die1 = (int)(Math.random()*6) + 1; die2 = (int)(Math.random()*6) + 1;
Note that even though the expressions on the right-hand sides of these assignment statements are the same, the values can be different because the function Math.random() can return different values when it is called twice.
We can then compute roll = die1 + die2 and use three System.out.println statements to display the three lines of output:
System.out.println("The first die comes up " + die1); System.out.println("The second die comes up " + die2); System.out.println("Your total roll is " + roll);
Note that I've chosen to use the concatenation operator, +, to append the value of die1 onto the string "The first die comes up". Alternatively, I could use two output statements:
System.out.print("The first die comes up "); System.out.println(die);
I'll also note that I could get away without the variable roll, since I could output the value of the expression die1 + die2 directly:
System.out.println("Your total roll is " + (die1 + die2));
However, it's generally better style to have a meaningful name for a quantity. By the way, the parentheses around (die1 + die2) are essential because of the precedence rules for the + operator. You might try to experiment with leaving them out and see what happens.
public class RollTheDice { /* This program simulates rolling a pair of dice. The number that comes up on each die is output, followed by the total of the two dice. */ public static void main(String[] args) { int die1; // The number on the first die. int die2; // The number on the second die. int roll; // The total roll (sum of the two dice). die1 = (int)(Math.random()*6) + 1; die2 = (int)(Math.random()*6) + 1; roll = die1 + die2; System.out.println("The first die comes up " + die1); System.out.println("The second die comes up " + die2); System.out.println("Your total roll is " + roll); } // end main() } // end class | http://math.hws.edu/javanotes/c2/ex2-ans.html | crawl-001 | en | refinedweb |
I want to read in a datafile that has several constants for my program (e.g. MAXARRAYSIZE).
I then want these constants to be accessible anywhere in my program by typing something like: ConstantsClassName.MAXARRAYSIZE. How do I implement this class?
Once assigned from the datafile, these constants will never again change value during program execution.
Thanks.
Use a static bloc in
ConstantsClassName class.
public class ConstantsClassName{ public static final String MAXARRAYSIZE; static{ // read your file and store the data in; MAXARRAYSIZE = valueRetrievedFromFile; } }
MAXARRAYSIZE should be
MAX_ARRAY_SIZE if you follow Java conventions for constants declaration. | https://codedump.io/share/NJk9uJaallqW/1/read-in-file-for-constants-java | CC-MAIN-2017-26 | en | refinedweb |
0
hi i have a very large array that i want to populate but i don want to run the for loops every time so i thought i would right a program to output text to fill the array and was wondering if this would work.
the array is
int array[100][12][31];
#include <iostream> #include <stdlib.h> #include <fstream> #include <time.h> using namespace std; #define getrandom( min, max ) ((rand() % (int)(((max) + 1) - (min))) + (min)) int main() { int temp, counter; srand(time(NULL)); ofstream fout("MyFile.txt"); fout << "int array[100][12][31] =" << "\n" << "{"; for (int i = 0; i < 100; i++) { for (int j = 0; j < 12; j++) { for (int k = 0; k < 31; k++) { if ((counter % 15 == 0) && (counter != 0)) fout << "\n"; // after 15 numbers go to a new line in the file temp = getrandom(1000, 9999); // used to get a number for the array fout << temp << ", "; // seperate each element with a "," and a space counter++; } } } fout << "};"; // ends the array with "};" fout.close(); return 0; }
i know this is kinda rough but i was hoping it would help, im not anywhere near being done with this program and i may need to increase the array by a factor of 100 before the end so im not sure how much time it will take to populate the array every time. i do realize that i will have to go back into the .txt to delete the last", " or the compiler will flag an error. if anyone has any ideas or sugestions it would be very much aperciated | https://www.daniweb.com/programming/software-development/threads/190130/question-about-programing-pratice-with-a-large-array | CC-MAIN-2017-26 | en | refinedweb |
0
ok well I'm working on making a dungeon explorer and I can't get images to work...so far I have this
from Tkinter import * class App: def __init__(self, master): frame = Frame(master) frame.pack() canvas = Canvas(frame, width = 225, height = 225) canvas.pack() gif1 = PhotoImage(file = 'Fhall.gif') canvas.create_image(0, 0, image = gif1, anchor = NW) root = Tk() d = App(root) root.geometry('225x225+0+0') root.resizable(FALSE,FALSE) root.wait_window() | https://www.daniweb.com/programming/software-development/threads/325563/photoimage | CC-MAIN-2017-26 | en | refinedweb |
#include <hallo.h> * Bill Allombert [Tue, Apr 11 2006, 12:34:45AM]: > Please take into account that Debian menu will only display modules > suitable for the running window-manager (because they use a specific Okay... now I understand. > 'needs' field that only this wm 'support'). So in effect you are just > renaming "WindowManagers/Modules" to "Window Managers/$wm Modules". Yep. WRT you said above, what abot renaming "WindowManagers/Modules" to "$wm Modules" (one level above WM starters and indicating which "modules" are meant by that). > I am not a typical user, but I have 45 window-managers installed > so I have a hard time finding the single Modules subsection and > naming it "Foo WM Modules" will not make things any easier. That > might not make me the best judge of the issue, though. Ehm - yes to both. Eduard. -- For any stupid thing chosen at random, you'll find at least 5 people on the Internet who thinks it's a good idea. -- Steve Langasek in debian-devel | https://lists.debian.org/debian-devel/2006/04/msg00220.html | CC-MAIN-2017-26 | en | refinedweb |
Exercise 2 was a little more involving as far as conversions go. Nevertheless, I was able to find a solution to this:
2. constants
to represent the various conversion factors.
#include <iostream> using namespace std; int main() { const int foot = 12; const double meter = 0.0254; const double kilo = 2.2; int feet; int inches; int pounds; int heightInches; double weightKG; double meters; double bmi; // gather input cout << "Enter your height in feet: "; cin >> feet; cout << "Enter your height in inches: "; cin >> inches; cout << "Enter your weight in pounds: "; cin >> pounds; // Calculate BMI heightInches = feet * foot + inches; meters = heightInches * meter; weightKG = pounds / kilo; bmi = weightKG / (meters * meters); cout << "Your Body Mass Index(BMI) is: " << bmi << endl; return 0; } | https://rundata.wordpress.com/2012/10/12/c-primer-chaper-3-exercise-2/ | CC-MAIN-2017-26 | en | refinedweb |
Recently needed the ability to parse durations from human readable strings that were also context aware. The context being the date to start your duration calculation from so that if you started on January 1st 2017 and wanted 2 months you’d get exactly 31 (number of days in January 2017) + 28 (number of days in February). Then if I gave it the context of April 1st I’d get 61 days since there were two months with 31 days each.
I tried to find an existing library with no luck so I wrote
delta to take care
of the job and hopefully someone else would find it of use. You can get your
hands on delta easily through pypi like so:
pip install delta
Once installed you can use it like so:
import delta from datetime import datetime print(delta.parse('1 year 2 months and 3 days')) print(delta.parse('2 months and 3.5 weeks', datetime(2017, 3, 4)))
You can see that
delta allows you to easily include a context or not and when
you don’t supply the context it will assume the current date. Another thing you
may have noticed is you can get quite expressive with the duration expressions
being able to do all of the following:
1 year 2 months and 3 weeks 2 months, 3 weeks and 12 days 1y 2m 3w 4d 3.5 years and 2.7 days
delta will handle all of those without any issues.
If you find
delta useful then head over to the github
project and open any issues or contribute a PR for any additional features you’d
like. | http://rlgomes.github.io/work/python/date/parsing/2017/03/04/15.59-human-friendly-context-aware-date-parsing.html | CC-MAIN-2017-26 | en | refinedweb |
Upgrading to 3.0¶
Note
This guide assumes that you are familiar and comfortable with administration of a Cyrus installation, and system administration in general.
It assumes you are intalling from source or tarball. If you want to install from package, use the upgrade instructions from the package provider.
Upgrading: an overview
-
Note
For those upgrading from 2.3.X; newer releases of Cyrus IMAP will use significantly more memory per selected mailbox. This is not an error or bug; it’s a feature. The newer code is holding more data and metadata in memory for purposes of faster access to more of the mailbox. This is not a memory leak..0.
How are you planning on upgrading?¶
Ideally, you will do a sandboxed test installation of 3.0 using a snapshot of your existing data before you switch off your existing installation. The rest of the instructions are assuming a sandboxed 3.0 installation.
If you’re familiar with replication, and your current installation is 2.4 or newer, you can set up your existing installation to replicate data to a new 3.0 installation and failover to the new installation when you’re ready. The replication protocol has been kept backwards compatible.
If you are upgrading in place, you will need to shut down Cyrus entirely while you install the new package. If your old installation was using Berkeley DB format databases, you will need to convert or upgrade the databases before you upgrade. Cyrus v3.0 does not support Berkeley DB at all..0 Cyrus¶
Download the release 3snmp.0, you can consider using the new inbuilt.
Warning
Please be warned that some
Note
If your installation is using groups, don’t turn
reverseacls: on. Reverseacl support
only works well for sites without
Sieve Scripts
Since defaults for options:
unixhierarchysep:and
altnamespace:have changed in imapd.conf(5), you may very likely need to modify any sieve scripts already on your system. Fear not, there’s a tool for this task, called translatesieve(8). This tool can handle situations where either or both of these settings need change. Please consult the man page for details.
Consider the following example, where the prior configuration was already using
altnamespace: on, but was not using
unixhierarchysep: on:
# su cyrus -c "/usr/lib/cyrus/upgrade/translatesieve -a" you are using /var/lib/imap/sieve as your sieve directory. translating sievedir /var/lib/imap/sieve... converting separator from '.' to '/' not changing name space. done
Warning
Berkeley db format no longer supported
If you have any databases using Berkeley db, they’ll need to be converted to skiplist or flat in your existing installation. And then optionally converted to whatever final format you’d like in your 3.0 installation.
Databases potentially affected: mailboxes, annotations, conversations, quotas.
On old install, prior to migration:
cvt_cyrusdb /<confdir>mailboxes.db berkeley /tmp/new-mailboxes.db skiplist
If you don’t want to use flat or skiplist for 3.0, you can use the new 3.0 cvt_cyrusdb(8) to swap to new format:
cvt_cyrusdb /tmp/new-mailboxes.db skiplist /<confdir>/mailboxes.db <new file format>
Note
The cvt_cyrusdb(8) command does not accept relative paths.
7. Start new 3.0 Cyrus and verify¶
sudo ./master/master -d
Check
/var/log/syslog for errors so you can quickly understand potential problems.
When you’re satisfied version 3
9. Do you want any new features?¶
3.0 comes with many lovely new features. Consider which ones you want to enable. Here are some which may interest you. Check the 3.0 release notes for the full list.
- JMAP
- Backups
- Xapian for searching
- Cross-domain support. See
crossdomainsin imapd.conf(5). | https://www.cyrusimap.org/imap/download/upgrade.html | CC-MAIN-2017-26 | en | refinedweb |
Bluetooth headset
This article describes the configuration of Bluetooth headsets within Gentoo Linux.
Contents
- 1 Prerequisites
- 2 Configuration
- 3 Testing
- 4 Working devices
- 5 Troubleshooting
- 6 See also
- 7 External resources
- 8 References
Prerequisites
The configurations for Bluetooth and ALSA must have been previously completed.
Configuration
PulseAudio
Following instructions from PulseAudio and BlueZ 5 should be sufficient to make Bluetooth headsets work (through pavucontrol for instance).
ALSA + Bluez 5
If you do not want to use PulseAudio, you can use bluez-alsa to provide integration between Bluez and ALSA.
- Install bluez-alsa:
root #
emerge --ask media-sound/bluez-alsa
- In your ALSA configuration, /etc/asound.conf (system-wide) or ~/.asoundrc (user-level), specify the parameters of the Bluetooth connection (replace the MAC address with the MAC address of your device)
/etc/asound.conf or ~/.asoundrc
# Bluetooth headset defaults.bluealsa { interface "hci0" # host Bluetooth adapter device "10:4F:A8:00:11:22" # Bluetooth headset MAC address profile "a2dp" }
A static ALSA configuration is also possible, make sure to change the device name in the below examples for aplay.
/etc/asound.conf or ~/.asoundrc
# Bluetooth headset pcm.btheadset { type plug slave.pcm { type bluealsa device "10:4F:A8:00:11:22" profile "a2dp" } hint { show on description "Your description of Bluetooth Headset" } }
- Make sure the bluetooth and bluealsa services are started. You probably want to add them to your default runlevel via rc-config.
- Make sure the device is paired and connected to your computer. See Bluetooth for details.
- Test with e.g. aplay, passing the PCM device 'bluealsa'
user $
aplay -D bluealsa some_file.wav
- For other applications, the precise option to set the output device may differ.
Changes to ALSA configuration files /etc/asound.conf and ~/.asoundrc are picked up automatically at application start, you don't need to restart the alsasound service.
- Hardware volume control:
user $
alsamixer -D bluealsa
Testing
- Play a sound file.
user $
mplayer -ao alsa:device=bluealsa filename
user $
mpv --audio-device=alsa/bluealsa filename
If it works, please add your device to the table of working devices.
Working devices
The capabilities of the device are dependent on the Bluetooth controller being used.
Troubleshooting
/etc/bluetooth/audio.conf
[General] Enable=Socket
- Restart bluetoothd by doing one of the following things:
- Turn the software wireless kill switch off and on again
root #
rfkill block bluetooth
root #
rfkill unblock bluetooth
- Turn the hardware wireless kill switch off and on again
- Reboot the computer
- Reconnect the Bluetooth headset
Audio device not visible when using GDM
If you are using GDM, but not logging into GNOME (e.g. i3 instead), GDM might block your headset, which will it not being available for PulseAudio. This will result in your headset being connected, but the applications won't see it.
As a workaround, you can switch to a different display manager (e.g. LXDM), or disable PulseaAudio for GDM[1]:
/var/lib/gdm/.config/pulse/client.conf
autospawn = no daemon-binary = /bin/true
If you have created the file, make sure that GDM can read it:
root #
chown gdm:gdm /var/lib/gdm/.config/pulse/client.conf
Audio device not visible using PulseAudio volume control (but working with ALSA)
According to this forum post, add the following to /etc/pulse/default.pa (and possibly /etc/pulse/system.pa):
/etc/pulse/default.pa
### Automatically load driver modules for Bluetooth hardware .ifexists module-bluez5-device.so load-module module-bluez5-device .endif .ifexists module-bluez5-discover.so load-module module-bluez5-discover .endif
Ensure that the
pulseaudio and
bluetooth USE flags are enabled
See also
External resources
References
- ↑ Stanislav Naumuk. Bluetooth a2dp, Debian Wiki, June 13th, 2015. Retrieved on March 18th, 2019. | https://wiki.gentoo.org/wiki/Bluetooth_headset | CC-MAIN-2020-29 | en | refinedweb |
Grid Sorting Overview
In Ignite UI for Angular Grid, data sorting is enabled on a per-column level, meaning that the igx-grid can have a mix of sortable and non-sortable columns. Performing angular sort actions enables you to change the display order of the records based on specified criteria.
Demo
This is done via the
sortable input. With the Grid sorting, you can also set the
sortingIgnoreCase property to perform case sensitive sorting:
<igx-column</igx-column>
Sorting through the API
You can sort any column or a combination of columns through the Grid API using the Grid
sort method:
import { SortingDirection } from 'igniteui-angular'; // Perform a case insensitive ascending sort on the ProductName column. this.grid.sort({ fieldName: 'ProductName', dir: SortingDirection.Asc, ignoreCase: true }); // Perform sorting on both the ProductName and Price columns. this.grid.sort([ { fieldName: 'ProductName', dir: SortingDirection.Asc, ignoreCase: true }, { fieldName: 'Price', dir: SortingDirection.Desc } ]);
Note
Sorting is performed using our
DefaultSortingStrategy algorithm. Any
IgxColumnComponent or
ISortingExpression can use a custom implementation of the
ISortingStrategy as a substitute algorithm. This is useful when custom sorting needs to be defined for complex template columns, or image columns, for example.
As with the filtering behavior, you can clear the sorting state by using the
clearSort method:
// Removes the sorting state from the ProductName column this.grid.clearSort('ProductName'); // Removes the sorting state from every column in the Grid this.grid.clearSort();
Note
The
sortStrategy of the Grid is of different type compared to the
sortStrategy of the column, since they work in different scopes and expose different parameters.
Note
The sorting operation DOES NOT change the underlying data source of the Grid.
Initial sorting state
It is possible to set the initial sorting state of the Grid by passing an array of sorting expressions to the
sortingExpressions property of the Grid.
public ngOnInit() { this.grid.sortingExpressions = [ { fieldName: 'ProductName', dir: SortingDirection.Asc, ignoreCase: true }, { fieldName: 'Price', dir: SortingDirection.Desc } ]; }
Note
If values of type
string are used by a column of
dataType
Date, the Grid won't parse them to
Date objects and using Grid
sorting won't work as expected. If you want to use
string objects, additional logic should be implemented on an application level, in order to parse the values to
Date objects.
Remote Sorting
The Grid supports remote sorting, which is demonstrated in the
Grid Remote Data Operations topic.
Styling
To get started with styling the sorting behavior, we need to import the
index file, where all the theme functions and component mixins live:
@import '~igniteui-angular/lib/core/styles/themes/index';
Following the simplest approach, we create a new theme that extends the
igx-grid-theme and accepts the
$sorted-header-icon-color and
sortable-header-icon-hover-color parameters.
$custom-theme: igx-grid-theme( $sorted-header-icon-color: #ffb06a, $sortable-header-icon-hover-color: black );
The last step is to include the component mixins:
@include igx-grid($custom-theme);
Note
If the component is using an
Emulated ViewEncapsulation, it is necessary to
penetrate this encapsulation using
::ng-deep:
:host { ::ng-deep { @include igx-grid($custom:
$black-color: black; $orange-color: #ffb06a; $custom-palette: igx-palette($primary: $black-color, $secondary: $orange-color);
And then with
igx-color we can easily retrieve color from the palette.
$custom-theme: igx-grid-theme( $sorted-header-icon-color: igx-color($custom-palette, "secondary", 500), $sortable-header-icon-hover-color: igx-color($custom light grid schema $custom-grid-schema: extend($_light-grid, ( sorted-header-icon-color: (igx-color:('secondary', 500)), sortable-header-icon-hover-color: (igx-color:('primary', 500)) ) );
In order to apply our custom schema we have to extend one of the globals (
light or
dark), which is basically pointing out the components with a custom schema, and after that add it to the respective component themes:
// Extending the global light-schema $my-custom-schema: extend($light-schema, ( igx-grid: $custom-grid-schema ) ); // Defining our custom theme with the custom schema $custom-theme: igx-grid-theme( $palette: $custom-palette, $schema: $my-custom-schema );
Don't forget to include the themes in the same way as it was demonstrated above.
Demo
API References
Additional Resources
- Grid overview
- Virtualization and Performance
- Paging
- Filtering
- Summaries
- Column Moving
- Column Pinning
- Column Resizing
- Selection | https://www.infragistics.com/products/ignite-ui-angular/angular/components/grid/sorting.html | CC-MAIN-2020-29 | en | refinedweb |
This post is a summary of the best python libraries for GraphQL. Python in recent years is starting to be on the list of top programming language. GraphQL is emerging but very promising query language and execution engine tied to any backend service.
Python is one of the most popular languages used in data science, machine learning and AI systems. GraphQL was introduced by Facebook as an alternative to REST and it’s popular of flexibility on handling complex systems.
Ariadne is a Python library for implementing GraphQL servers using schema-first approach.
Ariadne is a Python library for implementing GraphQL servers.
Features:
pip install ariadne
The following example creates an API defining Person type and single query field people returning a list of two persons. It also starts a local dev server with GraphQL Playground available on the address. Start by installing uvicorn, an ASGI server we will use to serve the API:
Start by installing uvicorn, an ASGI server we will use to serve the API:
pip install uvicorn
Then create an example.py file for your example application:
from ariadne import ObjectType, QueryType, gql, make_executable_schema from ariadne.asgi import GraphQL # Define types using Schema Definition Language () # Wrapping string in gql function provides validation and better error traceback type_defs = gql(""" type Query { people: [Person!]! } type Person { firstName: String lastName: String age: Int fullName: String } """) # Map resolver functions to Query fields using QueryType query = QueryType() # Resolvers are simple python functions @query.field("people") def resolve_people(*_): return [ {"firstName": "John", "lastName": "Doe", "age": 21}, {"firstName": "Bob", "lastName": "Boberson", "age": 24}, ] # Map resolver functions to custom type fields using ObjectType person = ObjectType("Person") @person.field("fullName") def resolve_person_fullname(person, *_): return "%s %s" % (person["firstName"], person["lastName"]) # Create executable GraphQL schema schema = make_executable_schema(type_defs, [query, person]) # Create an ASGI app using the schema, running in debug mode app = GraphQL(schema, debug=True)
Strawberry is a new GraphQL library for Python 3, inspired by dataclasses. An initial version of Strawberry has been released on GitHub. Strawberry was created by @patrick91 who is also an organizer of @pyconit. It was originally announced during Python Pizza Berlin.
pip install strawberry-graphql
Create a file called app.py with the following code:
import strawberry @strawberry.type class User: name: str age: int @strawberry.type class Query: @strawberry.field def user(self, info) -> User: return User(name="Patrick", age=100) schema = strawberry.Schema(query=Query)
This will create a GraphQL schema defining a User type and a single query field user that will return a hard-coded user.
To run the debug server run the following command:
strawberry run server app
Open the debug server by clicking on the following link:
This will open a GraphQL playground where you can test the API.
Graphene is a Python library for building GraphQL schemas/types fast and easily.
Graphene has multiple integrations with different frameworks:
Also, Graphene is fully compatible with the GraphQL spec, working seamlessly with all GraphQL clients, such as Relay, Apollo and gql.
For instaling graphene, just run this command in your shell
pip install "graphene>=2.0"! | https://blog.graphqleditor.com/top-3-python-libraries-for-graphql/ | CC-MAIN-2020-29 | en | refinedweb |
SYNOPSIS
#include <xosd.h>
-
- int xosd_set_bar_length (xosd *osd, int displayPercentage);
DESCRIPTION
xosd_set_bar_length changes the percentage of the display used by a slider or percentage bar. Normally the XOSD choses a sensible length for the bar, but you may wish to change the default behavior if there are only a small number of possible values to be displayed.
ARGUMENTS
- osd
- The XOSD window to alter.
- displayPercentage
- The percentage of the display to be used up by the slider or percentage bar, as an interger between 0 and 100. Setting displayPercentage to -1 reverts to the default behaviour.
RETURN VALUE
On success, a zero is returned. On error, -1 is returned and xosd_error is set to indicate the reason for the error.
ENVIRONMENT
- char *xosd_error
- A string describing the error, if one occurred.
BUGS
There are no known bugs with xosd_set_bar_length. Bug reports can be sent to <[email protected]>.
AUTHORS
The XOSD library was originally written by André Renaud, and is currently maintained by Tim Wright. This document was written by Michael JasonSmith. | http://manpages.org/xosd_set_bar_length/3 | CC-MAIN-2020-29 | en | refinedweb |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
Hi,
I'm interested in languages which do compile time restrictions on the values of variables. Specifically, does anything like the following exist:
x : int [0, 2, ... 10]
x = 50 //error
x = 3 // error
x = 2 // ok
x = rand() // error, rand() : [0.0 ... 1.0] doesn't match [0, 2, ... 10]
x = floor(rand() * 2) * 2 // ok [0, 2, 4] is a subset of [0, 2, ...10]
Thanks,
Alex
The theory that most fully generalizes this notion is that of "dependent types", which basically means types which can depend on or include values. This is an area of much active research, and a search for "dependent types" or "dependently typed" will produce lots of good stuff...
For a number of reasons, dependent type systems present barriers to practical application, or at least application in a naive way. So there have been a number of attempts to realize the benefits of dependent types without having to dramatically change the way we write programs (indexed types, GADTs + extensible kinds, etc.)
You can find quite a bit of discussion on these topics here on LtU. I'm sorry I don't have time to include links.
On the other hand, I feel a little bad pointing you at one of the more esoteric and experimental areas of PLT in answer to a relatively simple question. You should be warned that none of the dependently typed languages provides anything as straightforward as your example, and without considerable preparation, you might not even recognize them as doing the same sort of thing. If the only thing you're looking for is relatively simple bounds checking and enumerated types, you may find a system that does this stuff using, for example, constraint propagation. Lots of languages provide statically checked enumerated types, for example. However, you'll probably find that you quickly run up against cases that can't be (trivially) statically analyzed, and your language will need to provide a run-time checked way of getting dynamic values into a constrained type.
I believe that you can emulate this type of behavior with C++ templates and Haskell type classes, but it gets a bit esoteric. Perhaps someone else knows of a practical language that does this sort of thing in a more friendly way.
After all, what is compile time but an early step in run time.
Consider C++ template metaprogramming. It's the worst and most horrid mini-language in the world, right up there with InterCal and BrainF*k, but you can use it as a programming language in it's own right.
Now consider Ruby (or lisp or scheme or...). Classes and Functions are actually defined and defineable at run time.
What you need rather is to implement your strict checking language in Ruby (or Scheme or ...), 90% of the work can be done creating strict checking classes overlaying the standard primitives. 9% of the work can be achieved by directly dropping through to eval or module_eval.
ps: Check for a very nice implementation of Interval arithmetic.
What about when the code is running in an application where runtime errors are never acceptable (automotive control systems, for example)? The application may contain some complex, hard to statically analyse code, like the rand() example above.
In the extremely static case, the code may be correct but fail to compile as the compiler is too 'stupid' to work out that the code is correct. The code may also be incorrect. Either way, the programmer is forced to make it a bit simpler, to allow the compiler to figure out what is going on.
In the extremely dynamic case, with all strictness analysis being done on the executing program, there may be cases where runtime errors occur. In some areas that may simply not be acceptable, so wouldn't the extremely static language be better for these areas?
All language that I know of have runtime errors, no matter what kind of typing they have.
A static type system might prove that a program doesnt have any type errors. But it could be that its just moves the errors into the non-type error field.
edit: Ok, I'm not sure if you get any runtime errors in Charity. But I'm not sure if want to program in Charity anyway.
Yes, all languages can have runtime errors (what if the hardware fails, after all?) But, static checking can remove a lot of those runtime errors. If your code runs in an application environment where runtime errors can't be tolerated, then static checking would seem to be a better direction to head in.
At least, for these sorts of applications.
Actually the static typing doesn't remove those errors, it just finds them. Removing them has to be made manually. And there has been studies that show that for large programs there comes a point there removing errors generally causes new errors in a proportionall fashion. So I just wonder: can we be sure that then we remove static type errors, that we dont introduce just as many new non type errors?
This is why it's important to strive for languages in which local changes have local effects. On the other hand, part of the point is that when you correct type errors, you have more assurance that your code is correct, rather than less. Of course, correcting the type error can still reveal an error that was in the code and that isn't addressed by the type system, but it's quite unusual to introduce such errors in the process of correcting type errors.
How tight is your spec and how much of it can you check on a computer (optionally, without requiring an undecidable typechecker)?
And of course, how do you ensure the spec says what it's supposed to? :-)
The post on Usenet
"There have been several times I have thought
of creating a bug tracking system that tracks how many bugs are caused
by a previous "bug-fix." The number of such bugs for each earlier bug
is a major determinant of when existing software has become too
"brittle" and has to be replaced. But it also allows you to determine
when, and in fact if, a new development project is ever going to be
complete."
Industrial Strength Exception Freedom
John Carter: After all, what is compile time but an early step in run time.
The obvious crucial distinction being that compile time occurs before you deliver the software to the user, and any error that the compiler can prove can't occur at runtime won't, so by definition you can create more robust software from the user's point of view with any statically-typed language than any dynamically-checked language.
Now, as has been discussed here ad nauseam, if the statically-typed language is essentially Standard ML or O'Caml or Haskell98, then without jumping through various encoding hoops, the static type system probably isn't going to be able to prove anything dramatically more interesting than what a good test suite in a dynamically-checked language is. So you have a couple of options: jump through those encoding hoops (cf. "lightweight dependent typing," the various pieces on phantom types, etc.) or use the GHC extensions to Haskell98 (GADTs and so forth); or use one of the more powerful, but even less mainstream than Haskell, functional languages.
We're also ignoring the issue of scaling these checks. The larger the codebase, the less likely that you're going to have a comprehensive test suite with adequate coverage. Consider Tim Sweeney's POPL '06 presentation, in which he discusses Gears of War, which has some million lines of C++ and UnrealScript. Tim discusses four static language features which he calculates would eradicate approximately 50% of the bugs in Gears of War—that is, the codebase wouldn't compile if those bugs were present and the compiler would tell you where the bugs were. Now imagine test coverage that found every instance of a bug caused by the lack of those four language features in a million line codebase. You'd spend enough time writing the tests to implement a compiler for the language with Tim's four features, a compiler that you could then use on multiple projects.
When you really think about static typing vs. dynamic checking coupled with programming in the large, it rapidly becomes apparent how fundamentally unserious the dynamic checking advocates are.
In no way am I invoking the "dynamic typing is better than static". I merely stating static typing can relatively easily be recreated on top of a dynamic one.
Consider C++ template metaprogramming.
It is a truly horrible language, but it is a programming language whose primitives are static types.
If I were to point to the one great weakness of C++ is that you need a truly twisted genius to metaprogram in C++ templates.
Andrei Alexandrescu has this twisted genius in spades, but the rest of don't.
Is doesn't need to be so.
The types themselves can just be (yet another) object in a dynamic OO programming language , and you can program with them the same way as you program with any other object.
Thus instead of creating yet another Bondage & Discipline language like C++, you can merely create a thin extension to a very dynamic language like ruby / scheme / ...
All static type checking is, is the evaluation of constant expressions prior to running the program. (A Good Hint to this is the existence of RTTI in C++. RTTI and dymnamic_cast is what you do when you have a type expression that is not constant.
Static type checking is just constant folding of type related assertions.
Consider this fragment of Ruby..
def myfunc( an_int)
raise "Static type check failure!" if an_int.class != FixNum
an_int.split(,) # FixNum's do not have such a method and we have
# just asserted an_int ISA FixNum.
end
my_func( "a string")
There is no reason why that fragment, since it involves constants, cannot be evaluated
So I'm suggesting instead of creating a super strict static checking language, merely create the thinnest extension of a language like ruby or scheme or ....
Static type checks merely become assertions of constant boolean expressions involving instances of "type" objects. Where the "type" objects are merely vanilla objects within the base programming languages that have the appropriate attributes and methods to model the types used.
Once the instance of the type object has been created, it can be marked as "frozen" or "constant".
On loading a module, do the obvious optimization... ie. "constant folding", evaluate all expressions involving objects that are "constant" or frozen at the time of loading.
Static types check failures merely appear as assertion failures at module load time.
Even the speed of a static language can be recovered in a dynamic one in that some dynamic languages permit you to explicitly state, if you know beforehand, which instance/implementation of a virtual function you will invoke. Thus at the constant folding stage, one the expressions you can constant fold is the lookup in the virtual method table.
...I do feel that since I can get the advantages of static typing (type safety and speed) via a thin extension to a dynamic language, but not the other way round, it does suggest a certain, umm, ordering in my language preferences.
:-)
This is very helpful, and while I am a fan of soft typing systems for dynamically-checked languages, the major point remains that either it's being done after the software has been deployed, or it becomes static typing that doesn't check everything. I mean that last literally: there is code in your program that isn't checked, vs. a statically-typed language in which everything is checked, and that checking prevents you from saying some perfectly valid things that we don't yet have a type system to express. I suppose a soft typing system that told you what it gave up on would be a kind of happy medium. And in the limit, what we end up with, of course, is the inaptly named Dylan, which sits precisely at this intersection of static typing and dynamic checking.
Furthermore, multi-staged programming blurs the distinction between compile-time and runtime in such a way as to preserve the properties you want at each stage, and introduces runtime code generation as an added bonus. Note that this is distinct from, but subsumes, meta-programming, of which C++'s template system is an accidental, and not exemplary, instance. So this is a field worth continuing to watch, but unfortunately, there aren't many multi-stage languages as of yet; they're the most "out there" of the research languages.
So I remain convinced that the future will get us out of the static/dynamic conundrum, and in the meantime I also remain baffled as to just what kinds of things people find themselves able to express in their favorite dynamically-checked language that they find themselves unable to express in their closest-to-favorite statically-typed language. But I suspect, given my experience, that I'll never hear a satisfactory answer to that—that is, given a set of functional requirements, someone could implement those requirements in a dynamically-checked language and I could implement them in O'Caml, and each of us would find the other's solution abhorrent for one reason or another. :-)
I've found myself faking first class modules in haskell a few times now, and I'm pretty sure that sooner or later I'll find a use case based on them and functors that requires impredicative polymorphism. I've not found an alternative that doesn't sacrifice some typechecking ability - AIUI switching to ocaml would likely do so as well, I'm rather fond of monads for restricting the range of side-effects.
I mean that last literally: there is code in your program that isn't checked, vs. a statically-typed language in which everything is checked, and that checking prevents you from saying some perfectly valid things that we don't yet have a type system to express.
As someone else pointed out you can always go with something like SPARK Ada which has another layer of checking above static types which can cover exactly the sort of cases outlined in the original post on this topic. Every argument for static types and their improved checking and correctness can be converted into a perfectly analogous argument for formal specification over static typing due to improved checking and catchign errors that static typing would simply miss.
Personally I would like to see people recognise that the whole thing, from completely dynamic types with no checking at all (even runtime checking) up to full formal specification and theorem proving for checking is simply a continuous spectrum of choices of exactly how much you wish to specify about your system, trading flexibility for assurance. There is no "correct" spot on the spectrum, only a choice as to which point in that trade off provides the sweet spot for the project you are working on. Some projects require a high degree of flexibility and don't require excessive assurance. Some projects require strong assurance, and development flexibility really isn't that important in comparison. Use the tool that suits the job. There are going to be a lot of different tools to fill the different niches. Being familiar with all the possibilities is probably a good idea.
...and the question basically comes down to the value of the phase distinction. What would you like to be able to say, iron-clad, about your code before it gets deployed? Conversely, what would you like to be able to say, iron-clad, about your code after it gets deployed? There's a big world in which you don't get to change your code after deployment except via replacement. There's another big world in which you can incrementally apply patches as long as the code isn't running, and another big one in which you can't stop running the code. One reason I'm fascinated by systems such as Acute is that they attempt to address safe upgrade without sacrificing either abstraction or type safety. But in the meantime, we continue to be haunted by the phase distinction and the limitations on the current state of the art in allowing us to express what we'd like to express.
In the extreme, static typing becomes the verification of a full formal specification.
Static typing is merely constant folding. It isn't anyway so ambitious as to be program proving.
What would be a more interesting goal for enhancements to static typing is to take duck typing to it's logical conclusion.
ie. Accumulate every operation that a function parameter participates in as a list of methods which it responds to.
Then check (statically) every call of that function and verify that the calling instance actual arguments all respond to that list of methods.
If some of the actual arguments are also parameters, just feed the list transitively up the call graph.
And lo, you have a statically checked duck typing system.
This is very incorrect. Static typing involves proving theorems about programs. These theorems may not be that exciting to you, but they are theorems. And there are type systems that prove very interesting theorems (race-freedom, or deadlock-freedom, or ...). I don't believe that all static analysis of programs is really type systems, but type systems are a static analysis of a program that proves certain things about that program.
Incidentally, the same is true for constant folding.
To go a step further, there're some well-understood turing-complete type systems out there - so the real question is "can a type system get enough input to prove what I'm interested in?"
Duck typing is a form of structural typing (as opposed to nominal typing). Statically-typed languages with structural subtyping have been around for a long time.
Static typing is merely constant folding.
Static typing (ST) is far more than constant folding (except in trivial type systems).
ST consists of verifying your algorithms against an "embedded logic". ST is a simplified form of abstract interpretation (AI); ST is thus simpler, faster, but less precise than AI in every way (that I can see). Unfortunately, AI seems much more difficult and time-consuming (compilation-wise - correct me if I'm wrong!).
Can you can have abstract interpretation to verify algorithms in dynamic languages like Ruby though? Perhaps.
Others have commented that type systems can prove deadlock-free code, and other interesting properties, so I won't go into that. :-)
or it becomes static typing that doesn't check everything. I mean that last literally: there is code in your program that isn't checked, vs. a statically-typed language in which everything is checked
It's not as simple as that. When you encode latently-typed data into a statically-checked language, not "everything" is statically checked. You're simply wrapping some things in such a way as to escape the typechecker. The typechecker is satisfied by checking the type of a wrapper, but the contents of the wrapper remain latently typed.
In such programs, there's usually information about the type of the contents of a wrapper which can either only be known at runtime, or which is known to the programmer but not easily expressed to the typesystem (e.g. knowledge about constraints on the input data).
The value of static checking tends to be pretty low in these cases — the only reason to use it is because you care about static checking in the rest of the program, the bits that aren't dealing with latently-typed data.
While this is a good point—I'm reminded immediately of serializing statically-typed data over a network connection, for example—I wonder to what extent, if any, this isn't addressed by any language that has a good ADT-building system. If I'm using a member of the ML family (loosely defined to include Haskell) and I define a module that exposes an abstract type that can only be manipulated by the functions exposed by the module, in what meaningful sense is the data "latently typed?" Granted, the module's implementation might be very arcane, making use of Obj.magic in O'Caml or unsafe operations in Haskell, but once the module exists, it really does define a type in the type system, and that type really can be inferred, reasoned about, etc. using the concepts that are enforced by the module system.
I suspect strongly that this isn't appreciably different from what you meant, and without specific examples it will prove difficult to say much more. I would only like to point out that in O'Caml, for example, thanks to modules and polymorphic variants, it has proven relatively straightforward to provide a type-safe interface to OpenGL, a C API that is somewhat notorious for essentially embedding a "latently typed" API in such a way that C's type system is too weak to be of any use in enforcing the type distinctions. The question, to me, seems to revolve primarily around what level of granularity you think a module should be defined at as to whether values of the exposed types qualify as "latent" or "static."
On loading a module, do the obvious optimization... ie. "constant folding", evaluate all expressions involving objects that are "constant" or frozen at the time of loading.
Static types check failures merely appear as assertion failures at module load time.
Can you point to places where this is being done?
I.e. where such assertions are being caught at module-load time.
Not that it looks like a difficult optimization -- I just haven't seen it used thus (other than proofs-of-concept).
I have a feeling that things may not work out so cleanly when you really get down to implementing this. Typing is delicate. It's hard to pack everything you want into a type system while still ensuring your checker doesn't wander off into halting problem land. Allowing free-form assertions on types is probably not an option if you want to guarantee that you can check them statically.
Typing constructs are just part of a DSL that's integrated into a statically typed language. It optimized to match the problem domain, making it concise and efficient to process. Though I've seen some efforts to share constructs between the two, I don't think you can take an existing language like Ruby and make it work.
I'd much rather write:
def myfunc(an_int : FixNum)
...
end
What are the advantages of using the language's "normal" syntax to make assertions on types? I suppose the language's grammar may become a little simpler without the type-related syntax. You even might be able to share code between the type checker and the optimizer. Anything else?
BTW, I'm curious... did you mean "constant folding" in a literal sense? I think I understand how it'll handle the simple cases, but what about trickier features like parametric polymorphism, and subtyping?
...the static type system probably isn't going to be able to prove anything dramatically more interesting than what a good test suite in a dynamically-checked language is.
I think you can take the "probably" out of that statement: a sufficiently good test suite is going to be able to prove essentially anything provable, while any practical static type system is going to be strictly less powerful. (Are there any type system implementations incorporating a theorem prover and allowing any predicate to be a type description? That might be fun to play with, but I can't actually see myself using it, much less anyone else.)
The primary power of a static type system is in what it allows you to not do (ahem): The type system means you don't have to come up with quite so many tests. I have even more doubts about the existence of a "sufficiently good test suite" than about the theorem-proving type system.
I can't speak for anyone else and I may be stating the obvious but...
Isn't the primary purpose of type systems is that it allows us to reason about types? And whether you favour static typing or dynamic typing, types are a fairly important component of programming.
Because it's fresh on my mind, let's take a simple example from SICP 2.1.1 where the authors define a set of functions to operate over Rational types. In my translation to Alice, I did two translations - one that's a fairly literal translation of the Scheme code - and one that takes advantage of ML modules. Specifically, the ML signature for the Rational type gives us:
signature RATIONAL =
sig
type rational
val make_rat : int * int -> rational
val numer : rational -> int
val denom : rational -> int
val add_rat : rational * rational -> rational
val sub_rat : rational * rational -> rational
val mul_rat : rational * rational -> rational
val div_rat : rational * rational -> rational
val equal_rat : rational * rational -> bool
val print_rat : rational -> unit
end
Now what does this signature buy me? Well, the Scheme code and the ML code basically give the same functionality, so it couldn't be a matter of computability. The ML code also imposes some hard limits here over the definition and interface to these functions, so I guess one could say the static Rational type in ML is less flexible than the implied types of Scheme. And Scheme unit tests could perform a parallel universe of unit tests to impose similar type contracts, so I don't think it's necessarily an advantage of testing.).
Anyhow, I do think it important to bear in mind that types are an important force in PLs. But let's not get carried away and say that the only purpose of types is to reduce the number of runtime tests.
But arent you confusing static typing with abstract datatypes now?
Type abstraction is the primary purpose of typing systems. You have types in the variables, types in the expressions, types in the functions, types in the modules, and types of the ADTs. We call them static type systems, only because of one behavior associated with these type systems - the ability to check type consistency at compile time.
If you want to restrict this at a lower level, just grab one of the functions within the signature and run with it. We can objectively say that the make_rat function takes two integers and returns a rational. We can say this without resorting to the implementation. How is this contract guaranteed? Static type checking of course. But was static type checking the goal of this exercise? Or is static type checking simply the glue that makes reasoning about the types possible?
To me, the only connection between ADT and ST is that former makes the later easier. Making a ST language without ADT would be difficullt, and involve a lot of structural typing. And in some way would be a wasted effort as ADT is a quite good thing. And this is becourse ADT makes it easier to reason about your types. Easy enuogh that it could be done at least in part by a machine.
So i think that causallity is in the other way. ST demands a typesystem that is easy to reason about. It doesn't give it to you.).
I agree that the important thing is the ability to reason about types. But note that this can be achieved without requiring static checks, or the associated "limitations being imposed that are necessary for this level of reasoning to be meaningful".
Something like PLT's provide/contract gives this ability to declare, reason about, and enforce types (as well as other things which most statically-checked systems wouldn't recognize as types). It offers the same sort of capability as an ML signature. However, it doesn't require that typechecking be static. Your example would look something like the following. I've shown it in the context of a module, so it has some extra stuff - the provide/contract expression is the part that's equivalent to the ML signature definition.
provide/contract
(module rational mzscheme
(require (lib "contract.ss"))
; Note: 'rational?' already exists in Scheme, chose not to override
(define (rat? x)
; implement appropriate predicate (likely just use a struct predicate)
)
; name the contract - for convenience, readability & friendly error msgs
(define rational (flat-named-contract 'rational rat?))
(provide/contract
(rat? (any/c . -> . boolean?))
(make-rat (integer? integer? . -> . rational))
(numer (rational . -> . integer?))
(denom (rational . -> . integer?))
(add-rat (rational rational . -> . rational))
(sub-rat (rational rational . -> . rational))
(mul-rat (rational rational . -> . rational))
(div-rat (rational rational . -> . rational))
(equal-rat? (rational rational . -> . boolean?))
(print-rat (rational . -> . any/c)))
; implementation goes here
)
Test it with an erroneous call:
(numer 10)
=> repl-8:1:1: top-level broke the contract (-> rational integer?) it had with rational on numer; expected <rational>, given: 10
Systems that use such contracts share many characteristics with statically-checked systems. Working with such systems demonstrates quite clearly that many of the benefits people tend to associate with static typechecking don't actually have that much to do with the static checking aspect. Plus, of course, these contracts are capable of expressing and enforcing constraints which most typesystems can only dream about.
A nice intro to these contracts is linked from here.
I didn't really intend to compare static and dynamic type systems. Just interjecting that sometimes we lose sight of what the purpose of having a type system within a language really aims to achieve. In the case of ML, I view the type system to be a sort of DSL that rides on top and intertwines with the code. In the case of Scheme contracts, it is similar in purpose but uses more of a proxy pattern to define and enforce the types.
Personally, I still consider it easier to work with and reason about types in ML, as the syntax is highly dedicated to types. But then Lisp/Scheme has always shown to be amenable to any programming style you can possibly throw at it. As with all PL issues, the question back is whether having availed itself of libraries that can be used for typeful programming, will that lead the community to avail itself of those techniques? Or will it just be used as an argument over the never ending static/dynamic debate? Whichever side wins the longstanding argument, I'd hope the result is still that types, contracts and seperation of concerns is central to any PL and/or libraries.
I agree that the important thing is the ability to reason about types.
I think the important thing is the ability to reason about programs, not types. The runtime contract check above does not prove that no module (even the ones not yet written) can violate the contract. If you take that benefit away from static checks, what else is left anyway?
It offers the same sort of capability as an ML signature. However, it doesn't require that typechecking be static.
This almost reads like static checks are something that a type system imposes on people against their will. Let's assume there was a magical analyzer that checked contracts statically in scheme programs. Would you elect to not use it?
I think the important thing is the ability to reason about programs, not types.
Yes. To rephrase what I wrote, the important thing about types is the ability they provide to reason about programs. To do that, though, one reasons about types in programs.
As I wrote in Why type systems are interesting, all programmers rely on the ability to reason about static properties of programs, even in dynamically-checked languages. Types provide a framework with which to reason about (some of) those properties. That reasoning can be done by automated systems, as well as by humans. Human reasoning about types is more important than static automated reasoning in the sense that it always has to occur, regardless of whether static automated reasoning is occurring.
The runtime contract check above does not prove that no module (even the ones not yet written) can violate the contract.
That contract expression is agnostic with respect to proof. A static checker could certainly use it to assist with a proof.
If you take that benefit away from static checks, what else is left anyway?
I'm not taking any benefit away from static checks, I'm saying that when you take automated static checks away, the types don't necessarily disappear. The benefits of automated static checks should not be conflated with the benefits of types.
Dynamically-checked languages demonstrate what you get if you remove static checks entirely. Programmers still have to reason about types in those languages. (More pedantically, they still have to reason about type-like static properties, which are close enough to types that the distinction is little more than a quibble in this discussion.) Tools that help deal with types are still useful in that context, whether or not they operate statically. Test suites combined with contracts, or even with less sophisticated assertions, work very well in practice — I wrote about this in The unreasonable effectiveness of testing.
Automated static checking is not essential to get benefits from dealing with and reasoning about types. Such checks can be useful, but they have costs, and doing without such checks doesn't mean doing without types, or many of the benefits of types. It's worth considering type systems as useful constructs independent of the automated static checking of programs that use those systems.
This almost reads like static checks are something that a type system imposes on people against their will.
Type systems don't necessarily impose automated static checks, but statically-checked languages do. In most such languages, you don't get a choice not to statically check a piece of code. So in those languages, the static checks and their ramifications can, indeed, be against one's will.
Let's assume there was a magical analyzer that checked contracts statically in scheme programs. Would you elect to not use it?
I would certainly use it, if it worked well enough, and if I didn't have to globally transform the rest of my program to support the analyzer, in the same sort of way that statically-checked languages require one to do. But the reason we're having this discussion is that such an analyzer doesn't exist. Instead, we have a choice between writing programs in a way which suits the requirements of automated static type checkers, or living without such checkers and dealing with and checking types in other ways.
The contracts are checked at runtime, yes??
(numer 10)
If I am right, then that plus your comment, "Plus, of course, these contracts are capable of expressing and enforcing constraints is goinwhich most typesystems can only dream about," are essentially equivalent to my earlier comments about the relationship between static type systems, dynamic type systems, and testing. Correct?
Contracts are a good basis for a test setup, and are, overall, a really good thing if your system will do something horrible if the conditions embodied by the contract are violated or when you need to figure out what went wrong. In fact, given the strictly weaker nature of a static type checking system, contracts are a win in any language. But I do not see how they can change the fundamental relationship.
The contracts are checked at runtime, yes??
Only if you're inclined to install code which could result in 2am phone calls without testing it well. In my experience, in practice, this doesn't happen. If 2am phone calls are going to happen, they could just as well be the result of a bug that's not statically preventable. Testing thorough enough to catch such dynamic bugs will also catch the bugs that would have been caught statically, especially if you're using assertions or contracts so that type bugs aren't likely to go unnoticed when they do occur.
In general, though, I don't think I'm disagreeing with your overall perspective. However, the "fundamental relationship" you mention is perceived by some people in a way which connects types, and the benefits of types, more strongly to automated static typechecking of programs than is actually warranted. There's also a tendency to underrate the effectiveness of testing, in part because of a lack of recognition of the extent to which testing benefits from the same properties of programs which allow type inference to occur..
I agree that its orthogonal, presuming you check every contract every time, but how can this be done efficiently?
That is, given a super-smart soft-typechecker, and contracts that express more than it can statically deduce (as mentioned several times already), won't the contracts need to be checked every time a new value of the given type is computed?
Or put another way, won't using the "smart" soft typechecking load even more complexity on the programmer, since he/she must carefully consider which checks will be removed at compile time, and which runtime checks will be done too often to be practical?
On another note, what are the smartest soft-typecheckers out there? How do their compile-time analyses compare to e.g. Haskell or ML?
Or put another way, won't using the "smart" soft typechecking load even more complexity on the programmer, since he/she must carefully consider which checks will be removed at compile time, and which runtime checks will be done too often to be practical?
The same thing could be said of high level functionall language with optimising compilers. That they load more complexity on the programmer, since he/she must carefully consider what machinecode the program will end up as. And that this is a lot easier in a more lowlevel language.
The solution is to not worry to much and just trust the compiler.
Its called abstraction.
Testing is better than a static type checker because testing will give you more info on WHY it gone wrong; insted of it does not pass the test and give you a line number.
Given the discipline not to violate the interface ("Doctor, Doctor, it hurts when I hit myself on the head with a hammer...") or a sufficient test setup that will throw a hissy if you do violate the interface, what prevents you from reasoning about the type of Rationals without static type checking?
Now, both discipline and "sufficent test setups" are in rather short supply, and a good static type system should both encourage reasoning about your types and make both the types and reasoning as pain-free as possible. But I don't see how static type checks are going to allow you to do such reasoning, or how the lack of such checks, by itself, would interfere.
I absolutely agree that reasoning about types is a very valuable abstraction, and that a good static type system makes such reasoning the easiest approach to take. Far too often in my experience, programs in languages without static type checks wind up dealing the representations rather than types. But that is a weakness of the programmer, not the language.
mcguire: I have even more doubts about the existence of a "sufficiently good test suite" than about the theorem-proving type system.
Me too, because there are several existence proofs of the validity of the theorem-proving type system idea: you can find a brief list of them in my post here.
mcguire: a sufficiently good test suite is going to be able to prove essentially anything provable, while any practical static type system is going to be strictly less powerful.
May I kindly remind you of the folklore fact that testing can only prove presence of errors (i.e. disprove certain properties), while typing can prove their absence (i.e. actually prove properties)?
Moreover, there are quite interesting properties that are inherently unapproachable by testing. For example, race and deadlock freedom were already mentioned.
Hence my qualifier, "sufficently good." :-)
If memory serves, the statement is actually about full-on, formal program verification, not typing. Formal verification is strictly stronger than any practical type system that I am familiar with.
...properties that are inherently unapproachable by testing. For example, race and deadlock freedom....
mcguire:.
You've hit on a crucial point: there comes a point at which you cannot separate the type system from the language's semantics. The given example is a great one: the type system and language in question (presumably TyPiCal) go together, because you can't expose, e.g. the POSIX thread model of concurrency and enforce the rules of the Pi Calculus using them. As Benjamin Pierce points out in TAPL, generally speaking, language design and type system design actually do work hand-in-hand. It's somewhere between very difficult and impossible to retrofit a type system onto a language that was designed without one in mind. I think the line into "impossible" gets crossed when the semantics being enforced by the type system preclude the implementation of alternative models in the language. So it isn't true that you could accomplish the same thing with tests given a "general-purpose" (presumably meaning one with the usual shared-state concurrency semantics) language.
If memory serves, the statement is actually about full-on, formal program verification, not typing. Formal verification is strictly stronger than any practical type system that I am familiar with.
If your type system is sound, then it is a formal verification tool. Unfortunately, most mainstream languages do not have sound type systems. But the ones usually preferred here (e.g. ML) have.
As an aside, this is precisely the reason why I think that soft typing and friends are not worthwhile - they are unsound by definition.
But, I do not see why such a type system could not be replaced by an appropriate test. That test may require a preprocessor that walks over the code using exactly the same algorithm as the type system, but that is just a matter of programming.
Then your test is (a most likely ad-hoc, informally-specified, bug-ridden, slow implementation of) a type system. ;-)
Speaking of unsound type systems, does anyone know of work done on probabilistic type systems? One in which instead of relying on exact proofs, you use probabilistic proofs? Let's say you start with a Turing complete type system. You take a proposition, give it your best shot at proving it in a conventional manner, and if that proof seems to be taking too long (i.e. is might be undecidable), then you switch gears and start trying to find a random counter example to try to disprove the conjecture. If after 1 billion tries (or whatever) you can't find a counter example, you call it good enough, and declare it well-typed. Is there anything similar to this out there?
Tim discusses four static language features which he calculates would eradicate approximately 50% of the bugs in Gears of War—that is, the codebase wouldn't compile if those bugs were present and the compiler would tell you where the bugs were.
Attention all dynamic programmers: please start being more serious about arithmetic overflow and access to uninitialized memory!
If the compiler doesn’t beep, my program should work
[...]
§ Accessing arrays out-of-bounds
§ Dereferencing null pointers
§ Integer overflow
§ Accessing uninitialized variables
I don't think he's complaining about overflow or uninitialized variables in themselves, only that they can't be discovered until runtime. (Note to the pedantic: yes, testing is runtime. :-) )
Your an idiot it is weak ASM/C type systems that your talking about not Lisp/Scripting/Smalltalk Dynamic Type Systems.
I'm quite confident Luke is not an idiot, and that there is no need for the aggressive nature of your opinionated posts. If you want to add something of actual value, then please do.
The point I'd really meant to make is that there's more than one way to skin a cat. The classes of programming error under discussion can be attacked either with static analysis or with higher-level languages that make them disappear in a puff of smoke. There's no concept of an uninitialized variable in Erlang, for example.
Occam's razor makes me much prefer removing problems to retaining them and adding extra machinery to detect them. Why wait until compile-time to catch errors that you could have prevented at brain-time, anyway?
Saying that types detect and reject erroneous programs follows the Curry view, where you define the semantics of an untyped language and restrict the set of allowable programs with a type system.
In the Church view, semantics comes after typing, so there is no such thing as an ill-typed program, just terms that are invalid because they don't type, like most languages have text that is not a valid program because it doesn't parse.
From that point of view (for a programmer that thinks of types as a part of code), a type system is also a way of making certain errors nonsensical. You can write down ill-typed things that look like they might have certain problems, but you could also look at "Erlang" with statements like "X" or "X := X + 1" and think they might create an uninitialized variable, or use mutation. If you really think of types as part of the language, type errors are equally nonsensical and inconceivable. That's exaggerating how I actually think when coding, but it isn't completely untrue.
Hi,
I'm interested in languages which do compile time restrictions on the values of variables. Specifically, does anything like the following exist:
x : int [0, 2, ... 10]
x = 50 //error
x = 3 // error
ADA is a programming language that one can declare user-defined ordinal types.
Many popular programming languages of today do not treat values as types, although they implicitely use values as types. In my opinion, this is a major mistake that lowers performance of the final code, as well as performance of the developers.
Take the for loop, for example:
for i = 0 to 10
next i
the type of i is [0..10] in the first iteration, [1..10] in the 2nd iteration, [2..10] in the next iteration etc.
[0..10]
[1..10]
[2..10]
Another example is that every pointer operation (dereference, access from offset etc) has a type of pointer value [1..max_int], yet almost no programming language has a restriction that a pointer should not be null in order to use it.
[1..max_int]
Yet another example is that of array indices: most programming languages use integer types for array indices, but the real type of an array index is [0...array_size - 1].
[0...array_size - 1]
All this means that it is values that are types and not categories of values only. Categories of values are logical union types. For example, the set of integers is the logical union of all integer values.
Lots of type information that exists in programs is never used by compilers, thus missing lots of chances for static optimization of code.
All static type systems put "compile time restrictions on the values of variables". The real question is, "What limits can you accept on those restrictions?"
For example, in a Ruby collection object (eg.Hash and/or Array and/or Set) I put all kinds of things into them.
Interestingly enough 90% of the time in any one instance of a collection I put one and only one class of object into the Hash, and the other 9.99% of the time I put a objects which are all genuine Liskov certified subclasses of ancestor class and on extracting them from the collection I only every treat them as the ancestor.
The one thing I expected to do when I first learnt OOP's was to have "pencil box" collections and be able to say, "With each object in this pencil box, write"
The pencil would then scribble in pencil, the pen would write in ink, the ruler would do nothing, the rubber would rub out,...
I guess part of the reason why I don't have more "pencil box" collections in my programs is I grew up writing in statically typed languages like Pascal.
The other part is most OOP languages unhelpfully would thow an NoMethodError on telling a ruler to write instead of just doing nothing....
So again I question the direction, should we add generics to our collections so we can type check them more harshly, or should we relax even more, and do nothing on invoking an absent method. And using the null object pattern to do nothing in response to any method on nil?
I do not see a reason why both solutions can not co-exist in the same programming language. If you want an array of Pens, then the language should allow you to instantiate an array of pens. If you want an array of anything, then the language should allow you to instantiate an array of anything.
The division between static and dynamic languages is an artificial one. Normally, it should not exist: a good programming language shall offer both. Why it is not so, I think others more knowledgable can explain that.
The Merd programming language has something like that:
When you start to place constraints on types you are essentially proposing a theorem... (like all x^2 are positive) which would be a function like:
type PositiveInt = x :: Int where x >= 0
f :: Int -> PositiveInt
f x = x * x
So what is needed is to combine an automated theorem prover with a compiler... Then you can statically guarantee really useful things. If the theorem prover cannot prove the theorem the program would not be compiled.
I am working on a language based on these ideas, originally to be called "Russel", but unfortunately there is already a language (although obscure) called Russel (suggestions for a new name welcome).
I don't see how a dynamic language could offer this kind of sophisticated static type checking... You could of course implement a theorem prover, but nothing would tie the theorem to the code to be executed, you could for example in a dynamic language do:
f = "some function"
if prove f then exec f else fail
But there is nothing to ensure you remember to call prove on the function... and besides you would still need to express the theorem to be proved... and as the theorem _is_ the type signature this is infact making thins more complex not simpler.
Had you considered Skolem? Thoralf Skolem was a mathematician who did a lot of interesting work in mathematical logic, and some of the foundational work for model theory, among many other things. I did a quick search and it desn't seem to be used by any other programming language, and is both distinctive and pronounceable: important features in a language name.
dbaelde@igloo ~ $ ledit cduce
CDuce version 0.4.0
# let i = ref (0|2|4--10) 0 ;;
val i : { get=[ ] -> (0 | 2 | 4--10) set=(0 | 2 | 4--10) -> [ ] } = { get= set= }
# i := 2 ;;
- : [ ] = ""
# i := 3 ;;
Characters 5-6:
This expression should have type:
0 | 2 | 4--10
but its inferred type is:
3
which is not a subtype, as shown by the sample:
3
I'm no CDuce expert and I haven't used it for a long time nor for serious projects, so I won't say more. | http://lambda-the-ultimate.org/node/1373 | CC-MAIN-2019-43 | en | refinedweb |
See for details.
--- Davanum Srinivas <dims@yahoo.com> wrote:
> JAXB is needed for JAXRPC 2.0 implementation, which in turn will be needed in our future
J2EE
> App
> Server project Geronimo.
>
> -- dims
>
> --- Daniel Rall <dlr@finemaltcoding.com> wrote:
> > Jochen Wiedmann wrote:
> > >
> > > Ted Leung wrote:
> > >
> > >
> > >>>2. Does xml.apache.org have plans to develop a JAXB implementation?
> > >>>
> > >>>
> > >>
> > >>Not directly, although it's likely that XMLBeans will become JSR-31
> > >>compliant.
> > >
> > >
> > > As a side note, one could use JaxMe from within XMLBeans, saving a lot
> > > of work.
> > >
> > > XMLBeans and JaxMe are both layered applications. XMLBeans has more layers,
> > > but both share a layer for parsing a schema syntactically, a layer for
> > > building the logical schema structure from the syntax layers result and,
> > > finally, a source generation layer.
> > >
> > > Combining JaxMe's source generator and XMLBeans (very powerfull) schema
> > > parser is not straightforward: The XMLBeans parser requires extensions.
> > > But that step is required, as these extensions are specified by JAXB.
> > > For example JAXB requires to know, given an arbitrary element, the
> > > outermost syntactical schema with the elements namespace as target
> > > namespace - an idea completely unknown to XML Schema.
> > >
> > > Likewise JaxMe is not yet ready for accepting a schema from other sources
> > > than its own parser. (This will change anyways, to generate classes from
> > > Java Beans or relational database schemas.)
> >
> > Please excuse my ignorance, but what differentiates these technologies
> > from something like Betwixt <>?
> >
> > I'm currently -0 on this VOTE.
> >
>
>
> =====
> Davanum Srinivas -
=====
Davanum Srinivas - | http://mail-archives.us.apache.org/mod_mbox/ws-dev/200309.mbox/%3C20030904001926.60586.qmail@web12801.mail.yahoo.com%3E | CC-MAIN-2019-43 | en | refinedweb |
Visual C++ change history 2003 - 2015
This article describes all the breaking changes from Visual Studio 2015 going back to Visual Studio 2003, and in this article the terms "new behavior" or "now" refer to Visual Studio 2015 and later. The terms "old behavior" and "before" refer to Visual Studio 2013 and earlier releases.
For information about the latest version of Visual Studio, see What's new for Visual C++ in Visual Studio and Conformance Improvements in Visual C++ in Visual Studio.
Note
There are no binary breaking changes between Visual Studio 2015 and Visual Studio 2017.
When you upgrade to a new version of Visual Studio, compiled by using a different version of the compiler. Also, when you upgrade an EXE or DLL project, make sure to upgrade the libraries that it links to. Don't pass CRT (C Runtime) or C++ Standard Library (C++ Standard Library) types between binaries, including DLLs, compiled by using different versions of the compiler. For more information, see Potential Errors Passing CRT Objects Across DLL Boundaries.
You should never write code that depends on a particular layout for an object that isn't a COM interface or a POD object. If you do write such code, then you must ensure that it works after you upgrade. For more information, see Portability At ABI Boundaries.
Additionally, ongoing improvements to compiler conformance can sometimes change how the compiler understands your existing source code. For example, you might find new or different errors during your build, or even behavioral differences in code that previously built and seemed to run correctly. Although these improvements aren't breaking changes like the ones discussed in this document, you may need to make changes to your source code to resolve these issues:
C Runtime (CRT) Library Breaking Changes
Standard C++ and C++ Standard Library Breaking Changes
MFC and ATL Breaking Changes
Concurrency Runtime Breaking Changes
Visual Studio 2015 Conformance Changes
C Runtime Library (CRT)
General Changes
Refactored binaries
The CRT Library has been refactored into a two different binaries: a Universal CRT (ucrtbase), which contains most of the standard functionality, and a VC Runtime Library (vcruntime). The vcruntime library contains the compiler-related functionality such as exception handling, and intrinsics. If you are using the default project settings, then this change doesn't impact you since the linker will use the new default libraries automatically. If you've set the project's Linker property Ignore All Default Libraries to Yes or you are using the
/NODEFAULTLIBlink ucrtbase.lib, ucrtbased.dll or ucrtbased.lib, and the VC runtime library, libvcruntime.lib, vcruntimeversion.dll, libvcruntimed.lib, and vcruntimedversion.dll. The version in both Visual Studio 2015 and Visual Studio 2017 is 140. See CRT Library Features.
<locale.h>
localeconv
The localeconv function declared in locale.h now works correctly when per-thread locale is enabled. In previous versions of the library, this function would return the
lconvdata for the global locale, not the thread's locale.
If you use per-thread locales, you should check your use of
localeconv. If your code assumes that the
lconvdata returned is for the global locale, you should correct it.
<math.h>
C++ overloads of math library functions
In previous versions, <math.h> defined some, but not all, of the C++ overloads for the math library functions. The rest of the overloads were in the <cmath> header. Code that only included <math.h> could have problems with function overload resolution. Now the C++ overloads have been removed from <math.h> and are only found in <cmath>.
To resolve errors, include <cmath> to get the declarations of the functions that were removed from <math.h>. These functions were moved:
double abs(double)and
float abs(float)
double pow(double, int),
float pow(float, float),
float pow(float, int),
long double pow(long double, long double),
long double pow(long double, int)
floatand
long doubleversions, and
trunc
If you have code that uses
abswith a floating point type that only includes the <math.h> header, the floating point versions will no longer be available. The call now resolves to
abs(int), even with a floating point argument, which produces the error:
warning C4244: 'argument' : conversion from 'float' to 'int', possible loss of data
The fix for this warning is to replace the call to
abswith a floating point version of
abs, such as
fabsfor a double argument or
fabsff <new.h> isn't a breaking change for native or mixed code (
/clr), however for code compiled as /clr:pure, this change might cause your code to fail to compile. If you compile code as
/clr:pure, you may need to add
#include <new>or
#include <new.h>to work around build errors due to this change. The
/clr:pureoption is deprecated in Visual Studio 2015 and unsupported in Visual Studio 2017. Code that needs to be "pure" should be ported to C#.
<process.h>
_beginthread and _beginthreadex
The _beginthread and _beginthreadex functions now hold a reference to the module in which the thread procedure is defined for the duration of the thread. This helps to ensure that modules aren't unloaded until a thread has run to completion.
<stdarg.h>
va_start and reference types
When compiling C++ code, va_start now validates at compile-time that the argument passed to it isn't of reference type. Reference-type arguments are prohibited by the C++ Standard.
<stdio.h> and <conio.h>
The printf and scanf family of functions are now defined inline.
The definitions of all of the
printfand
scanffunctions have been moved inline into <stdio.h>, <conio.h>, and other CRT headers. This breaking changeto the semi-colon-separated list.
If your project links with static libraries that were compiled with a release of Visual Studio earlier than 2015, the linker might report an unresolved external symbol. These errors might reference internal definitions for
_iob,
_iob_func, or related imports for certain <stdio.h> functions in the form of imp*. Microsoft recommends that you recompile all static libraries with the latest version of the C++ compiler and libraries when you upgrade a project. If the library is a third-party library for which source isn't available, you should either request an updated binary from the third party or encapsulate your usage of that library into a separate DLL that you compile with the older version of the compiler and libraries.
Warning
If you are linking with Windows SDK 8.1 or earlier, you might encounter these unresolved external symbol errors. In that case, you should resolve the error by adding legacy_stdio_definitions.lib to the linker input as described previously.
To troubleshoot unresolved symbol errors, you can try using dumpbin.exe to examine the symbols defined in a binary. Try the following command line to view symbols defined in a library.
dumpbin.exe /LINKERMEMBER somelibrary.lib
gets and _getws
The gets and _getws functions have been removed. The gets function was removed from the C Standard Library in C11 because it can't MSVC-specific sentinel strings.
Infinity: 1.#INF
Quiet NaN: 1.#QNAN
Signaling NaN: 1.#SNAN
Indefinite NaN: 1.#IND
Any of these formats may have been prefixed by a sign and may have been formatted slightly differently depending on field width and precision (sometimes with unusual effects, for example
printf("%.2f\n", INFINITY)would print 1.#J because the #INF would be "rounded" to a 2-digit precision). C99 introduced new requirements on how infinities and NaNs are to be formatted. The MSVC implementation now conforms to these requirements. The new strings are as follows:
Infinity: inf
Quiet NaN: nan
Signaling NaN: nan(snan)
Indefinite NaN: nan(ind)
Any of these may be prefixed by a sign. If a capitalized format specifier is used (%F instead of %f), then the strings are printed in capital letters (
INFinstead of
inf), as is required.
The scanf functions have been modified to parse these new strings, so these strings now round-trip through
printf. They could usually generate strings that would round-trip back to the original floating point value, but weren't great if you wanted output:
1208925819614629200000000
New output:
1208925819614629174706176
The old parsing algorithms would consider only up to 17 significant digits from the input string and would discard the rest of the digits. This approach is sufficient to generate a potentially a breaking behavior change because these functions might output different results. The new results are always more correct than the old results.
Hexadecimal and infinity/NaN floating point parsing
The floating point parsing algorithms will now parse hexadecimal floating point strings (such as the ones generated by the %a and %A printf format specifiers) and all infinity and NaN strings that are generated by the
printffunctions, as described above.
%A and %a zero padding
The %a and %A format flaw's now treated as the %F format specifier; if %N is encountered, it, whichand
scanffunctions would silently accept many invalid format strings, sometimes with unusual effects. For example, %hlhlhld would be treated as %d. All invalid format strings are now treated as invalid parameters.
fopen mode string validation
In previous versions, the
fopenfamily of functions silently accepted some invalid mode strings, such they're no longer needed in newer versions. If snprintf or vsnprintf is defined as a macro before including <stdio.h>, compilation now fails with an error that indicates where the macro was defined.
Normally, the fix to this problem is to delete any declarations of
snprintfor
vsnprintfin user code.
tmpnam Generates Usable File Names
In previous versions, the
tmpnamand
tmpnam_sfunctions generated file names in the root of the drive (such as \sd3c.). These functions now generate usable file name paths in a temporary directory.
FILE Encapsulation
In previous versions, the complete FILE type was defined publically in <stdio.h>, so it was possible for user code to reach into a FILE and modify its internals. The library has been changed to hide implementation details. As part of this change, FILE as defined in <stdio.h> is now an opaque type and its members are inaccessible from outside of the CRT itself.
_outp and _inp
The functions _outp, _outpw, _outpd, _inp, _inpw, and _inpd have been removed.
<stdlib.h>, <malloc.h>, and <sys/stat.h>
strtof and wcstof
The
strtofand
wcstoffunctions failed to set
errnoto ERANGE when the value wasn't representable as a float. This error was specific to these two functions; the
strtod,
wcstod,
strtold, and
wcstoldfunctions were unaffected. This issue has been fixed, and is a runtime breaking change.
Aligned allocation functions
In previous versions, the aligned allocation functions (
_aligned_malloc,
_aligned_offset_malloc, etc.) would silently accept requests for a block with an alignment of 0. The requested alignment must be a power of two, which isn't true of zero. A requested alignment of 0 is now treated as an invalid parameter. This issue has been fixed, and is a runtime breaking change.
Heap functions
The
_heapadd,
_heapset, and
_heapusedfunctions have been removed. These functions have been nonfunctional since the CRT was updated to use the Windows heap.
smallheap
The
smallheaplink option has been removed. See Link Options.
<string.h>
wcstok
The signature of the
wcstokfunction has been changed to match what is required by the C Standard. In previous versions of the library, the signature of this function was:
wchar_t* wcstok(wchar_t*, wchar_t const*)function has been added with the old signature to ease porting. When compiling C++ code, there is also an inline overload of
wcstokthat has the old signature. This overload is declared as deprecated. In C code, you may define_CRT_NON_CONFORMING_WCSTOK to cause
_wcstokto be used in place of
wcstok.
<time, as in
Fri Jun 6 08:00:00 2014. This issue has been fixed.
strftime and wcsftime
The
strftimeand
wcsftimefunctions, the same form as is produced by
asctime. In previous versions, the %c format specifier incorrectly formatted times using a
MM/DD/YY HH:MM:SSrepresentation. This issue has been fixed.
timespec and TIME_UTC
The <time.h> header now defines the
timespectype and the
timespec_getfunction from the C11 Standard. In addition, the TIME_UTC macro, for use with the
timespec_getfunction, is now defined. This update is a breaking change for code that has a conflicting definition for any of these identifiers.
CLOCKS_PER_SEC
The CLOCKS_PER_SEC macro now expands to an integer of type
clock_t, as required by the C language.
C++ Standard Library can't detect DLL mixing, and can't detect mixing that involves Visual Studio 2008 or earlier.
C++ Standard Library include files
Some changes have been made to the include structure in the C++ Standard Library headers. C++ Standard Library headers are allowed to include each other in unspecified ways. In general, you should write your code so that it carefully includes all of the headers that it needs according to the C++ standard, and doesn't rely on which C++ Standard Library headers include which other C++ Standard Library headers. This makes code portable across versions and platforms. At least two header changes in Visual Studio 2015 affect user code. First, <string> no longer includes <iterator>. Second, <tuple> now declares
std::arraywithout including all of <array>, which can break code through the following combination of code constructs: your code has a variable named "array", and you have a using-directive "using namespace std;", and you include a C++ Standard Library header (such as <functional>) that includes <tuple>, which now declares
std::array.
steady_clock
The <chrono> implementation of steady_clock has changed to meet the C++ Standard requirements for steadiness and monotonicity.
steady_clockis now based on QueryPerformanceCounter and
high_resolution_clockis now a typedef for
steady_clock. As a result, in Visual Studio
steady_clock::time_pointis now a typedef for
chrono::time_point<steady_clock>; however, this isn't necessarily the case for other implementations.
allocators and const
We now require allocator equality/inequality comparisons to accept const arguments on both sides. If your allocators define these operators like this,
bool operator==(const MyAlloc& other)
then you should update them and declare them as const members:
bool operator==(const MyAlloc& other) const
const elements
The C++ standard has always forbidden containers of const elements (such as vector<const T> or set<const T>). Visual Studio 2013 and earlier accepted such containers. In the current version, such containers fail to compile.
std::allocator::deallocate
In Visual Studio 2013 and earlier,
std::allocator::deallocate(p, n)ignored the argument passed in for n. The C++ standard has always required that n must be equal to the value passed as the first argument to the invocation of
allocatewhich:
bool operator()(const X& a, const X& b)
To resolve this error, change the function declaration to:
bool operator()(const X& a, const X& b) constand
launch::syncpolicies were removed. Instead, for
launch::any, use
launch:async | launch:deferred. For
launch::sync, use
launch::deferred. See launch Enumeration.
MFC and ATL running Visual Studio setup again. Choose the Custom install option, and then choose Microsoft Foundation Classes. You can run Visual Studio setup from the Control Panel control Programs and Features, or from the installation media.
The Visual C++ Redistributable Package still includes this library.
Concurrency Runtime
Yield macro from Windows.h conflicting with concurrency::Context::Yield
The Concurrency Runtime previously used
#undefto undefine the Yield macro to avoid conflicts between the Yield macro defined in Windows.h h and the
concurrency::Context::Yieldfunction. This
#undefhas been removed and a new non-conflicting equivalent API call concurrency::Context::YieldExecution has been added. To work around conflicts with Yield, you can either update your code to call the
YieldExecutionfunction instead, or surround the
Yieldfunction name with parentheses at call sites, as in the following example:
(concurrency::Context::Yield)();
Compiler Conformance Improvements in Visual Studio 2015
When upgrading code from previous versions, you might also encounter compiler errors that are due to conformance improvements made in Visual Studio 2015. These improvements do not break binary compatibility from earlier versions of Visual Studio, but they can produce compiler errors where none were emitted before. For more information, see Visual C++ What's New 2003 through 2015.
In Visual Studio 2015, ongoing improvements to compiler conformance can sometimes change how the compiler understands your existing source code. As a result, you might encounter new or different errors during your build, or even behavioral differences in code that previously built and seemed to run correctly.
Fortunately, these differences have little or no impact on most of your source code. When source code or other changes are needed to address these differences, the fixes tend to be Studio versions. A breaking change is more severe, and can affect binary compatibility, but these kinds of binary compatibility breaks only occur between major versions of Visual Studio, for example, between Visual Studio 2013 and Visual Studio 2015. For information on the breaking changes that occurred between Visual Studio 2013 and Visual Studio 2015, see Visual Studio 2015 Conformance Changes.
Conformance Improvements in Visual Studio 2015
Conformance Improvements in Update 1
Conformance Improvements in Update 2
Conformance Improvements in
Usually, this option was used in order to allow nonstandard code that uses loop variables after the point where, according to the standard, they should have gone out of scope. It was only necessary when you compiled with the
/Zaoption, since without
/Za, use of.
// C2065 expected int main() { // Uncomment the following line to resolve. // int i; for (int i = 0; i < 1; i++); i = 20; // i has already gone out of scope under /Za }
/Zgcompiler option
The
/Zgcompiler can't be applied to names declared const or static, and can't be applied to reference members.
For example, consider the following code:
struct S { mutable int &r; };
Previous versions of the compiler accepted this, but now the compiler gives the following error:
error C2071: 'S::r': illegal storage class
To fix the error, remove the redundant mutable keyword.; } Studio.
struct S1 { void f(int); void f(int, int); }; struct S2 { template <class C, void (C::*Function)(int) const> void f() {} }; void f() { S2 s2; s2.f<S1, &S1: construct was always ignored, but now it produces a compiler error.
error C3323: 'alignas' and '__declspec(align)' are not allowed on function declarations
To fix this problem, remove
__declspec(align)from the function declaration. Since it had no effect, removing it doesn't change anything.
Exception handling
There are a couple of changes to exception handling. First, exception objects have to be either copyable or movable. The following code compiled in Visual Studio 2013, but doesn't compile in Visual Studio 2015:
struct S { public: S(); private: S(const S &); }; int main() { throw S(); // error }
The problem is that the copy constructor is private, so the object can't doesn't compile in Visual Studio 2015:
struct B { public: B(); private: B(const B &); }; struct D : public B {}; int main() { try { } catch (D d) // error { } }
You can fix this issue by changing the parameter type for the catch to a reference.
catch (D& d) { }:
#define _x "there" char* func() { return "hello"_x; } int main() { char * p = func(); return 0; }
The compiler interpreted this code as a string literal "hello" followed by a macro, which is expanded into "there", and then the two string literals were concatenated into one. In Visual Studio 2015, the compiler interprets this sequence as a user-defined literal, but since there is no matching user-defined literal
_xdefined, it gives an error.
error C3688: invalid literal suffix '_x'; literal operator or literal operator template 'operator ""_x' not found note: Did you forget a space between the string literal and the prefix of the following string literal? Studio 2015, you must now add whitespace between the two strings. For example, the following code must be changed:
char * str = "abc""def";
To fix this issue, add a space in between the two strings:
char * str = "abc" "def";:
void * operator new(std::size_t, std::size_t); void operator delete(void*, std::size_t) noexcept;
The problem occurs because of the match in function signatures between a placement delete operator you've defined, and the new global sized delete operator. Consider whether you can use a different type other than
size_tfor any placement new and delete operators. The type of the
size_ttypedef is compiler-dependent; it's a typedef for unsigned int in MSVC. A good solution is to use an enumerated type such as this one:
enum class my_type : sizemember Studio 2013, but produces an error in Visual Studio 2015.
union U1 { const int i; }; union U2 { int & i; }; union U3 { struct { int & i; }; }; compiler-generated functions; }; } named member aren; }hasn't been defined yet.
In this case, the fix is to not use such type traits until the class has been defined. If you move the definitions of
Band
Dto".
main declared as extern "C" now requires a return type.
The following code now produces C4430.
extern "C" __cdecl main(){} // C4430
To fix the error, add the return type:
extern "C" int __cdecl main(){} // OK
typename isn't allowed in a member initializer
The following code now produces C2059:
template<typename T> struct S1 : public T::type { S1() : typename T::type() // C2059 { } }; struct S2 { typedef S2 type; }; S1<S2> s;
To fix the error, remove
typenamefrom the initializer:
S1() : T::type() // OK ...
The storage class on explicit specializations is ignored.
In the following code, the static storage class specifier is ignored
template <typename T> void myfunc(T h) { } template<> static void myfunc(double h) // static is ignored { }
A constant used in a static_assert inside a class template will always fail.
The following code causes the
static_assertto always fail:
template <size_t some_value> struct S1 { static_assert(false, "default not valid"); // always invoked }; //other partial specializations here
To work around this issue, wrap the value in a struct:
template <size_t some_value> struct constant_false { static const bool value = false; }; template <size_t some_value> struct S1 { static_assert(constant_false<some_value>::value, "default not valid"); }; //other partial specializations here
Rules enforced for forward declarations. (Applies only to C.)
The following code now produces C2065:
struct token_s; typedef int BOOL; typedef int INT; typedef int(*PFNTERM)(PTOKEN, BOOL, INT); // C2065: 'PTOKEN' : undeclared identifier
To fix this problem, add the proper forward declarations:
struct token_s; typedef int BOOL; typedef int INT; // forward declarations: typedef struct token_s TOKEN; typedef TOKEN *PTOKEN; typedef int(*PFNTERM)(PTOKEN, BOOL, INT);
More consistent enforcement of function pointer types
The following code now produces C2197:
typedef int(*F1)(int); typedef int(*F2)(int, int); void func(F1 f, int v1, int v2) { f(v1, v2); // C2197 }
Ambiguous calls to overloaded functions
The following code now produces C266: 'N::bind': ambiguous call to overloaded function
template<typename R, typename T, typename T1, typename A1> void bind(R(T::*)(T1), A1&&); namespace N { template <typename T, typename R, typename ... Tx> void bind(R(T::*)(Tx...), T* ptr); } using namespace N; class Manager { public: void func(bool initializing); void mf() { bind(&Manager::func, this); //C2668 } };
To fix the error, you can fully qualify the call to
bind: N::bind(...). However, if this change is manifest through an undeclared identifier (C2065), then it may be appropriate to fix this with a using declaration instead.
This pattern happens frequently with ComPtr and other types in the
Microsoft::WRLnamespace.
Fix incorrect address of
The following code now produces C2440: '=': cannot convert from 'type *' to 'type'. To fix the error, change &(type) to (type) and (&f()) to (f()).
// C typedef void (*type)(void); void f(int i, type p); void g(int); void h(void) { f(0, &(type)g); } // C++ typedef void(*type)(void); type f(); void g(type); void h() { g(&f()); }
String literal is a constant array
The following code now produces C2664: 'void f(void )': cannot convert argument 1 from 'const char ()[2]' to 'void *'
void f(void *); void h(void) { f(&__FUNCTION__); void *p = &""; }
To fix the error, change the function parameter type to
const void*, or else change the body of
hto look like this example:
void h(void) { char name[] = __FUNCTION__; f( name); void *p = &""; }
C++11 UDL strings
The following code now produces error C3688: invalid literal suffix 'L'; literal operator or literal operator template 'operator ""L' not found
#define MACRO #define STRCAT(x, y) x\#\#y int main(){ auto *val1 = L"string"MACRO; auto *val2 = L"hello "L"world"; std::cout << STRCAT(L"hi ", L"there"); }
To fix the error, change the code to add a space:
#define MACRO // Remove ##. Strings are automatically // concatenated so they aren't needed #define STRCAT(x, y) x y int main(){ //Add space after closing quote auto *val1 = L"string" MACRO; auto *val2 = L"hello " L"world"; std::cout << STRCAT(L"hi ", L"there"); }
In the example above,
MACROis no longer parsed as two tokens (a string followed by a macro). Now it's parsed as a single token UDL. The same applies to L""L"", which was parsed previously as L"" and L"", and is now parsed as L""L and "".
String concatenation rules were also brought into compliance with the standard, which means L"a" "b" is equivalent to L"ab". Previous editions of Visual Studio did not accept concatenation of strings with different character width.
C++11 empty character removed
The following code now produces error C2137: empty character constant
bool check(wchar_t c){ return c == L''; //implicit null character }
To fix the error, change the code to make the null explicit:
bool check(wchar_t c){ return c == L'\0'; }
MFC exceptions can't be caught by value because they aren't copyable
The following code in an MFC application now causes error C2316: 'D': cannot be caught as the destructor and/or copy constructor are inaccessible or deleted
struct B { public: B(); private: B(const B &); }; struct D : public B { }; int main() { try { } catch (D) // C2316 { } }
To fix the code, you can change the catch block to
catch (const D &)but the better solution is usually to use the MFC TRY/CATCH macros.
alignof is now a keyword
The following code now produces error C2332: 'class': missing tag name. To fix the code you must rename the class or, if the class is performing the same work as alignof, just replace the class with the new keyword.
class alignof{}
constexpr is now a keyword
The following code now produces error C2059: syntax error: ')'. To fix the code, you must rename any function or variable names that are called "constexpr".
int constexpr() {return 1;}
Movable types can't be const
When a function returns a type that's intended to be moved, its return type should not be const.
Deleted copy constructors
The following code now produces C2280 'S::S(S &&)': attempting to reference a deleted function:
struct S{ S(int, int); S(const S&) = delete; S(S&&) = delete; }; S s2 = S(2, 3); //C2280
To fix the error, use direct initialization for
S2:
struct S{ S(int, int); S(const S&) = delete; S(S&&) = delete; }; S s2 = {2,3}; //OK
Conversion to function pointer only generated when no lambda capture
The following code produces C2664 in Visual Studio 2015.
void func(int(*)(int)) {} int main() { func([=](int val) { return val; }); }
To fix the error, remove the
=from the capture list.
Ambiguous calls involving conversion operators
The following code now produces error C2440: 'type cast': cannot convert from 'S2' to 'S1':
struct S1 { S1(int); }; struct S2 { operator S1(); operator int(); }; void f(S2 s2) { (S1)s2; }
To fix the error, explicitly call the conversion operator:
void f(S2 s2) { //Explicitly call the conversion operator s2.operator S1(); // Or S1((int)s2); }
The following code now produces error C2593: 'operator =' is ambiguous:
struct S1 {}; struct S2 { operator S1&(); operator S1() const; }; void f(S1 *p, S2 s) { *p = s; }
To fix the error, explicitly call the conversion operator:
void f(S1 *p, S2 s) { *p = s.operator S1&(); }
Fix invalid copy initialization in non-static data member initialization (NSDMI)
The following code now produces error C2664: 'S1::S1(S1 &&)': cannot convert argument 1 from 'bool' to 'const S1 &':
struct S1 { explicit S1(bool); }; struct S2 { S1 s2 = true; // error };
To fix the error, use direct initialization:
struct S2 { S1 s1{true}; // OK };
Accessing constructors inside decltype statements
The following code now produces C2248: 'S::S': cannot access private member declared in class 'S':
class S { S(); public: int i; }; class S2 { auto f() -> decltype(S().i); };
To fix the error, add a friend declaration for
S2in
S:
class S { S(); friend class S2; // Make S2 a friend public: int i; };
Default ctor of lambda is implicitly deleted
The following code now produces error C3497: you cannot construct an instance of a lambda:
void func(){ auto lambda = [](){}; decltype(lambda) other; }
To fix the error, remove the need for the default constructor to be called. If the lambda doesn't capture anything, then it can be cast to a function pointer.
Lambdas with a deleted assignment operator
The following code now produces error C2280:
#include <memory> #include <type_traits> template <typename T, typename D> std::unique_ptr<T, typename std::remove_reference<D &&>::type> wrap_unique(T *p, D &&d); void f(int i) { auto encodedMsg = wrap_unique<unsigned char>(nullptr, [i](unsigned char *p) { }); encodedMsg = std::move(encodedMsg); }
To fix the error, replace the lambda with a functor class or remove the need to use the assignment operator.
Attempting to move an object with deleted copy constructor
The following code now produces error C2280: 'moveable::moveable(const moveable &)': attempting to reference a deleted function
struct moveable { moveable() = default; moveable(moveable&&) = default; moveable(const moveable&) = delete; }; struct S { S(moveable && m) : m_m(m)//copy constructor deleted {} moveable m_m; };
To fix the error, use
std::moveinstead:
S(moveable && m) : m_m(std::move(m))
Local class can't reference other local class defined later in the same function
The following code now produces error C2079: 's' uses undefined struct 'main::S2'
int main() { struct S2; struct S1 { void f() { S2 s; } }; struct S2 {}; }
To fix the error, move up the definition of
S2:
int main() { struct S2 { //moved up }; struct S1 { void f() { S2 s; } }; }
Cannot call a protected base ctor in the body of derived ctor.
The following code now produces error C2248: 'S1::S1': cannot access protected member declared in class 'S1'
struct S1 { protected: S1(); }; struct S2 : public S1 { S2() { S1(); } };
To fix the error, in
S2remove the call to
S1()from the constructor and if necessary put it in another function.
{} prevents conversion to pointer
The following code now produces C2439 'S::p': member could not be initialized
struct S { S() : p({ 0 }) {} void *p; };
To fix the error, remove the braces from around the
0or else use nullptr instead, as shown in this example:
struct S { S() : p(nullptr) {} void *p; };
Incorrect macro definition and usage with parentheses
The following example now produces error C2008: ';': unexpected in macro definition
#define A; //cause of error struct S { A(); // error };
To fix the problem, change the top line to
#define A();
The following code produces error C2059: syntax error: ')'
//notice the space after 'A' #define A () ; struct S { A(); };
To fix the code, remove the space between A and ().
The following code produces error C2091: function returns function:
#define DECLARE void f() struct S { DECLARE(); };
To fix the error, remove the parentheses after DECLARE in S:
DECLARE;.
The following code produces error C2062: type 'int' unexpected
#define A (int) struct S { A a; };
To fix the problem, define
Alike this:
#define A int
Extra parens in declarations
The following code produces error C2062: type 'int' unexpected
struct S { int i; (int)j; };
To fix the error, remove the parentheses around
j. If the parentheses are needed for clarity, then use a typedef.
Compiler-generated constructors and __declspec(novtable)
In Visual Studio 2015, there is an increased likelihood that compiler-generated inline constructors of abstract classes with virtual base classes may expose improper usage of
__declspec(novtable)when used in combination with
__declspec(dllimport).
auto requires single expression in direct-list-initialization
The following code now produces error C3518: 'testPositions': in a direct-list-initialization context the type for 'auto' can only be deduced from a single initializer expression
auto testPositions{ std::tuple<int, int>{13, 33}, std::tuple<int, int>{-23, -48}, std::tuple<int, int>{38, -12}, std::tuple<int, int>{-21, 17} };
To fix the error, one possibility is to initialize
testPositionsas follows:
std::tuple<int, int> testPositions[]{ std::tuple<int, int>{13, 33}, std::tuple<int, int>{-23, -48}, std::tuple<int, int>{38, -12}, std::tuple<int, int>{-21, 17} };
Checking types vs. pointers to types for is_convertible
The following code now causes the static assertion to fail.
struct B1 { private: B1(const B1 &); }; struct B2 : public B1 {}; struct D : public B2 {}; static_assert(std::is_convertible<D, B2>::value, "fail");
To fix the error, change the
static_assertso that it compares pointers to
Dand
B2:
static_assert(std::is_convertible<D*, B2*>::value, "fail");
__declspec(novtable) declarations must be consistent
__declspecdeclarations must be consistent across all libraries. The following code will now produce a one-definition rule (ODR) violation:
//a.cpp class __declspec(dllexport) A { public: A(); A(const A&); virtual ~A(); private: int i; }; A::A() {} A::~A() {} A::A(const A&) {} //b.cpp // compile with cl.exe /nologo /LD /EHsc /Osx b.cpp #pragma comment(lib, "A") class __declspec(dllimport) A { public: A(); A(const A&); virtual ~A(); private: int i; }; struct __declspec(novtable) __declspec(dllexport) B : virtual public A { virtual void f() = 0; }; //c.cpp #pragma comment(lib, "A") #pragma comment(lib, "B") class __declspec(dllimport) A { public: A(); A(const A&); virtual ~A(); private: int i; }; struct /* __declspec(novtable) */ __declspec(dllimport) B // Error. B needs to be novtable here also. : virtual public A { virtual void f() = 0; }; struct C : virtual B { virtual void f(); }; void C::f() {} C c;
Conformance Improvements in Update 1
Private virtual base classes and indirect inheritance
Previous versions of the compiler allowed a derived class to call member functions of its indirectly derived
private virtualbase classes. This old behavior was incorrect and doesn or delete operator doesn in an elaborated type specifier, but's a better match than the historic candidate, the call resolves unambiguously to the new candidate, causing a change in program behavior that some warnings related to switch statements; doesn't give a specific diagnostic, we also recommend that the parent-directory specifier ".." shouldn't state }
Conformance Improvements in Update 2
Additional warnings and errors might be issued as a result of partial support for expression SFINAE
Previous versions of the compiler did not parse certain kinds of expressions inside decltype specifiers due to lack of support for expression SFINAE. This old behavior was incorrect and doesn expression that includes a type that hasn't been declared yet, expression that's missing a necessary use of the typename keyword isn member variables to have default copy/move constructors and default copy/move assignment operators automatically generated. This old behavior was incorrect and doesn anonymous Studio 2015 allowed static member functions to have cv-qualifiers. This behavior is due to a regression in Visual Studio 2015 and Visual Studio 2015 Update 1; Visual Studio 2013 and previous versions of the compiler reject code written in this way. The behavior of Visual Studio 2015 and Visual Studio 2015 Update 1 is incorrect and doesn (only affects
/ZW)
Code compiled for the Windows Runtime (WinRT) doesn't allow enum types to be forward declared, similarly to when managed C++ code is compiled for the .Net Framework using the
/clrcompiler switch. This behavior when the compiler, doesn Studio:44
Visual Studio 2013 Conformance Changes
Compiler
The final keyword now generates an unresolved symbol error where it would have compiled previously:
struct S1 { virtual void f() = 0; }; struct S2 final : public S1 { virtual void f(); }; int main(S2 *p) { p->f(); }
In earlier versions, an error wasn't issued because the call was a virtual call; nevertheless, the program would crash at runtime. Now, a linker error is issued because the class is known to be final. In this example, to fix the error, you would link against the obj that contains the definition of
S2::f.
When you use friend functions in namespaces, you must redeclare the friend function before you refer to it or you will get an error because the compiler now conforms to the ISO C++ Standard. For example, this example no longer compiles:
namespace NS { class C { void func(int); friend void func(C* const) {} }; void C::func(int) { NS::func(this); // error } }
To correct this code, declare the friend function:
namespace NS { class C { void func(int); friend void func(C* const) {} }; void func(C* const); // conforming fix void C::func(int) { NS::func(this); }
The C++ Standard doesn't allow explicit specialization in a class. Although the Microsoft C++ compiler allows it in some cases, in cases such as the following example, an error is now generated because the compiler doesn't consider the second function to be a specialization of the first one.
template < int N> class S { public: template void f(T& val); template < > void f(char val); }; template class S< 1>;
To correct this code, modify the second function:
template <> void f(char& val);
The compiler no longer tries to disambiguate the two functions in the following example, and now emits an error:
template< typename T> void Func(T* t = nullptr); template< typename T> void Func(...); int main() { Func< int>(); // error }
To correct this code, clarify the call:
template< typename T> void Func(T* t = nullptr); template< typename T> void Func(...); int main() { Func< int>(nullptr); // ok }
Before the compiler was made compliant with ISO C++11, the following code would have compiled and caused
xto resolve to type int:
auto x = {0}; int y = x;
This code now resolves
xto a type of
std::initializer_list<int>and causes an error on the next line that tries to assign
xto type int. (There is no conversion by default.) To correct this code, use int to replace auto:
int x = {0}; int y = x;
Aggregate initialization is no longer allowed when the type of the right-hand value doesn't match the type of the left-hand value that's being initialized, and an error is issued because the ISO C++11 Standard requires uniform initialization to work without narrowing conversions. Previously, if a narrowing conversion was available, a Compiler Warning (level 4) C4242 warning would have been issued instead of an error.
int i = 0; char c = {i}; // error
To correct this code, add an explicit narrowing conversion:
int i = 0; char c = {static_cast<char>(i)};
The following initialization is no longer allowed:
void *p = {{0}};
To correct this code, use either of these forms:
void *p = 0; // or void *p = {0};
Name lookup has changed. The following code is resolved differently in the C++ compiler in Visual Studio 2012 and Visual Studio 2013:
enum class E1 { a }; enum class E2 { b }; int main() { typedef E2 E1; E1::b; }
In Visual Studio 2012, the
E1in expression
E1::bresolved to
::E1in the global scope. In Visual Studio 2013,
E1in expression
E1::bresolves to the
typedef E2definition in
main()and has type
::E2.
Object layout has changed. On x64, the object layout of a class may change from previous releases. If it has a virtual function but it doesn’t have a base class that has a virtual function, the object model of the compiler inserts a pointer to a virtual function table after the data member layout. This means the layout may not be optimal in all cases. In previous releases, an optimization for x64 would try to improve the layout for you, but because it failed to work correctly in complex code situations, it was removed in Visual Studio 2013. For example, consider this code:
__declspec(align(16)) struct S1 { }; struct S2 { virtual ~S2(); void *p; S1 s; };
In Visual Studio 2013, the result of
sizeof(S2)on x64 is 48, but in previous releases, it evaluates to 32. To make this evaluate to 32 in the Visual Studio 2013 C++ compiler for x64, add a dummy base class that has a virtual function:
__declspec(align(16)) struct S1 { }; struct dummy { virtual ~dummy() {} }; struct S2 : public dummy { virtual ~S2(); void *p; S1 s; };
To find places in your code that an earlier release would have tried to optimize, use a compiler from that release together with the
/W3compiler option and turn on Warning 4370. For example:
#pragma warning(default:4370) __declspec(align(16)) struct S1 { }; struct S2 { virtual ~S2(); void *p; S1 s; };
Before Visual Studio 2013, this code outputs this message: "warning C4370: 'S2' : layout of class has changed from a previous version of the compiler due to better packing".
The x86 compiler has the same suboptimal layout issue in all versions of the compiler. For example, if this code is compiled for x86:
struct S { virtual ~S(); int i; double d; };
The result of
sizeof(S)is 24. However, it can be reduced to 16 if you use the workaround mentioned for x64:
struct dummy { virtual ~dummy() {} }; struct S : public dummy { virtual ~S(); int i; double d; };
Standard Library
The C++ compiler in Visual Studio 2013 detects mismatches in _ITERATOR_DEBUG_LEVEL, which was implemented in Visual Studio 2010, and RuntimeLibrary mismatches. These mismatches occur when compiler options
/MT (static release),
/MTd (static debug),
/MD (dynamic release), and
/MDd (dynamic debug) are mixed.
If your code acknowledges the previous release's simulated alias templates, you have to change it. For example, instead of
allocator_traits<A>::rebind_alloc<U>::other, now you have to say
allocator_traits<A>::rebind_alloc<U>. Although
ratio_add<R1, R2>::typeis no longer necessary and we now recommend that you say
ratio_add<R1, R2>, the former will still compile because
ratio<N, D>is required to have a "type" typedef for a reduced ratio, which will be the same type if it's already reduced.
You must use
#include <algorithm>when you call
std::min()or
std::max().
If your existing code uses the previous release’s simulated scoped enums—traditional unscoped enums wrapped in namespaces—you have to change it. For example, if you referred to the type
std::future_status::future_status, now you have to say
std::future_status. However, most code is unaffected—for example,
std::future_status::readystill compiles.
explicit operator bool()is stricter than operator unspecified-bool-type().
explicit operator bool()permits explicit conversions to bool—for example, given
shared_ptr<X> sp, both
static_cast<bool>(sp)and
bool b(sp)are valid—and Boolean-testable "contextual conversions" to bool—for example,
if (sp),
!sp,
sp &&whatever. However,
explicit operator bool()forbids implicit conversions to bool, so you can't say
bool b = sp;and given a bool return type, you can't say
return sp.
Now that real variadic templates are implemented, _VARIADIC_MAX and related macros have no effect. If you're still defining _VARIADIC_MAX, it's ignored. If you acknowledged our macro machinery intended to support simulated variadic templates in any other way, you have to change your code.
In addition to ordinary keywords, C++ Standard Library headers now forbid the macro replacement of the context-sensitive keywords override and final.
reference_wrapper,
ref(), and
cref()now forbid binding to temporary objects.
<random> now strictly enforces its compile-time preconditions.
Various C++ Standard Library type traits have the precondition "T shall be a complete type". Although the compiler now enforces this precondition more strictly, it may not enforce it in all situations. (Because C++ Standard Library precondition violations trigger undefined behavior, the Standard doesn't guarantee enforcement.)
The C++ Standard Library doesn't support
/clr:oldSyntax.
The C++11 specification for common_type<> had unexpected and undesired consequences; in particular, it makes common_type<int, int>::type return int&&. Therefore, the compiler implements the Proposed Resolution for Library Working Group issue 2141, which makes common_type<int,::type return int.
As a side-effect of this change, the identity case no longer works (common_type<T> doesn't always result in type T). This behavior complies with the Proposed Resolution, but it breaks any code that relied on the previous behavior.
If you require an identity type trait, don't use the non-standard
std::identitythat's defined in <type_traits> because it won't work for <void>. Instead, implement your own identity type trait to suit your needs. Here's an example:
template < typename T> struct Identity { typedef T type; };
MFC and ATL
Visual Studio 2013 only: MFC MBCS Library isn't included in Visual Studio because Unicode is so popular and use of MBCS has declined significantly. This change also keeps MFC more closely aligned with the Windows SDK itself, because many of the new controls and messages are Unicode-only. However, if you must continue to use the MFC MBCS library, you can download it from the MSDN Download Center at Multibyte MFC Library for Visual Studio 2013. The Visual C++ Redistributable Package still includes this library. (Note: The MBCS DLL is included in the C++ setup components in Visual Studio 2015 and later).
Accessibility for the MFC ribbon is changed. Instead of a one-level architecture, there is now a hierarchical architecture. You can still use the old behavior by calling
CRibbonBar::EnableSingleLevelAccessibilityMode().
CDatabase::GetConnectmethod is removed. To improve security, the connection string is now stored encrypted and is decrypted only as needed; it can't be returned as plain text. The string can be obtained by using the
CDatabase::Dumpmethod.
Signature of
CWnd::OnPowerBroadcastis changed. The signature of this message handler is changed to take an LPARAM as the second parameter.
Signatures are changed to accommodate message handlers. The parameter lists of the following functions have been changed to use newly added ON_WM_* message handlers:
CWnd::OnDisplayChangechanged to (UINT, int, int) instead of (WPARAM, LPARAM) so that the new ON_WM_DISPLAYCHANGE macro can be used in the message map.
CFrameWnd::OnDDEInitiatechanged to (CWnd*, UINT, UNIT) instead of (WPARAM, LPARAM) so that the new ON_WM_DDE_INITIATE macro can be used in the message map.
CFrameWnd::OnDDEExecutechanged to (CWnd*, HANDLE) instead of (WPARAM, LPARAM) so that the new ON_WM_DDE_EXECUTE macro can be used in the message map.
CFrameWnd::OnDDETerminatechanged to (CWnd*) as the parameter instead of (WPARAM, LPARAM) so that the new ON_WM_DDE_TERMINATE macro can be used in the message map.
CMFCMaskedEdit::OnCutchanged to no parameters instead of (WPARAM, LPARAM) so that the new ON_WM_CUT macro can be used in the message map.
CMFCMaskedEdit::OnClearchanged to no parameters instead of (WPARAM, LPARAM) so that the new ON_WM_CLEAR macro can be used in the message map.
CMFCMaskedEdit::OnPastechanged to no parameters instead of (WPARAM, LPARAM) so that the new ON_WM_PASTE macro can be used in the message map.
#ifdefdirectives in the MFC header files are removed. Numerous
#ifdefdirectives in the MFC header files related to unsupported versions of Windows (WINVER < 0x0501) are removed.
ATL DLL (atl120.dll) is removed. ATL is now provided as headers and a static library (atls.lib).
Atlsd.lib, atlsn.lib, and atlsnd.lib are removed. Atls.lib no longer has character-set dependencies or code that's specific for debug/release. Because it works the same for Unicode/ANSI and debug/release, only one version of the library is required.
ATL/MFC Trace tool is removed together with the ATL DLL, and the tracing mechanism is simplified. The
CTraceCategoryconstructor now takes one parameter (the category name), and the TRACE macros call the CRT debug reporting functions.
Visual Studio 2012 Breaking Changes
Compiler
The
/Ylcompiler option has changed. By default, the compiler uses this option, which can lead to LNK2011 errors under certain conditions. For more information, see /Yl (Inject PCH Reference for Debug Library).
In code that's compiled by using
/clr, the enum class keyword defines a C++11 enum, not a common language runtime (CLR) enum. To define a CLR enum, you must be explicit about its accessibility.
Use the template keyword to explicitly disambiguate a dependent name (C++ Language Standard compliance). In the following example, the highlighted template keyword is mandatory to resolve the ambiguity. For more information, see Name Resolution for Dependent Types.
template < typename struct Container { typedef typename AY::template Rebind< X> ::Other AX; };
Constant expression of type float is no longer allowed as a template argument, as shown in the following example.
template<float n=3.14> struct B {}; // error C2993: 'float': illegal type for non-type template parameter 'n'
Code that's compiled by using the
/GScommand-line option and that has an off-by-one vulnerability may lead to process termination at runtime, as shown in the following pseudocode example.
char buf[MAX]; int cch; ManipulateString(buf, &cch); // ... buf[cch] = '\0'; // if cch >= MAX, process will terminate
The default architecture for x86 builds is changed to SSE2; therefore, the compiler may emit SSE instructions, and will use the XMM registers to perform floating-point calculations. If you want to revert to previous behavior, then use the
/arch:IA32compiler flag to specify the architecture as IA32.
The compiler may issue warnings Compiler Warning (level 4) C4703 and C4701 where previously it did not. The compiler applies stronger checks for use of uninitialized local variables of pointer type.
When the new linker flag
/HIGHENTROPYVAis specified, Windows 8 typically causes memory allocations to return a 64-bit address. (Prior to Windows 8, such allocations more often returned addresses that were less than 2 GB.) This change may expose pointer truncation bugs in existing code. By default, this switch is on. To disable this behavior, specify
/HIGHENTROPYVA:NO.
The managed compiler (Visual Basic/C#) also supports
/HIGHENTROPYVAfor managed builds. However, in this case, the
/HIGHENTROPYVAswitchis off by default.
IDE
-.
Parallel Patterns Library and Concurrency Runtime Library
The
SchedulerType enumeration of
UmsThreadDefault is deprecated. Specification of
UmsThreadDefault produces a deprecated warning, and internally maps back to the
ThreadScheduler.
Standard Library
Following a breaking change between the C++98/03 and C++11 standards, using explicit template arguments to call
make_pair()— as in
make_pair<int, int>(x, y)— typically doesn't compile in Visual C++ in Visual Studio 2012. The solution is to always call
make_pair()without explicit template arguments — as in
make_pair(x, y). Providing explicit template arguments defeats the purpose of the function. If you require precise control over the resulting type, use
pairinstead of
make_pair— as in
pair<short, short>(int1, int2).
Another breaking change between the C++98/03 and C++11 standards: When A is implicitly convertible to B and B is implicitly convertible to C, but A isn't implicitly convertible to C, C++98/03 and Visual Studio 2010 permitted
pair<A, X>to be converted (implicitly or explicitly) to
pair<C, X>. (The other type, X, isn't of interest here, and isn't specific to the first type in the pair.) The C++ compiler in Visual Studio 2012 detects that A isn't implicitly convertible to C, and removes the pair conversion from overload resolution. This change is a positive Studio 2010 simulated variadic templates—for example,
make_shared<T>(arg1, arg2, argN)—up to a limit of 10 arguments, by stamping out overloads and specializations with preprocessor machinery. In Visual Studio 2012, this limit is reduced to five arguments to improve compile times and compiler memory consumption for the majority of users. However, you can set the previous limit by explicitly defining _VARIADIC_MAX as 10, project-wide.
C++11 17.6.4.3.1 [macro.names]/2 forbids macro replacement of keywords when C++ Standard Library headers are included. The headers now emit compiler errors if they detect macro-replaced keywords. (Defining _ALLOW_KEYWORD_MACROS allows such code to compile, but we strongly discourage that usage.) As an exception, the macro form of
new, Studio 2010 with ones that were compiled by using The C++ compiler in Visual Studio 2012 emits linker errors about _MSC_VER mismatch, where _MSC_VER is the macro that contains the compiler's major version (1700 for Visual C++ in Visual Studio 2012). This check can't detect DLL mixing, and can't detect mixing that involves Visual Studio 2008 or earlier.
In addition to detecting _ITERATOR_DEBUG_LEVEL mismatches, which was implemented in Visual Studio 2010, The C++ compiler in Visual Studio 2012 detects Runtime Library mismatches. These mismatches occur when the compiler options
/MT(static release),
/MTd(static debug),
/MD(dynamic release), and
/MDd(dynamic debug) are mixed.
operator<(),
operator>(),
operator<=(), and
operator>=()were previously available for the
std::unordered_mapand
stdext::hash_mapfamilies of containers, although their implementations were not useful. These non-standard operators have been removed in Visual C++ in Visual Studio 2012. Additionally, the implementation of
operator==()and
operator!=()for the
std::unordered_mapfamily has been extended to cover the
stdext::hash_mapfamily. (We recommend that you avoid the use of the
stdext::hash_mapfamily in new code.)
C++11 22.4.1.4 [locale.codecvt] specifies that
codecvt::length()and
codecvt::do_length()should take modifiable
stateT¶meters, but Visual Studio 2010 took
const stateT&. The C++ compiler in Visual Studio 2012 takes
stateT&as mandated by the standard. This difference is significant for anyone who is attempting to override the virtual function
do_length().
CRT
The C Runtime (CRT) heap, which is used for new and malloc(), is no longer private. The CRT now uses the process heap. This means that the heap isnstruct has changed to accommodate the changes to locale functions.
CRT functions that have corresponding intrinsics such as
memxxx(),
strxxx()are removed from intrin.h. If you included intrin.h only for these functions, you must now include the corresponding CRT headers.
MFC and ATLierto
CDockablePane::RemoveFromDefaultPaneDivider.
Changed the signature of
CFileDialog::SetDefExtto use LPCTSTR; therefore, Unicode builds are affected.
Removed obsolete ATL tracing categories.
Changed the signature of
CBasePane::MoveWindowto take a
const CRect.
Changed the signature of
CMFCEditBrowseCtrl::EnableBrowseButton.
Removed
m_fntTabsand
m_fntTabsBoldfrom
CMFCBaseTabCtrl.
Added a parameter to the
CMFCRibbonStatusBarPaneconstructors. (It is a default parameter, and so it's not source-breaking.)
Added a parameter to the
CMFCRibbonCommandsListBoxconstructor. (It is a default parameter, and so it's not source-breaking.)
Removed the
AFXTrackMouseAPI (and related timer proc). Use the Win32
TrackMouseEventAPI instead.
Added a parameter to the
CFolderPickerDialogconstructor. (It is a default parameter, and so it's not source-breaking.)
CFileStatusstructure size changed: The
m_attributemember changed from BYTE to DWORD (to match the value that's returned from
GetFileAttributes).
CRichEditCtrland
CRichEditViewuse MSFTEDIT_CLASS (RichEdit 4.1 control) instead of RICHEDIT_CLASS (RichEdit 3.0 control) in Unicode builds.
Removed
AFX_GLOBAL_DATA::IsWindowsThemingDrawParentBackgroundbecause it's always TRUE on Windows Vista, Windows 7, and Windows 8.
Removed
AFX_GLOBAL_DATA::IsWindowsLayerSupportAvailablebecause itto
IsDwmCompositionEnabledto eliminate name collision.
Changed identifiers for a number of MFC internal timers and moved the definitions to afxres.h (AFX_TIMER_ID_*).
Changed the signature of
OnExitSizeMovemethod to agree with the ON_WM_EXITSIZEMOVE macro:
CFrameWndEx
CMDIFrameWndEx
CPaneFrameWnd
Changed the name and signature of
OnDWMCompositionChangedto agree with the ON_WM_DWMCOMPOSITIONCHANGED macro:
CFrameWndEx
CMDIFrameWndEx
CPaneFrameWnd
Changed the signature of
OnMouseLeavemethodReplaceThisText
CMFCToolBarComboBoxEdit
CMFCToolBarEditCtrl
CMFCAutoHideBar
Changed the signature of
OnPowerBroadcastto agree with the ON_WM_POWERBROADCAST macro:
CFrameWndEx
CMDIFrameWndEx
Changed the signature of
OnStyleChangedto agree with the ON_WM_STYLECHANGED macro:
CMFCListCtrl
CMFCStatusBar
Renamed the internal method
FontFamalyProcFontsto
FontFamilyProcFonts.
Removed numerous global static
CStringobjectsand removed the accelerator delimiter parameter.
Added
CPropertyPage::GetParentSheet, and in the
CPropertyPageclass, call it instead of
GetParentto get the correct parent sheet window, which may be the parent or a grandparent window to
CPropertyPage. You might have to change your code to call
GetParentSheetinsteadatainitialization to on-demand instead of at CRT initialization time, to satisfy
DLLMainrequirements.
Added the
RemoveButtonByIndexmethod to the
CMFCOutlookBarPaneclass.
Corrected
CMFCCmdUsageCount::IsFreqeuntlyUsedCmdto
IsFrequentlyUsedCmd.
Corrected several instances of
RestoreOriginalstatetoPanestatic member variables
m_bCaptionTextand
m_bHideDisabledButtons.
Added an override
DeleteStringmethod to
CMFCFontComboBox.
Removed unused methods from
CPane:
GetMinLengthand
IsLastPaneOnLastRow.
Renamed
CPane::GetDockSiteRow(CDockingPanesRow *)to
CPane::SetDockSiteRow.
Visual Studio 2010 Breaking Changes
Compiler
The auto keyword has a new default meaning. Because use of the old meaning is rare, most applications will not be affected by this change.
The new static_assert keyword is introduced, which will cause a name conflict if there is already an identifier by that name in your code.
Support for the new lambda notation excludes support for coding an unquoted GUID in an IDL uuid attribute.
The .NET Framework 4 introduces the concept of corrupted state exceptions, which are exceptions that leave a process in an unrecoverable corrupted state. By default, you can't catch a corrupted state exception, even with the /EHa compiler option that catches all other exceptions. To explicitly catch a corrupted state exception, use __try-__except statements. Or, apply the [HandledProcessCorruptedStateExceptions]attribute to enable a function to catch corrupted state exceptions. This change affects primarily system programmers who might have to catch a corrupted state exception. The eight exceptions are STATUS_ACCESS_VIOLATION, STATUS_STACK_OVERFLOW, EXCEPTION_ILLEGAL_INSTRUCTION, EXCEPTION_IN_PAGE_ERROR, EXCEPTION_INVALID_DISPOSITION, EXCEPTION_NONCONTINUABLE_EXCEPTION, EXCEPTION_PRIV_INSTRUCTION, STATUS_UNWIND_CONSOLIDATE. For more information about these exceptions, see the GetExceptionCode macro.
The revised
/GScompiler option guards against buffer overruns more comprehensively than in earlier versions. This version might insert additional security checks in the stack that might decrease performance. Use the new
__declspec(safebuffers)keyword to instruct the compiler to not insert security checks for a particular function.
If you compile with both the
/GL(Whole Program Optimization) and
/clr(Common Language Runtime Compilation) compiler options, the
/GLoption is ignored. This change was made because the combination of compiler options provided little benefit. As a result of this change, the performance of the build is improved.
By default, support for trigraphs is disabled in Visual Studio 2010. Use the
/Zc:trigraphscompiler option to enable trigraphs support. A trigraph consists of two consecutive question marks ("??") followed by a unique third character. The compiler replaces a trigraph with a corresponding punctuation character. For example, the compiler replaces the
??=trigraph with the '#' character. Use trigraphs in C source files that use a character set that doesn't contain convenient graphic representations for some punctuation characters.
The linker no longer supports optimizing for Windows 98. The
/OPT(Optimizations) option produces a compile time error if you specify
/OPT:WIN98or
/OPT:NOWIN98.
The default compiler options that are specified by the RuntimeLibrary and DebugInformationFormat build system properties have been changed. By default, these build properties are specified in projects that are created by Visual C++ releases 7.0 through 10.0. If you migrate a project that was created by Visual C++ 6.0, consider whether to specify a value for these properties.
In Visual Studio 2010, RuntimeLibrary = MultiThreaded (
/MD) and DebugInformationFormat = ProgramDatabase (
/Zi). In Visual C++ 9.0, RuntimeLibrary = MultiThreaded (
/MT) and DebugInformationFormat = Disabled.
CLR
- The Microsoft C# and Visual Basic compilers can now produce a no primary interop assembly (no-PIA). A no-PIA assembly can use COM types without the deployment of the relevant primary interop assembly (PIA). When consuming no-PIA assemblies produced by Visual C# or Visual Basic, you must reference the PIA assembly on the compile command before you reference any no-PIA assembly that uses the library.
Visual Studio C++ projects and MSBuild
Visual Studio C++ projects are now based on the MSBuild tool. Consequently, project files use a new XML file format and a .vcxproj file suffix. Visual Studio 2010 automatically converts project files from earlier versions of Visual Studio to the new file format. An existing project is affected if it depends on the previous build tool, VCBUILD.exe, or project file suffix, .vcproj.
In earlier releases, Visual C++ supported the late evaluation of property sheets. For example, a parent property sheet could import a child property sheet, and the parent could use a variable defined in the child to define other variables. Late evaluation enabled the parent to use the child variable even before the child property sheet was imported. In Visual Studio 2010, a project sheet variable can't be used before it's defined because MSBuild supports only early evaluation.
IDE
The application termination dialog box no longer ends an application. In previous releases, when the
abort()or
terminate()function closed the retail build of an application, the C Run-Time Library displayed an application termination message in a console window or dialog box. The message said in part, "This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information." The application termination message was redundant because Windows subsequently displayed the current termination handler, which was usually the Windows Error Reporting (Dr. Watson) dialog box or the Visual Studio debugger. Starting in Visual Studio 2010, the C Run-Time Library doesn't display the message. Furthermore, the runtime prevents the application from ending before a debugger starts. This is a breaking change only if you depend on the previous behavior of the application termination message.
Specifically for Visual Studio 2010, IntelliSense doesn't work for C++/CLI code or attributes, Find All References doesn't work for local variables, and Code Model doesn't retrieve type names from imported assemblies or resolve types to their fully qualified names.
Libraries
The SafeInt class is included in Visual C++ and is no longer in a separate download. This is a breaking change only if you've developed a class that's also named "SafeInt".
The libraries deployment model no longer uses manifests to find a particular version of a dynamic link library. Instead, the name of each dynamic link library contains its version number, and you use that name to locate the library.
In previous versions of Visual Studio, you could rebuild the run time libraries. Visual Studio 2010 no longer supports building your own copies of the C run time library files.
Standard Library
The <iterator> header is no longer included automatically by many other header files. Instead, include that header explicitly if you require support for the standalone iterators defined in the header. An existing project is affected if it depends on the previous build tool, VCBUILD.exe, or project file suffix, .vcproj.iterator.
In the <algorithm> header, the checked_* and unchecked_* functions are removed. And in the <iterator>> header, the
checked_iteratorclass is removed, and the
unchecked_array_iteratorclass has been added.
The
CComPtr::CComPtr(int)constructor is removed. That constructor allowed a
CComPtrobject to be constructed from the NULL macro, but was unnecessary and allowed nonsensical constructions from non-zero integers.
A
CComPtrcan still be constructed from NULL, which is defined as 0, but will fail if constructed from an integer other than literal 0. Use nullptr instead.
The following
ctypemember functions were removed:
ctype::_Do_narrow_s,
ctype::_Do_widen_s,
ctype::_narrow_s,
ctype::_widen_s. If an application uses one of these member functions, you must replace it with the corresponding non-secure version:
ctype::do_narrow,
ctype::do_widen,
ctype::narrow,
ctype::widen.
CRT, MFC, and ATL Libraries
Support has been removed for users to build the CRT, MFC, and ATL libraries. For example, no appropriate NMAKE file is provided. However, users still have access to the source code for these libraries. And a document that describes the MSBuild options that Microsoft uses to build these libraries will probably be posted in a Visual C++ Team Blog.
MFC support for IA64 has been removed. However, support for the CRT and ATL on IA64 is still provided.
Ordinals are no longer reused in MFC module-definition (.def) files. This change means ordinals will not be different between minor versions, and binary compatibility for service packs and quick fix engineering releases will be improved.
A new virtual function was added to the
CDocTemplateclass. This new virtual function is CDocTemplate Class. The previous version of
OpenDocumentFilehad two parameters. The new version has three parameters. To support the restart manager, any class derived from
CDocTemplatemust implement the version that has three parameters. The new parameter is
bAddToMRU.
Macros and Environment Variables
- The environment variable __MSVCRT_HEAP_SELECT is no longer supported. This environment variable is removed and there is no replacement.
Microsoft Macro Assembler Reference
- Several directives were removed from the Microsoft Macro Assembler Reference compiler. The removed directives are
.186,
.286,
.286P,
.287,
.8086,
.8087, and
.NO87.
Visual Studio 2008 Breaking Changes
Compiler
The Windows 95, Windows 98, Windows ME, and Windows NT platforms are no longer supported. These operating systems have been removed from the list of targeted platforms.
The compiler no longer supports multiple attributes that were directly associated with ATL Server. The following attributes are no longer supported:
perf_counter
perf_object
perfmon
request_handler
soap_handler
soap_header
soap_method
tag_name
Visual Studio C++ projects
When upgrading projects from previous versions of Visual Studio, you might have to modify the WINVER and _WIN32_WINNT macros so that they are greater than or equal to 0x0500.
Beginning with Visual Studio 2008, the new project wizard doesn't have an option to create a C++ SQL Server project. SQL Server projects created by using an earlier version of Visual Studio will still compile and work correctly.
The Windows API header file Winable.h has been removed. Include Winuser.h instead.
The Windows API library Rpcndr.lib has been removed. Link with rpcrt4.lib instead.
CRT
Support for Windows 95, Windows 98, Windows Millennium Edition, and Windows NT 4.0 has been removed.
The following global variables have been removed:
_osplatform
_osver
_winmajor
_winminor
_winver
The following functions have been removed. Use the Windows API functions
GetVersionor
GetVersionExinstead:
_get_osplatform
_get_osver
_get_winmajor
_get_winminor
_get_winver
The syntax for SAL Annotations has changed. For more information, see SAL Annotations.
The IEEE filter now supports the SSE 4.1 instruction set. For more information, see _fpieee_flt_fpieee_flt.
The C Run-Time Libraries that ship with Visual Studio are no longer dependent on the system DLL msvcrt.dll.
Standard Library
Support for Windows 95, Windows 98, Windows Millennium Edition, and Windows NT 4.0 has been removed.
When compiling in debug mode with _HAS_ITERATOR_DEBUGGING defined (superseded by _ITERATOR_DEBUG_LEVEL after Visual Studio 2010), an application will now assert when an iterator attempts to increment or decrement past the bounds of the underlying container.
The member variable c of the stack Class is now declared protected. Previously, this member variable was declared public.
The behavior of
money_get::do_gethas changed. Previously, when parsing a monetary amount with more fraction digits than are called for by
frac_digits,
do_getused to consume them all. Now,
do_getstops parsing after consuming at most
frac_digitscharacters.
ATL
ATL can't be built without a dependency on CRT. In earlier versions of Visual Studio, you could use #define ATL_MIN_CRT to.
The macros PROP_ENTRY and PROP_ENTRY_EX have been deprecated and replaced with the macros PROP_ENTRY_TYPE and PROP_ENTRY_TYPE_EX for security reasons.
ATL/MFC Shared Classes
ATL can't be built without a dependency on CRT. In earlier versions of Visual Studio, you could use
#define ATL_MIN_CRTto.
MFC
CTimeClass: The
CTimeclassClass: Custom templates for the
CFileDialogclass can't be automatically ported to Windows Vista. They are still usable, but will not have the additional functionality or looks of Windows Vista style dialogs.
CWndClass and
CFrameWndClass: The
CWnd::GetMenuBarInfomethod was removed.
The
CFrameWnd::GetMenuBarInfomethod.
Visual Studio 2005 Breaking Changes
CRT
Many functions have been deprecated. See Deprecated CRT Functions.
Many functions now validate their parameters, halting execution if given invalid parameters. This validation may break code that passes invalid parameters and relies on the function ignoring them or just returning an error code. See Parameter Validation.
The file descriptor value -2 is now used to indicate that
stdoutand
stderraren't available for output, for example, in a Windows application that has no console window. The previous value used was -1. For more information, see _fileno.
The single-threaded CRT libraries (libc.lib and libcd.lib) have been removed. Use the multi-threaded CRT libraries. The
/MLcompiler flag is no longer supported. Non-locking versions of some functions have been added in cases where the performance difference between the multithreaded code and the single-threaded code is potentially significant.
The overload of pow, double pow(int, int), was removed to better conform with the standard.
The %n format specifier is no longer supported by default in any of the printf family of functions because it's inherently insecure. If %n is encountered, the default behavior is to invoke the invalid parameter handler. To enable %n support, use
_set_printf_count_output(also see
_get_printf_count_output).
sprintfnow prints the negative sign of a signed zero.
swprintfhas been changed to conform with the Standard; it now requires a size parameter. The form of
swprintfwithout a size parameter has been deprecated.
_set_security_error_handlerhas been removed. Remove any calls to that function; the default handler is a much safer way of dealing with security errors.
time_tis now a 64-bit value (unless _USE_32BIT_TIME_T is defined).
The
_spawn,
_wspawnFunctions now leave
errnountouched on success, as specified by the C Standard.
RTC now uses wide characters by default.
Floating-point control word support functions have been deprecated for applications compiled with
/CLRor
/CLR:PURE. The affected functions are
_clear87,
_clearfp,
_control87,
_controlfp,
_fpreset,
_status87,
_statusfp. You can disable the deprecation warning by defining _CRT_MANAGED_FP_NO_DEPRECATE, but the use of these functions in managed code is unpredictable and unsupported.
Some functions now return const pointers. The old, non-const behavior can be reinstated by defining _CONST_RETURN. The affected functions are
memchr, wmemchr
strchr, wcschr, _mbschr, _mbschr_l
strpbrk, wcspbrk, _mbspbrk, _mbspbrk_l
strrchr, wcsrchr, _mbsrchr, _mbsrchr_l
strstr, wcsstr, _mbsstr, _mbsstr_l
When linking with Setargv.obj or Wsetargv.obj, it's no longer possible to suppress the expansion of a wildcard character on the command line by enclosing it in double quotes. For more information, see Expanding Wildcard Arguments.
Standard Library (2005)
The exception class (located in the <exception> header) has been moved to the
stdnamespace. In previous versions, this class was in the global namespace. To resolve any errors indicating that the exception class can't be found, add the following using statement to your code:
using namespace std;
When calling
valarray::resize(), the contents of the
valarraywill be lost and will be replaced by default values. The
resize()method is intended to reinitialize the
valarrayrather than grow it dynamically like a vector.
Debug Iterators: Applications built with a debug version of the C-Runtime Library and which use iterators incorrectly might begin to see asserts at runtime. To disable these asserts, you must define _HAS_ITERATOR_DEBUGGING (superseded by _ITERATOR_DEBUG_LEVEL after Visual Studio 2010) to 0. For more information, see Debug Iterator Support
Visual C++ .NET 2003 Breaking Changes
Compiler
Closing parentheses now required for the defined preprocessor directive (C2004).
Explicit specializations no longer find template parameters from primary template (Compiler Error C2146).
A protected member (n) can only be accessed through a member function of a class (B) that inherits from the class (A) of which it (n) is a member (Compiler Error C2247).
Improved accessibility checks in compiler now detect inaccessible base classes (Compiler Error C2248).
An exception can't be caught if the destructor and/or copy constructor is inaccessible (C2316).
Default arguments on pointers to functions no longer allowed (Compiler Error C2383).
A static data member can't be initialized via derived class (Compiler Error C2477).
The initialization of a typedef isn't allowed by the standard and now generates a compiler error (Compiler Error C2513).
bool is now a proper type (Compiler Error C2632).
A UDC can now create ambiguity with overloaded operators (C2666).
More expressions are now considered valid null pointer constants (Compiler Error C2668).
template<> is now required in places where the compiler would previously imply it (Compiler Error C2768).
The explicit specialization of a member function outside the class isn't valid if the function has already been explicitly specialized via a template class specialization (Compiler Error C2910).
Floating point non-type template parameters are no longer allowed (Compiler Error C2993).
Class templates aren't allowed as template type arguments (C3206).
Friend function names are no longer introduced into containing namespace (Compiler Error C37 (Compiler Warning (level 1) C4346).
Functions that were incorrectly considered template specializations are no longer considered so (C4347).
Static data members can't be initialized via derived class (C4356).
A class template specialization needs to be defined before it is used in a return type (Compiler Warning (level 3) C4686).
The compiler now reports unreachable code (C4702).
See also
What's New for Visual C++ in Visual Studio
Feedback | https://docs.microsoft.com/en-us/cpp/porting/visual-cpp-change-history-2003-2015?redirectedfrom=MSDN&view=vs-2019 | CC-MAIN-2019-43 | en | refinedweb |
I’m trying to use Sentry in a typescript app that will run in the browser, so I’m including it like:
import * as Sentry from "@sentry/browser";
This gives me a compiler error,
Cannot find module '@sentry/browser'. ts(2307), even though I’ve updated my package.json to include both
@sentry/browser and
@sentry/types. I’ve been trying to figure out this error for a while, but I found a reference to this error being thrown if TypeScript can’t find an ambient declaration file for a third-party library (). Is this the probable cause, and does one of these declaration files exist somewhere?
Thanks! | https://forum.sentry.io/t/ambient-typescript-declaration-file/7816 | CC-MAIN-2019-43 | en | refinedweb |
Introduction
This is a tutorial for perimeter of shapes in java. The program calculates perimeter and prints it. The program is extendable. Go enjoy the program. Lets begin……….
Program for perimeter of circle in java.
//import Scanner as we require it. import java.util.Scanner; // the name of our class its public public class Perimeter { //void main public static void main (String[] args) { //declare float float r,peri; //print message System.out.println("Enter radius of circle:"); //Take input Scanner input = new Scanner(System.in); r = input.nextFloat(); //calculate peri = 2*3.142f*r; //print the perimeter System.out.println("Perimeter = "+peri); } }
Output
Enter radius of circle:
2
Perimeter = 12.568
Program for perimeter of rectangle in java.
[code language=”java”]
//import Scanner as we require it.
import java.util.Scanner;
// the name of our class its public
public class Perimeter {
//void main
public static void main (String[] args)
{
//declare float
float a,b,peri;
//print message
System.out.println(“Enter length and breadth of rectangle:”);
//Take input
Scanner input = new Scanner(System.in);
a = input.nextFloat();
b = input.nextFloat();
//calculate
peri = (a+b)*2;
//print the perimeter
System.out.println(“Perimeter = “+peri);
}
}
[/code]
Output
Enter length and breadth of rectangle:
2
1
Perimeter = 6.0
How it works
- The program prints the message to enter required inputs.
- The user enters inputs.
- Perimeter is calculated
- The perimeter is printed.
Extending it
The program can be extended by using more different shapes of geometry. You can take the necessary inputs and calculate the perimeter of the given geometry.
Explanation.
- Import the Scanner.
- Declare the class as public
- Add the void main function
- Add system.out.println() function with the message to enter required inputs.
- Declare input as Scanner.
- Take the inputs and save it in variable.
- Calculate perimeter and save it in other variable.
- Add system.out.println() function to print the calculated perimeter.
At the end.
You learnt creating the Java program for Perimeter of different shapes . So now enjoy the program.
Please comment on the post and share it.
And like it if you liked. | https://techtopz.com/java-programming-perimeter-of-shapes/ | CC-MAIN-2019-43 | en | refinedweb |
Telerik UI for Blazor 0.2.0 Free Preview Available
Telerik UI for Blazor 0.2.0 Free Preview Available
A look into how one dev team is working tying the latest update for Blazor and ASP.NET Core into their products and code.
Join the DZone community and get the full member experience.Join For Free
Telerik UI for Blazor 0.2.0 is available for download today. This release makes Telerik UI for Blazor compatible with Visual Studio 2019 preview 2, and ASP.NET Core 3.0 preview 2.
A few weeks ago we released an early preview of Telerik UI for Blazor. The early preview is intended to give developers a peek into the development process while giving everyone a chance to share their feedback on the product. We are working closely to align our goals with those of Microsoft's ASP.NET Core team, which is why we're happy to announce Telerik UI for Blazor 0.2.0. This new version aligns our UI components with the ASP.NET Core 3.0 preview 2 and Blazor 0.8.0 release.
What's New
In this release, you may not see a lot of changes on the surface, but we've diligently kept up with things that are happening beneath the surface of the framework. ASP.NET Core 3.0 preview 2 marks a milestone for Razor Components, the underlying component model for Blazor. In preview 2, Razor Components have been decoupled from Blazor, making the component model much more flexible (for more about this see the preview 2 announcements). The change shifted some dependencies and namespaces used to power Telerik UI for Blazor, prompting us to add compatibility with the new release.
Telerik UI for Blazor is now compatible with:
- Visual Studio 2019 Preview 2
- ASP.NET Core 3.0 Preview 2
- Blazor 0.8.0
Going forward, Telerik UI for Blazor 0.1.0 is only compatible with Visual Studio 2017. You will need to migrate your experiments along with your tooling to Visual Studio 2019 to utilize 0.2.0.
Besides supporting the new, latest bits, some small changes were made to the Grid component:
- Grid columns are now better defined. In this version, Grid columns have improved semantics through nesting. The
KendoGridColumnscontainer clearly defines where columns begin in the Grid and gives us a framework to build upon. In the example below the columns are clearly defined and have different nesting from the
RowTemplate.
<KendoGrid Data=@GridData> <RowTemplate ... /> <KendoGridColumns> <KendoGridColumn Field=@nameof(Product.ProductName)/> <KendoGridColumn Field=@nameof(Product.UnitPrice)/> </KendoGridColumns> </KendoGrid>
HTML
- Grid Height is now a supported parameter, which allows the grid height to be defined programmatically.
<KendoGrid Data=@GridData Height=@Height>
HTML
- Code samples are here! We really wanted to help you hit the ground running. With the ecosystem changing rapidly, it's best to show solid examples of how to get things done. This is why we have released a new GitHub repository showing of our components doing cool things like templates, editing, and swapping themes. You can clone the samples here:
Stay Tuned
We're committed to Telerik UI for Blazor and plan to release more content in the form of video, blogs, and samples on GitHub. If you're watching with anticipation for what's coming next, make sure to visit our blogs frequently for news and announcements. For more insight into all things Blazor, you can watch me (Ed Charbeneau, Progress developer advocate), each week Friday at 12:00 pm EST on Twitch.
Join the Early Preview legacy JavaScript dependencies (there's no jQuery this time folks). Throughout the year Microsoft will be working on Blazor (a.k.a..
A Special Installation Note
For the Telerik UI for Blazor version 0.2.0 to show in your NuGet feed, you must visit the download page and create an account, or sign in. You will need to repeat this process even if you already have early preview 0.1.0. Visiting this page will activate your NuGet feed. For more information about adding the Telerik NuGet feed please see our documentation.
Published at DZone with permission of Ed Charbeneau , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/telerik-ui-for-blazor-020-free-preview-available?fromrel=true | CC-MAIN-2019-43 | en | refinedweb |
My views.py File Is Quickly Becoming Large. Should I Refactor It?
Eventually, every Django project reaches a state when you ask yourself: should I refactor this views.py file?
Maybe you find yourself jumping around randomly in a huge view.py file. Or it’s just the line count which seems larger than it should be. Are 500 lines too much? How about 2000?
Here are a few questions you can ask yourself to see if you should be worried, and a simple method to make large views.py files more manageable.
Is it really an issue already?
Refactoring should be motivated by an actual problem you’re experiencing. If you can navigate your views without issues, and your work isn’t negatively influenced, maybe you’re not doing anything wrong.
Being worried about line count should make you weary, but large files are not necessarily a bad thing. A huge file can help you spot underlying existing problems, which should be addressed.
Thin views and fat models
A large views.py file, might indicate that you’re doing too much in your views. Is there logic which would be better-off in your models, managers or forms?
Reusable parts
If there are functions which are not views, you could extract them to a single file of helper functions, and import it in your views. Is there maybe even duplicated functionality which could be extracted out of your views?
A views submodule
Split your single views.py file up into multiple ones. Without having to change imports and urls in other places.
It’s pretty straightforward to transition from a single views.py towards a submodule.
First, you need to create a new directory next to your views.py file, called
views.
Move your views.py file into it, and create an
__init__.py file:
views ├── __init__.py └── views.py
The
__init__.py file should have the following content:
from views import *
At this point, your application will work as expected, but you can begin splitting parts of the big views.py file into smaller parts.
If you have views, which are related to each other, you can add a new file called topic.py and move them there. The
views folder now looks like this:
views ├── __init__.py ├── topic.py └── views.py
Make sure your
__init__.py imports the content of topic.py, and you’re good to go!
from views import * from topic import *
This way, you can split a big views.py file into multiple ones by their topic, and will have an easier time navigating your views in the future, instead of having to scroll through a growing single views.py file.
Splitting files into modules in this fashion is a nice step, before you decide to create dedicated apps in your project. | https://vsupalov.com/large-django-views-file/ | CC-MAIN-2019-43 | en | refinedweb |
).
To implicitly link to a DLL, executables must obtain the following from the provider of the DLL:
A header file (.h file) containing the declarations of the exported functions and/or C++ classes. The classes, functions, and data should all have__declspec..
To successfully create and compile QT applications and use libdln.a library in Linux, you need to correctly configure and setup QT. This steps should be performed after you successfully performed steps from Software & Hardware Installation in Linux page.
First, download QT sources from QT website (...), unpack archive, open terminal in unpacked folder with QT, configure QT with-release and-static flags, then compile QT sources and install QT. For this example QT 4.8.5 version was used.
./configure -release -nomake demos -nomake examples make make install
After QT is compiled you can compile and run any QT application by using properqmake project_name from application sources folder. You can use terminal or QT Creator for creating applications.
device_list_gui project compilation from terminal:
path_to_qmake/qmake device_list_gui make
To use libdlb.a library in your application project, you need to add the following to project .pro file:
QMAKE_LFLAGS += -static-libgcc LIBS += /usr/local/lib/libdln.a # path to libdln.a library
Also do not forget to include required header .h files to your sources for successful usage API functions from libdln.a. For example:
#include "../common/dln.h" #include "../common/dln_generic.h" | http://dlnware.com/print/book/export/html/15 | CC-MAIN-2019-43 | en | refinedweb |
I’ve been waiting for this feature for what feels like forever.
Databricks-Connect is here! You can download here. It allows you to develop using an IDE like VSCode, PyCharm, IntelliJ etc and connect to a remote Databricks cluster to execute the task.
This means we can bring much better development experiences and best practices to data engineering workloads. Notebooks are great for exploring data, but they are not enterprise code for ETL jobs. They are near impossible to test, have zero concept of classes and methods which developers expect, and they are really designed for interactive use - not batch processing.
Previously you could create a PySpark application and execute it as a job. But this was very clunky - and you missed all the good features of Databricks like Delta, DBUtils etc.
Setup is pretty straightforward. The link above has detailed instructions, but in short I’ve summarised below.
Windows Users
UPDATE April 2019 - I recommend Windows users read through this blog post before continuing. Mac/Linux users - as you were.
Cluster Setup
First you need to enable the feature on your Databricks cluster. Your cluster must be using Databricks Runtime 5.1 or higher. In the web UI edit your cluster and add this/these lines to the spark.conf:
spark.databricks.service.server.enabled true
If you are using Azure Databricks also add this line:
spark.databricks.service.port 8787
(Note the single space between the setting name and value).
Restart your cluster.
Virtual Environment
Create a new Virtual environment, ensuring that Python matches your cluster (2.7 or 3.5). If you are using Anaconda then this command will create it for you:
conda create --name dbconnect python=3.5
Switch to the environment:
conda activate dbconnect
If you are re-using an existing environment uninstall PySpark before continuing. Now install the Databricks-Connect library:
pip install -U databricks-connect==5.1.* # or 5.2.*, etc. to match your cluster version
Configure Library
At prompt run:
databricks-connect configure
Complete the questions - they are pretty straightforward. Once done you can run this command to test:
databricks-connect test
If you get any errors check the troubleshooting section. But hopefully you are good to go.
VSCode
I’m going to use VSCode because it’s my tool of choice. You should install the Python extension first (if you haven’t got it already). Also consider disabling linting because you will get lots of red squiggles.
Create a new py file in any folder and paste in this code:
from pyspark.sql import SparkSession from pyspark.sql.functions import lit, col spark = SparkSession.builder.getOrCreate() # Extract df = spark.read.format("csv").option("header", "true").load("/databricks-datasets/asa/planes") # Transform df = df.withColumn("NewCol", lit(0)).filter(col("model").isNotNull()) # Load df.write.format("delta").mode("overwrite").saveAsTable("planes") # Verify resDf = spark.sql("SELECT * FROM planes") resDf.show()
You now need to ensure you have the right interpreter. From the Command Palette type: “select interpreter” and press enter:
Select your virtual environment that you created above:
You will only have to do that once. You can now execute your code by pressing F5, hopefully you will see this:
Debugging
Now for the heavenly bit. You can now debug by adding breakpoints to your code. Simply add a breakpoint, then you can hover over variables to view them etc! No more debugging 1980’s style with Print statements everywhere.
You can also use the Peak Definition and Go To Definition options in VSCode:
Thats awesome!!!!!
Notes
A few notes: in you Python files you will need to add these lines at the start of PySpark modules (normally notebooks do this for you in the background):
from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate()
If you want to use DBUtils you will need to run this first:
from pyspark.dbutils import DBUtils dbutils = DBUtils(spark.sparkContext)
Note that DBUtils will work locally but will not work if you deploy your code to your cluster and execute server side - this is a known issue.
There are some limitations with Databricks-Connect you should be aware of before getting too far in.
Wrap Up
I think that this is a huge step forward for data engineering in Databricks. I will post some more blogs around best practices and getting this working with CI/CD tools. | https://datathirst.net/blog/2019/3/7/databricks-connect-finally | CC-MAIN-2019-43 | en | refinedweb |
And one more thing, this is my source code:
public static void main(String str[]) throws Exception {
Map<String, Object> input = new LinkedHashMap<>();
SXSSFWorkbook workbook = new SXSSFWorkbook();
OutputStream os = new FileOutputStream(
new File("/home/gaian/Desktop/Sal.xlsx"));
workbook.setCompressTempFiles(true);
SXSSFSheet sheet = workbook.createSheet("firstSheet");
SXSSFRow row = sheet.createRow(0);
SXSSFCell cell = row.createCell(0);
cell.setCellValue("salary");
cell.setCellType(CellType.STRING);
row = sheet.createRow(1);
cell = row.createCell(0);
cell.setCellValue(new Double("4.0"));
cell.setCellType(CellType.NUMERIC);
workbook.write(os);
os.close();
workbook.close();
System.out.println("done creating excel workbook...");
}
I am creating a workbook and writing a value (4.0) as double in a sheet.
But when I open the sheet, I see the integral value 4 instead of decimal
4.0.
Any settings I need to do in my code to get the value 4.0?
Thanks.
Thanks,
On Thu, May 17, 2018 at 9:07 AM, Syed Mudassir Ahmed <
syed.mudassir@gaianconsultants.com> wrote:
> Thanks so much Tim and Fanning. Both of your suggestions are working
> out. I would suggest better to have such example in the test case section
> of your source folder.
>
> Thanks,
>
>
> On Wed, May 16, 2018 at 6:49 PM, Tim Allison <tallison@apache.org> wrote:
>
>> You need to make your SAXReader namespace aware:
>>
>> saxFactory.setNamespaceAware(true);
>>
>>
>> On Wed, May 16, 2018 at 8:59 AM, Tim Allison <tallison@apache.org> wrote:
>>
>> > Sorry for my delay. I just tested your file with Apache Tika 1.18 which
>> > uses POI 3.17..., and I got:
>> >
>> >
>> > <body><div><h1>Sheet1</h1>
>> > <table><tbody><tr> <td>Salary</td></tr>
>> > <tr> <td>99.965432</td></tr>
>> > </tbody></table>
>> > </div>
>> > <div><h1>Sheet2</h1>
>> > <table><tbody/></table>
>> > </div>
>> > <div><h1>Sheet3</h1>
>> > <table><tbody/></table>
>> > </div>
>> > </body></html>
>> >
>> > That's promising... Let me take a look at your example code.
>> >
>> > On Wed, May 16, 2018 at 1:31 AM, Syed Mudassir Ahmed <syed.mudassir@
>> > gaianconsultants.com> wrote:
>> >
>> >> any update on this pls? This is blocking me.
>> >>
>> >> Thanks,
>> >>
>> >>
>> >> On Tue, May 15, 2018 at 3:45 PM, Syed Mudassir Ahmed <
>> >> syed.mudassir@gaianconsultants.com> wrote:
>> >>
>> >>> Yes, pls find the file attached here.
>> >>>
>> >>> Thanks,
>> >>>
>> >>>
>> >>> On Tue, May 15, 2018 at 3:43 PM, Tim Allison <tallison@apache.org>
>> >>> wrote:
>> >>>
>> >>>> Any chanc you can share the file?
>> >>>>
>> >>>> On Tue, May 15, 2018 at 3:19 AM Syed Mudassir Ahmed <
>> >>>> syed.mudassir@gaianconsultants.com> wrote:
>> >>>>
>> >>>> > Hi,
>> >>>> > I am trying to read data from a XLSX sheet via
>> >>>> XSSFSheetXMLHandler. The
>> >>>> > source code is below.
>> >>>> >
>> >>>> > public static void main(String str[]) throws Exception {
>> >>>> > String filePath
>> >>>> > = "/home/gaian/Desktop/salary.xlsx";
>> >>>> > File file = new File(filePath);
>> >>>> > InputStream inputStream = new FileInputStream(file);
>> >>>> > OPCPackage pkg = OPCPackage.open(inputStream);
>> >>>> >
>> >>>> > SheetContentsHandler sheetContentsHandler = new
>> >>>> > SheetContentsHandler() {
>> >>>> > @Override
>> >>>> > public void startRow(int rowIndex) {
>> >>>> > }
>> >>>> >
>> >>>> > @Override
>> >>>> > public void endRow(int i) {
>> >>>> > }
>> >>>> >
>> >>>> > @Override
>> >>>> > public void cell(String cell, String formattedValue,
>> >>>> > XSSFComment c) {
>> >>>> > System.out.println("cell encountered with
>> addess:<" +
>> >>>> cell
>> >>>> > + "> and value:<" + formattedValue
+ ">");
>> >>>> > }
>> >>>> >
>> >>>> > @Override
>> >>>> > public void headerFooter(String text, boolean isHeader,
>> >>>> String
>> >>>> > tagName) {
>> >>>> > System.out.println("headerFooter()");
>> >>>> > }
>> >>>> > };
>> >>>> >
>> >>>> > ReadOnlySharedStringsTable strings = new
>> >>>> > ReadOnlySharedStringsTable(pkg);
>> >>>> > XSSFReader xssfReader = new XSSFReader(pkg);
>> >>>> > StylesTable styles = xssfReader.getStylesTable();
>> >>>> > XSSFReader.SheetIterator worksheets =
>> >>>> (XSSFReader.SheetIterator)
>> >>>> > xssfReader.getSheetsData();
>> >>>> > InputStream stream = worksheets.next();
>> >>>> > SAXParserFactory saxFactory =
>> SAXParserFactory.newInstance();
>> >>>> > XMLReader sheetParser = saxFactory.newSAXParser().getX
>> >>>> MLReader();
>> >>>> >
>> >>>> > ContentHandler handler
>> >>>> > = new XSSFSheetXMLHandler(styles, strings,
>> >>>> > sheetContentsHandler, false);
>> >>>> >
>> >>>> > sheetParser.setContentHandler(handler);
>> >>>> > sheetParser.parse(new InputSource(stream));
>> >>>> > }
>> >>>> >
>> >>>> > When I use the POI version 3.13, I am getting the following
>> output:
>> >>>> >
>> >>>> > cell encountered with addess:<A1> and value:<Salary>
>> >>>> > cell encountered with addess:<A2> and value:<99.965432>
>> >>>> >
>> >>>> > The moment I switch to version 3.14 or higher, I am no longer
>> >>>> getting
>> >>>> > any output.
>> >>>> >
>> >>>> > Can someone pls let me know if any more code changes needed
if I
>> >>>> switch
>> >>>> > to 3.14 or higher? I even checked the test cases in Apache
POI
>> 3.17
>> >>>> > sources but was shocked not to find any there. Any
>> >>>> example/references that
>> >>>> > I can go through pls? This is blocker for one of my applications.
>> >>>> >
>> >>>> >
>> >>>> > Thanks,
>> >>>> >
>> >>>> >
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>
> | http://mail-archives.apache.org/mod_mbox/poi-user/201805.mbox/%3CCAKbb=ruLvc4YWyTrC1JcMh+-Kc3afNsD3CoeB4SF85oApWpkxQ@mail.gmail.com%3E | CC-MAIN-2019-43 | en | refinedweb |
Provided by: aolserver4-dev_4.5.1-15_amd64
NAME
Ns_InfoAddress, Ns_InfoBootTime, Ns_InfoBuildDate, Ns_InfoConfigFile, Ns_InfoErrorLog, Ns_InfoHomePath, Ns_InfoHostname, Ns_InfoLabel, Ns_InfoNameOfExecutable, Ns_InfoPid, Ns_InfoPlatform, Ns_InfoServerName, Ns_InfoServerVersion, Ns_InfoServersStarted, Ns_InfoShutdownPending, Ns_InfoStarted, Ns_InfoTag, Ns_InfoUptime, Ns_PageRoot - Get server information
SYNOPSIS
#include "ns.h" char * Ns_InfoAddress(void) int Ns_InfoBootTime(void) char * Ns_InfoBuildDate(void) char * Ns_InfoConfigFile(void) char * Ns_InfoErrorLog(void) char * Ns_InfoHomePath(void) char * Ns_InfoHostname(void) char * Ns_InfoLabel(void) char * Ns_InfoNameOfExecutable(void) int Ns_InfoPid(void) char * Ns_InfoPlatform(void) char * Ns_InfoServerName(void) char * Ns_InfoServerVersion(void) int Ns_InfoServersStarted(void) int Ns_InfoShutdownPending(void) int Ns_InfoStarted(void) char * Ns_InfoTag(void) int Ns_InfoUptime(void) char * Ns_PageRoot(char *server) _________________________________________________________________
DESCRIPTION
These functions return information about the server. Many of the functions return pointers to strings or other types of information which, in most cases, you must not free. These are denoted as "read-only" in the sections below. Ns_InfoAddress() Return the server IP address of the server. The IP address is defined in the server configuration file. The IP address is returned as a string pointer which you must treat as read-only. If you want to alter the string, you must use ns_strdup to copy the string to another location in memory and modify that instead. Ns_InfoBootTime() Return the time that the server was started as an int. Treat the result as time_t. Ns_InfoBuildDate() Return the date and time that this server was compiled as a string pointer. Treat the result as read-only. Ns_InfoConfigFile() Return the absolute path name of the configuration file in use as a string pointer. Treat the result as read-only. Ns_InfoErrorLog() Return the name of the error log as a string pointer. Treat the result as read- only. The name may be just a name, a relative path or an absolute path depending on how it is defined in the server configuration file. Ns_InfoHomePath() Return the absolute directory path where AOLserver is installed as a string pointer. Treat the result as read-only. Ns_InfoHostname() Return the hostname of the host that AOLserver is running on as a string pointer. The gethostname(2) function is used. If gethostname(2) fails to return a hostname, "localhost" is used instead. Treat the result as read-only. Ns_InfoLabel() Return the source code label for AOLserver as a string pointer. Statically defined in the source code. If no label was used, "unlabeled" is returned. You can use these functions to provide the source code label when you report problems with the server. Treat the result as read-only. Ns_InfoNameOfExecutable() Return the name of the running executable as a string pointer. Treat the result as read-only. Ns_InfoPid() Return the pid of the running AOLserver executable as an int. Ns_InfoPlatform() Return the platform name as a string pointer, e.g. "linux". Treat the result as read-only. Ns_InfoServerName() Return the AOLserver name string, e.g. "AOLserver". Statically defined in the source code. Treat the result as read-only. Ns_InfoServerVersion() Return the AOLserver version string, e.g. "3.5.2". Statically defined in the source code. Treat the result as read-only. Ns_InfoServersStarted() Return TRUE if the server has started, i.e., if initialization and module loading is complete. This is a compatibility function that calls Ns_InfoStarted. Ns_InfoShutdownPending() Return TRUE if there is there a shutdown pending, i.e. if an INTR signal has been received or if ns_shutdown has been called. Ns_InfoStarted() Return TRUE if the server has started, i.e., if initialization and module loading is complete. Ns_InfoTag() Return the CVS tag of this build of AOLserver. Statically defined in the source code. The value may be meaningless. Treat the result as read-only. Ns_InfoUptime() Return how long, in seconds, AOLserver has been running. Ns_PageRoot(server) Return the path name of the AOLserver pages directory for a particular server as a string pointer. The server argument is not used. Treat the result as read-only.
SEE ALSO
nsd(1), info(n) | http://manpages.ubuntu.com/manpages/precise/man3/Ns_InfoAddress.3aolserver.html | CC-MAIN-2019-43 | en | refinedweb |
STORAGE TANKS
Storage tanks can be built on high grounds in which case they are termed ground-level reservoirs, or they are elevated reservoirs.
Ground-level reservoirs are usually built of masonry, mass concrete, or reinforced concrete, according to the materials and local skill available (see below)
A = Cross-sections of reservoir
B = Types of walls for reservoirs
C = Sketch detail of manhole opening in reservoir cover
D = Typical valve arrangement for ground-level reservoir with two compartments
Aa = Effluent
Bb = Supply
Cc =Overflow
Dd = Drain
A = Ground level
B = Water-bearing formation
C = Impervious stratum
D = Collection chamber
E = Opening protected by a stone-and-gravel pack in order to exclude sand and debris
F = Collecting room
G = Measuring weir
H = Measuring rod, bottom of which is level with lower edge of weir
I = Outlet pipe to reservoir or town
J = Floor drainage
K = Locked entrance door
L = Screened opening through door for ventilation purpose
M = Diversion ditch for surface run-off. Should be at least 15 m (49 ft) away from the collection structure
A = Protective drainage ditch to keep drainage water a safe distance from spring
B = Original slope and ground line
C = Screened outlet pipe: can discharge freely or be piped to village or residence
Springs can offer an economical and safe source of water. A thorough search should be made for signs of ground-water outcropping. Springs that can be piped to the user by gravity flow should be checked.
A = Protective drainage ditch to keep drainage water a safe from spring
B = Screened outlet pipe: to discharge freely or be piped to village or residence
In order to prevent leakage in the reservoirs, the following should be done:
1. Build concrete walls with as few joints as possible
2. Copper or polyethylene strips should be built in vertical joints if possible.
3. Paint the whole inside surface with a bitumen compound or with a solution of sodium silicate (water glass).
4. Render interior surface with about 3/4 inch thickness of mortar composed of water-proof cement and sand, after thoroughly rough ending the surface to be rendered to ensure a good key.
Elevated reservoirs may be of reinforced concrete or-of steel. Reinforced concrete is suitable when many tanks of similar size are to be built in a series of villages, so that the system is used over and over again. The construction techniques involved are the same as for ground-level storage, except that the elevating walls should be built first.
Steel reservoirs are suitable for single reservoir plans. The tank can be ordered from the manufacturers and comes complete with the accompanying assembly manual which is easy to follow. The tower foundations are to be locally built of concrete.
Steel reservoirs can also be used for ground-level tanks on rocky sites or in areas where masonry rocks are scarce. In such cases, the tank must be slightly elevated to allow painting of lower parts. Elevated storage tanks have valves to stop overflowing. When a float valve is used to control the level in the tank, the overflow should never come into action if the valve is working properly. In the case of a "floating" tank it is usual to control the inflow through a float valve and the outlet joins the delivery pipe through a non-return (see Fig. 48 ). A depth gauge operated by a float and wire shows the amount of water within the tank, and is visible from the outside.
Outlet always taken from 6 in above tank floor; wash-out at extreme bottom of tank
A = Diagrammatic arrangement of pipes when overhead tank acts as balancer ( floating tank ). Not suitable for use with reciprocating pumps.
B = Diagrammatic arrangement of pipes when pumping direct to storage tank
When a float valve is not used, there is no control on the depth of water except the intelligence of the operator of the supply pump and the overflow, and carelessness in adjusting the hours of pumping to the draw-off can result in considerable waste, while the farther the tank is from the pump-house the easier it is to overlook such waste. The simple indicator shown below is one way of reducing this to the minimum as, properly sited, it can be seen for a considerable distance. However, the nearer the tank is to the pump-house the easier this control becomes.
A = Suitable indicator for top two three feet of water in tank
B = Appearance of indicator from a distance; it should be orientated so that it appears against the skyline from observation point; it can be seen clearly a mile away.
C = Section at a, showing construction and operation of lower indicator
In the construction of storage facilities, the following provisions should be made:
1. Manhole covers must be tightly fitting to prevent surface water from entering the reservoir . They should be locable.
2. Surface covers must be water-tight and light-proof to prevent algae growth.
3. Ventilation must be included to let out air as water fills the tank. These must be covered with fine-mesh wires (not less than 18-mesh).
4. Inlet and outlet pipes, overflow and wash-out pipes should have mesh at their open ends. The outlet pipe should be 6 in. above the bottom of the tank. If the tank has concrete floors, the floor should slope towards the wash-out pipes to enhance cleaning. The diagrams below illustrate the proper design for a concrete storage tank.
WATER PURIFICATION SYSTEM
Water purification systems are usually incorporated in the storage tanks. Where only disinfection (chlorination) is required, the treatment tank can act as distributing reservoir. The cistern is a typical storage-purifier combination.
The cistern filter is a sand filter which keeps organic matter from entering the cistern. The water may then be disinfected and stored in the cistern. The diagrams below show the construction design for such a filter.
A catchment area always collects leaves, bird droppings, road dust, insects, etc. A cistern filter removes as much of these as possible before the water enters the cistern.
The sand filter is usually built at ground level and the filtered water runs into the cistern, which is mostly underground. The largest pieces, such as leaves, are caught in the splash plate. The splash plate also serves to distribute the water over the surface of the filter, so that the water does not make holes in the sand. A piece of window screen forms the splash plate.
Most filters are made too small to handle the normal rush of water from rainstorms. This results in the filter always overflowing or a channel being dug in the sand, which will ruin the filter. The filter area should be not less than one-tenth of the catchment area. A typical filter area would be 4 feet by 4 feet for a family-sized unit with average rainfall intensity.
About every 6 months, the manhole cover to the filter must be removed and the filter cleaned. Remove all matter from the splash plate and scrape off and remove the top half-inch of sand. When the depth of sand becomes only 12 inches, rebuild it with clean sand to the original depth of 18 inches.
A simple way to discard the first runoff from the roof, which is usually mostly leaves and dirt, should be provided. This will make your filter last longer between cleanings. The easiest way is to have a butterfly valve (like a damper in a stovepipe) in the downspout. After the rain has washed the roof, the valve is turned to allow the runoff water to enter the filter. A semiautomatic system is shown in Fig.
When building the filter, it is important to insure easy cleaning and to use properly-sized sand and gravel. The filter is usually mounted right on the cistern but can also be close to it. It rust have a screened overflow.
Water Purification Plant
Tools and Materials
3 barrels, concrete tanks or 55-gallon drums
1 eight inch funnel or sheet metal to make a funnel
2 smaller tanks, about 5 gallon or 20 liters in size, equipped with float valves
4 shut-off valves
1 throttle or needle valve (clamps may be used instead of the valves, if hose is used) some pipe or hose with fittings hypochlorite of lime or sodium hypochlorite (laundry bleach)
This plant can be used in small systems, using laundry bleach as a source of chlorine.
The water purifier should be made as in the drawing. The two large barrels on top of the structure are for weakening the bleach. The two smaller tanks on the shelf below are for holding equal amounts of weakened bleach solution and of water, at a constant pressure. This makes a constant flow of the solution water, at the same speed, into the hoses leading to the mixing points. The mix is further controlled by the valves and may be seen through the open funnel. If a throttle valve is not available, a shut-off valve may be used and a throttle action obtained by this valve and valve #4 in series.
Placing the two barrels at a height of 10 feet causes a pressure of only about five pounds a square inch. Thus the plumbing does not have to be of high quality except for valve #1 and the float valve of the water holdup tank, if the rain water supply is under higher pressure.
Sometimes special chlorinators are required; in which case when hypachlorinators are ordered, the following data should be furnished to the manufacturers:
If water is pumped:
1. Sketch of pumping installation
2. Number and type of pumps
3. Manual or automatic operation
4. Pumping rate (liters/second or gallons/minute) and total water pumped per day (cubic meters or gallons)
5. Electric current available (volts, phase, cycle)
6. Pressure on pump discharge (minimum and maximum)
7. Suction lift
8. Sizes of suction and discharge pipes
9. Other data (space available for installation, sizes of foot valves, check valves, etc.)
For gravity system:
1. Sketch of system, indicating source of water supply and distances
2. Size of main
3. Size of meter, if any, giving make and description
4. Pressure at meter or point of installation (minimum and maximum)
5. Rate of flow (minimum and maximum)
6. Average daily flow (cubic meters or gallons per day)
7. Fire flow, if any (liters/second or gallons/minute)
8. Allowable loss of pressure (m or ft)
9. Other data (space available for installation, etc.)
Boiler for Potable Water
Sometimes it is easier to boil drinking water than to disinfect. The following design can provide enough safe water for a smell community with a distribution system, since it would require a lot of fuel to boil enough water for the system.
Tools and Materials
1 - 55 Gallon drum
1 - 3/4" Pipe Nipple 2" long. Quantity of bricks for two layers of bricks to support drum.
1 - bag of cement plus sand for mortar and base of fireplace.
1 - large funnel and filter medium for filling.
1 - metal plate to control draft in front of firebox.
1 - 3/4" valve, preferably all metal such as a gate valve to withstand heat.
This drum for boiling of drinking water is intended for use in your residence to provide a convenient method for preparation and storage of sterile water. The fireplace is simple, oriented so that the prevailing wind or draft goes from front to back of the drum between the bricks. A chimney can be provided but is not necessary.
The unit has been tested in many Friend's workcamps in Mexico and elsewhere. A 55 gallon drum would normally last a 20 person camp group for an entire week, and certainly would provide adequate safe water supply for two or three individuals for a much longer time. Water must boll at least 15 minutes with steam escaping around the completely loosened filler plug. Be sure that the water in the pipe nipple and valve reach boiling temperatures by purging about two liters of water out through the valve while the drum is at a full boil. | http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0hdl--00-0----0-10-0---0---0direct-10---4-------0-0l--11-en-50---20-about---00-0-1-00-0-0-11-1-0utfZz-8-10-0-0-11-10-0utfZz-8-00&a=d&c=hdl&cl=CL3.43&d=HASH72d6a825afcdab4026ef60.7.6 | CC-MAIN-2019-43 | en | refinedweb |
Memo
Take and organize notes like text messages.
I’ve written articles about Simple Linear Regression and Multiple Linear Regression. Another type of regression analysis is called a logistic regression. Like all simple and multiple regression analysis, logistic regression is also a predictive analysis. The only difference is that while simple and multiple regression returns a quantitative response, logistic regression returns a binary response (success/failure, yes/no, 1/0).
We will model the success probability as
p = P(response = 1). The value
p will
depend on a quantitative predictor
p = P(response = 1), so
p = p(X).
Logistic regression is modeled by the sigmoid curve; and while
there are many solutions for the problem, the most common solution
is the logit function. Since
p(X) will return the
probability of success, we will set
p(x) to the logit
function.
In this article, I’m going to use SciKit-Learn to perform the regression analysis on a problem. The problem I’m attempting to solve was an example given by my Professor at San Jose State University.
The Challenger disaster in January 1986 was caused by the failure of an O-ring. The incidence could have been prevented because data on failure of these O-rings (as a function outside air temperature) was available at the time of the shuttle launch.
On the morning of January 28, 1986 the air temperature was about 31°F. Even though this value is outside the range of observed temperatures, use the logit model to predict the probability of O-ring failure for the Challenger flight.
X will be our temperature in a 2D array and our response will be our variable y, which is also a 2D array.
X = [[53.0],[56.0],[57.0],[63.0],[66.0],[67.0],[67.0],[67.0],[68.0],[69.0],[70.0],[70.0],[70.0],[70.0],[72.0],[73.0],[75.0],[75.0],[76.0],[76.0],[78.0],[79.0],[80.0],[81.0]] y = [[1.0],[1.0],[1.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[1.0],[0.0],[1.0],[0.0],[0.0],[0.0],[1.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0]]
Import our necessary Python packages. In this case, we only need
SciKit-Learn. From SKLearn, we want to import
LogisticRegression()
from sklearn.linear_model import LogisticRegression
Next, we want to create a logistic regression model with 2
parameters:
C and
solver.
Solver: According to the documentation, the solver parameter specifies the type of optimization algorithm. An optimization algorithm in mathematics is an iterative procedure that tries to find the best solution. In this example problem, we will use the Broyden–Fletcher–Goldfarb–Shanno algorithm. If you want to learn more about this algorithm, refer to this wikipedia page
C: Allows us to specify the strength of the regularization. In other words, the higher the number, the closer the algorithm will try to get to the right prediction. However, setting a too-high of a number would cause your model to overfit. Too small would not reach the optimal solution. In this example, we'll try 25.
model = LogisticRegression(C=25, solver='lbfgs')
Next, we want to fit our data to the model. We do this by simply using the
fit() function.
model.fit(X,y)
After running your code, we can grab the coefficients from our logistic regression model. We do this by using the
coef_ and
intercept_ functions.
m = model.coef_ b = model.intercept_ print(b,m)
intercept_ = [11.74238757] coef_ = [[-0.18837235]]
Based on the model, we can interpret that as temperature increases by one unit, the odds ration will change by a factor of
e**m. In this case,
m = -0.18837235
The question, what is the probability of failure when the temperature outside is 31°F. Simply let p(31) = ? where X as our predictor equal to 31.
model.predict_proba([[31]])
0.99727578
That means, there is a 99.7% chance of failure if the temperature outside is at 31°F. With that being said, it is no surprise that the challenger exploded in mid-flight. It would be surprising if it did not explode.
This example covered logistic regression with one predictor. SciKit-Learn is capable of performing logistic regression with more than one predictor. The only difference we need to change is the solver, C, and the multi_class parameters.
• SciKit-Learn Logistic Regression Documentation
• Multivariate Logistic Regression | https://articlesbycyril.com/statistics/logistic-regression-with-scikit-learn.html | CC-MAIN-2019-43 | en | refinedweb |
This topic covers how to print a Tree List layout and show a print preview for the control.
The TreeList.ShowPrintPreview and TreeList.ShowRibbonPrintPreview methods
open a window with print commands and a print preview of the current Tree List control. When using the first method, the print commands are displayed using the Bars UI, while the second method displays the print commands using a Ribbon UI. Using the Preview window, an end-user is able to customize the page settings (the page format, margins and orientation), provide a background image for pages, specify which Tree List elements must be printed, export the Tree List to various formats (PDF, HTML. XLS, Image, etc), and so on.
The TreeList.Print method prints the Tree List control immediately, without showing a preview, using the default page settings. Before calling this method, you can specify which Tree List elements must be printed. To do this, use the properties provided by the TreeList.OptionsPrint object. This object also provides settings that specify whether to expand nodes before printing, whether columns must be stretched to fit the page width, etc.
You can also use the TreeList.PrintDialog method to display the standard Print dialog, which allows you to select a printer and its settings, and then start or cancel the print operation.
The TreeList.ShowPrintPreview, TreeList.ShowRibbonPrintPreview and TreeList.Print methods provide basic printing capabilities. For information on advanced printing capabilities (e.g., setting the paper size and margins beforehand, add headers and footers to the printout, etc.), see How to: Set Paper Format and Add Custom Information to the Report when Printing/Exporting a Control.
The following example demonstrates how to print a TreeList with the TreeList.Print method, and show its print preview with the TreeList.ShowRibbonPrintPreview method.
The image below shows the Preview window for a sample Tree List control.
The print commands provided by this window are displayed using a Ribbon UI.
using DevExpress.XtraTreeList;
// ...
private void ShowTreeListPreview(TreeList treeList) {
// Check whether the Tree List can be previewed.
if (!treeList.IsPrintingAvailable) {
MessageBox.Show("The Printing Library is not found", "Error");
return;
}
// Open the Preview window.
treeList.ShowRibbonPrintPreview();
}
private void PrintTreeList(TreeList treeList) {
// Check whether the Tree List can be printed.
if (!treeList.IsPrintingAvailable) {
MessageBox.Show("The Printing Library is not found", "Error");
return;
}
// Print.
treeList.Print();
}
Imports DevExpress.XtraTreeList
' ...
Sub ShowTreeListPreview(ByVal treeList As TreeList)
' Check whether the Tree List can be previewed.
If Not treeList.IsPrintingAvailable Then
MessageBox.Show("The Printing Library is not found", "Error")
Return
End If
' Opens the Preview window.
treeList.ShowRibbonPrintPreview()
End Sub
Sub PrintTreeList(ByVal treeList As TreeList)
' Check whether the Tree List can be printed.
If Not treeList.IsPrintingAvailable Then
MessageBox.Show("The Printing Library is not found", "Error")
Return
End If
' Print.
treeList.Print()
End Sub | https://documentation.devexpress.com/WindowsForms/5708/Controls-and-Libraries/Tree-List/Feature-Center/Miscellaneous/Print-TreeList/Printing-Basics | CC-MAIN-2019-43 | en | refinedweb |
Opened 7 months ago
Closed 6 weeks ago
#7358 closed change (duplicate)
Remove ext.* namespace
Change History (1)
comment:1 Changed 6 weeks ago by greiner
- Resolution set to duplicate
- Status changed from new to closed
Note: See TracTickets for help on using tickets.
Closing this ticket in favor of ui#361. | https://issues.adblockplus.org/ticket/7358 | CC-MAIN-2019-43 | en | refinedweb |
SYNTAX
C Syntax
#include <mpi.h> int MPI_Type_create_f90_real(int p, int r, MPI_Datatype *newtype)
Fortran Syntax
INCLUDE 'mpif.h' MPI_TYPE_CREATE_F90_REAL (P, R, NEWTYPE, IERROR) INTEGER P, R, NEWTYPE, IERROR
C++ Syntax
#include <mpi.h> static MPI::Datatype MPI::Datatype::Create_f90_real(int p, int r)
INPUT PARAMETERS
- p
- Precision, in decimal digits (integer).
- r
- Decimal exponent range (integer).
OUTPUT PARAMETERS
- newtype
- New data type (handle).
- IERROR
- Fortran only: Error status (integer).
DESCRIPTIONThis function provides a way to declare KIND-parameterized REAL MPI datatypes. The arguments are interpreted in a similar fashion to the F90 function SELECTED_REAL_KIND. The parameters p and r must be scalar integers. The argument p represents the required level of numerical precision, in decimal digits. The r parameter indicates the range of exponents desired: the returned datatype will have at least one exponent between +r and -r (inclusive).
Either p or r, but not both, may be omitted from calls to SELECTED_REAL_KIND. Similarly, either argument to MPI_Type_create_f90_real may be set to MPI_UNDEFINED.
NOTESIt is erroneous to supply values for p and r not supported by the compiler.. | http://manpages.org/mpi_type_create_f90_real/3 | CC-MAIN-2021-21 | en | refinedweb |
Media Embed Base
This plugin is a base of the Media Embed and Semantic Media Embed plugins. It exposes a set of tools under the CKEDITOR.plugins.embedBase namespace which can be used to create new media embed widgets.
Read more in the Embedding Media Resources guide.
61,775base';
- Download and configure all its dependencies, too.
Add-on dependencies | https://ckeditor.com/cke4/addon/embedbase | CC-MAIN-2021-21 | en | refinedweb |
Python NumPy MCQs
11. Numpy developed by?
View Answer
Explanation: Numpy developed by Travis Oliphant.
12. Which of the following Numpy operation are correct?
View Answer
Explanation: Using Numpy, a developer can perform all operations.
13. The basic ndarray is created using?
View Answer
Explanation: It creates an ndarray from any object exposing array interface, or from any method that returns an array : numpy.array(object, dtype = None, copy = True, order = None, subok = False, ndmin = 0).
14. What will be output for the following code?
import numpy as np
a = np.array([1, 2, 3,4,5], ndmin = 2)
print a
View Answer
Explanation: The output is as follows : [[1, 2, 3, 4, 5]]
15. What is the syntax for dtype object?
View Answer
Explanation: A dtype object is constructed using the following syntax : numpy.dtype(object, align, copy)
16. Which of the following function stacks 1D arrays as columns into a 2D array?
View Answer
Explanation: column_stack is equivalent to vstack only for 1D arrays.
17. Which of the following statement is true?
View Answer
Explanation: The array object returned by __array_prepare__ is passed to the ufunc for computation is true
18. Which of the following set the floating-point error callback function or log object?
View Answer
Explanation: setter sets how floating-point errors are handled.
19. What will be output for the following code?
import numpy as np
dt = np.dtype([('age',np.int8)])
a = np.array([(10,),(20,),(30,)], dtype = dt)
print a['age']
View Answer
Explanation: The output is as follows : [10 20 30]
20. What is the range of uint32 data type?
View Answer
Explanation: uint32 : Unsigned integer (0 to 4294967295) | https://letsfindcourse.com/data-science/python-numpy-mcq | CC-MAIN-2021-21 | en | refinedweb |
Generate beautiful invoices using Python.
Project description
Invogen
InvoGen is a package to generate beautiful invoices using Python.
Getting Started
To install InvoGen, simply run
pip install invogen
Using InvoGen
InvoGen is easy to use! In the command prompt or in a file type:
from invogen import * foobar_inc = Customer("test", name="Foobar Inc.") invoice = Invoice(foobar_inc) invoice.add_entry(InvoiceEntry( id_code="Test01", description="Some entry item", rate=5, quantity=1, )) invoice.shipping = 3
You can get a printout of your invoice like this:
>>> print(invoice) Invoice for Foobar Inc. (test) | ID | Description | Rate | Quantity | Amount | +--------+----------------------+----------+----------+----------+ | Test01 | Some entry item | 5.00 | 1 | 5.00 | +--------+----------------------+----------+----------+----------+ Sub-total: 5.00 Shipping: 3.00 Discount: -0.00 +---------------------+ Total: 8.00
To generate a PDF invoice using the default LaTeX template, use
template = LatexTemplate("default.tex") template.to_pdf(invoice)
N.B. To use LaTeX templates, you will have to have LaTeX installed. You can find out how to install LaTeX for your system here.
Documentation
Documentation can be found on Read the Docs
The docs are built with Sphinx and autodoc. To build the docs as html yourself, use
cd docs make html
Testing
The tests are in
/test.
To run the tests with coverage, use
pytest
Contributing
Please feel free to fork and open a pull request if you would like to change something.
The dependencies can be installed using pip and
requirements.txt or Pipenv and the
Pipfile.
More templates would be especially welcome!
Authors
License
This project is licensed under the MIT License - see the LICENSE file for details.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/invogen/ | CC-MAIN-2021-21 | en | refinedweb |
International shipping Secure payment
No products
Prices are tax included
9LCDTFT3.2AR
New
An Arduino Shield - LCD 3.2" Display Shield for Arduino Mega
0 Item Items
This product is no longer in stock
Warning: Last items in stock!
Availability date:
Free shipping over R1000 only for standard courier and within South Africa
LCD 3.2" Display for Arduino Mega
This TFT 3.2 Inch LCD display support 480x320 pixel resolutions. The display use the ILI9481 graphic controller. The module includes the 5V-3.3V power conversion circuit and no additional level conversion circuitry is required. This Module can be inserted directly into the Arduino Mega2560 Board. Additional power to the Arduino Mega is advised, but we have run the display just off the USB port in our lab without a problems. We try to keep the cost down so this display don't support touch.
Quick Speck
Display installed on Arduino Mega
How to setup
Demo 1 - Drawing a Red block 50 x 50 pixels
#include <UTFT.h> UTFT myGLCD(CTE32HR,38,39,40,41); //Make sure CTE32HR display driver is used void setup() { myGLCD.InitLCD(); // Setup the LCD myGLCD.InitLCD(); myGLCD.clrScr(); } void loop() { myGLCD.setColor(255, 0, 0); myGLCD.fillRect(0, 0, 50, 50); }
Resources
No customer reviews for the moment. | https://www.diyelectronics.co.za/store/shields/1194-lcd-32-display-for-arduino-mega.html | CC-MAIN-2021-21 | en | refinedweb |
Like
Total Posts
Correct Reply
Like
Total Posts
Correct Reply
17-04-2019
Hi Team,
I have created one simple servlet using resourceTypes ,
Below is my code snippet "
@Component(service=Servlet.class ,property={ Constants.SERVICE_DESCRIPTION + "=Simple Medicine Servlet",
"sling.servlet.methods=" + HttpConstants.METHOD_GET, "sling.servlet.paths=" + "/bin/mySimpleservlet1",
"sling.servlet.resourceTypes=" + "p1App/components/structure/samplePage" , "sling.servlet.selectors=" + "groups"})
public class ResourceServlet extends SlingSafeMethodsServlet{
private static final long serialVersionUID = 1L;
@Override
protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response)
throws ServletException, IOException {
PrintWriter out = response.getWriter();
out.println("Servlet is called by resourceType!!!!!");}
}"
Case 1: By Path
When i hit the URL localhost:4502/bin/mySimpleservlet1 on browser getting output Servlet is called by resourceType!!!!!
& I am looking in Sling Servlet Resolver , by Path is visible.
To call above servlet i wrote Ajax call as well ,
$.ajax({
type: 'GET',
url: 'bin/mySimpleServlet1',
data:{"pageurl":"/content/p1App/mypage/jcr:content/par/mastercomponent/medicine"},
success: function(msg){
console.log(msg); }
});
Case 2 : By resourceTypes
When i hit the URL localhost:4502/content/mypage.groups.html on browser getting output Servlet is called by resourceType!!!!!
& I am looking in Sling Servlet Resolver resourceTypes is not visible.
a) Can we see th servlet by resourceTypes is resolved in Sling Servlet Resolver ?
b) Can you please tell what to mention in ajax call to get servlet using resourceTypes?
Also let me know if you need more details.
Thanks in advance.
Regards,
Pavan | https://experienceleaguecommunities.adobe.com/t5/ratings/ratingdetailpage/message-uid/311893/rating-system/forum_topic_metoo | CC-MAIN-2021-21 | en | refinedweb |
I dont understand the appearent discrepency in the treatment of the variabe x, y, and z.
Why y isn't treated as x and z?
#include <stdio.h> #include <string.h> int main() { char result[100] = "Philippe Dupont 30"; char x[50]; char y[50]; int z; /*We use sscanf to give a value to the three variables x, y and z. the two first are strings and don't need &.*/ sscanf(result, "%s%s%d", x, y, &z); /*Printing the value of the variables works fine.*/ printf("%s\n", x); printf("%s\n", y); printf("%d\n", z); /*But when I want to print a string in which the variables are, the variable y output is an address, not as for x and z*/ printf("My first name is %s \n my last name is %d \n and I am %d years old\n", x, y, z); return 0; } /*OUTPUT: Philippe Dupont 30 My first name is Philippe my last name is -478321712 and I am 30 years old */ | https://www.daniweb.com/programming/software-development/threads/520976/behevior-of-sscan-function | CC-MAIN-2021-21 | en | refinedweb |
Setup:
- Add a better theme
- Autodoc
- Autobuild
Getting started
Using sphinx is pretty easy:
Install with pip:
pip install sphinx
Run the quickstart:
sphinx-quickstart
Notes:
- I like to set the root path for documentation to:
docs. The rest of this tutorial will assume that you have also installed your sphinx docs into a docs folder
just like that .. you have your documentation setup. You can build your docs by running:
make html
from your
docs folder
Source: First Steps with Sphinx
Add a better Theme
I don't really like the default theme, I think the one over at ReadTheDocs is much better.
Adding it is a breeze:
pip install sphinx_rtd_theme
Then update your
conf.py file (
docs/source/conf.py):
Find
html_theme = 'default' and replace this with:
import sphinx_rtd_theme html_theme = "sphinx_rtd_theme" html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
Rebuild your docs with
make html, and you should see a much prettier looking version of your docs
Autobuild
It can be quite annoying to have to keep updating the docs everytime you make a change, it would be nice if it could auto-build. Fortunately, there's a plugin for that:
pip install sphinx-autobuild
Now, from inside our
docs folder, we can run autobuild with:
sphinx-autobuild source build/html
This will load up your docs on a server 127.0.0.1:8000. If you have the Chrome Live Reload plugin installed, it will even refresh your browser everytime you change your files.
Note:, if you want to run on a different port, use the
-p switch. e.g.:
sphinx-autobuild source build/html -p3000
Source: sphinx-autobuild
Bonus: publish your docs automatically with Jenkins
To do this, you need to install two plugins:
- Shining Panda plugin - gives you python virtualenvs
- HTML Publisher Report
Add the following to a virtualenvbuilder step:
pip install sphinx pip install sphinx_rtd_theme cd docs && make html
Add a post build action of type: "publish html report". Set:
- Directory:
docs/build/html/
- Index page:
index.html
Now, if you build, you should see a report attached to your job dashboard. | https://blog.toast38coza.me/documenting-your-project-with-sphinx/ | CC-MAIN-2021-21 | en | refinedweb |
Monthly Dev Update #3 | Q1 Core Development Review
ETC Labs Core launched in January to support and move the Ethereum Classic ecosystem forward. In the first few months, we reached important milestones with ETC-ETH compatibility, vital data analytic tooling, fundamental specifications to improve the DApp development environment, and significantly grew the team with prominent developers in the blockchain space.
Team
- Core dev. team size up 75%
The team began with Constantine Kryvomaz, Meowbits, Michael Collison, Mike Lubinets, Shane Jonas, Stevan Lohja, and Zachary Belford . We’re proud to have on-boarded additional developers since, including Alan Li, Devon Wesley, Jake Lang, Talha Cross, Zac Mitton, and Zane Starr.
Constantine, Meowbits, and Talha form our client team with Meowbits being our lead client developer. Together, they are contributing network analytic tooling, supporting Classic Geth, Multi-Geth, and championing network upgrades.
Alan Li, Jake Lang, Michael C., and Mike L. form our EVM/ Compiler team with Michael C. our lead compiler developer. They are driving the ETC JIT Compiler and EVM LLVM projects which will dramatically improve EVM and smart contract execution performance.
Shane and Zachary Belford co-lead the tooling team with Devon, Mitton, Zane as our new DApp tooling developers. The DApp tooling team are working on projects to support the DApp developer environment including the OpenRPC Specification which is a game changing innovation for how peer to peer applications communicate with each other and the blockchain.
Stevan Lohja is our team contact and contributing web, documentation, and team coordination. Stevan is working on projects to advocate education on our technologies and developer documentation.
First-Quarter 2019 Achievements
Client Team
We’ve proposed ECIP-1054 upgrade, code named Atlantis, which has tremendous support throughout the community. The specification contains proposed test-net and main-net target block heights, but there needs to be more discussion with client developer groups at this time. The motivation of the ECIP-1054 upgrade fork is to enable maximum ETC-ETH compatibility and performance improvements.
-.
Client team has remained focused on the task of providing high quality network-driving software, empowering developers to build decentralized and peer to peer applications. In 2019 Q1, we’ve addressed a number of issues alongside achieving this goal. Early in Q1 we had a double spend attack in the form of a 51% mining attack. We responded with monitoring tools to help users of the network adjust number of confirmations accordingly.
- Completed an open source network supervisor to monitor network distribution in light of 51% attack.
- Completed an ELK stack configuration for Geth clients.
EVM/ Compiler Team
Sputnik-VM
- Implemented a versatile Dynamic Path API that has feature-wise configuration from Geth clients.
- Implemented integration layer from Multi-Geth EVM.
- SputnikVM passes all ETH test-suit hard-forks. Enough for enabling Atlantis, but more testing is required.
- Repo housekeeping (CI config on Jenkins. Rust code formatting enabled by default. Code updated for Rust 2018 edition)
- Implemented Rust bindings from EVMC API and began assessing compatibility concerns for SVM.
Just-In-Time Compiler (JIT)
- Completed foundational subsystems: gas metering, exception handling, and runtime manager (Unit test for all subsystems).
- Completed phase 1 of the external interface subsystem (function signature provider)
- Implemented a wrapper API for constructing LLVM type declarations inline
Unexpected:
- Had to write attribute and intrinsic managers due to lack of support in inkwell and llvm-C API
- Found a leak related to LLVM context deallocation
- Began work on refactoring the JIT to have clearer lifetime model that doesn’t depend on singletons
LLVM EVM Backend
- Designed the LLVM EVM backend pipeline and workflow (Including stackify pass which translates LLVM virtual registers to stack operations).
- Implemented preliminary LLVM code generator.
- Designed EVM code generator optimization framework.
Tooling Team
Etherlog
- Completed an initial ELK-based logging setup that will run an ethereum client, ElasticSearch, Logstash, and Kibana It preloads some dashboards that are prefect for monitoring the health of the ETC network.
OpenRPC
In Q4 2018 we identified the there was a strong need for high level software quality at the base layer for most application developers whom.
- Released OpenRPC Specification 1x
In addition to ECIP-1053 for Ethereum Classic, it can level up the entire ecosystems tooling so we’ve contributed OpenRPC improvement proposals to Bitcoin and Ethereum:
Mock Server
- Completed a Mock Server to provide a JSON-RPC backend that will respond to methods defined in an OpenRPC document.
npm install -g @open-rpc/mock-server
open-rpc-mock-server -s \
-s <OpeRPC Document Reference>
This gives a fully functioning server to test against.
Generator Client
- Completed a generator client-sdk in (eventually) any language.
- Currently supports Rust, TypeScript, and JavaScript
Given an OpenRPC document, you may generate the client as simpls as:
npm install -g @open-rpc/generator-client
open-rpc-generator-client \
-s <OpenRPC Document Reference>
Playground
- Completed web IDE for OpenRPC
In-browser editor coupled with OpenRPC Meta schema and docs-react to provide interactive documentation / OpenRPC document editing experience. You can try it our at.
Docs React
- Completed React Docs Component for OpenRPC documents
Docs React is a react component that will render documentation for a given OpenRPC document.
To use, simply:
npm install –save @open-rpc/docs-react
Then inside your React app:
import React from ‘react’;
import ReactDOM from ‘react-dm’;import Documentation from “@open-rpc/docs-react”;import {petstore} from “@open-rpc/examples”;
ReactDOM.render(<Documentation schema={petstore} />, document.getElementById(“root”));
Jenkins
- Completed Jenkins setup allowing to provide builds of our tools for multiple platforms while working locally on windows, linux, and OSX. It also includes a terraform configuration for easy deployment to AWS.
git clone
cd jenkins-vagrant
vagrant up
# or
terraform up
Ethash Client Setups
Ready to use miner clients. By simply editing the `start_miner.bat` file with a desired pool server and payment address, a miner can easily begin mining ETC.
PhoenixMiner.exe -pool <INSERT POOL SERVER> -worker Rig001 -wal <INSERT PAY ADDRESS> -pass x -retrydelay 2
Forward Quarterly Goals
Q2
Client:
- Contribute to service discovery (OpenRPC) implementation to Multi-Geth.
- Analyze implications of EWASM
EVM:
- Release SVM versions 0.11 and 0.12
- Sputnik-VM-Dev improvements; Updated to work with latest SVM, run integration tests, and experiment with ‘miri’ tests runtime.
- Stabilize EVMC binding and prototyping SVM support for EVMC.
JIT:
- Complete external (sload/store, etc), memory, stack, and 256-bit arithmetic sub-systems.
- Begin main compiler code generation.
- Implement a helper subsystem for external callbacks that don’t require blockchain access.
LLVM EVM:
- Implement the remaining components to make LLVM framework work.
- Implement EVM optimization in LLVM.
- Integrate with contract language frontend (such as Vyper or Solidity)
- Reach at least 90% of performance of Solidity compiler.
Tooling:
- Complete service runner.
- Contribute to OpenRPC adoption in ETC clients.
- Infrastructure progress for Jade DApp framework.
Education:
- Launch open source dev portal for developer resources and documentation.
Q3
Client:
- Atlantis fork upgrades.
EVM:
- Atlantis fork upgrades.
JIT:
- Complete JIT.
LLVM EVM:
- Continued progress.
Tooling:
- Multi-network Explorer
- Smart contract deployment tool.
ETC Labs Core in the Media
ETC Labs is working on the Atlantis hard fork proposal to introduce compatibility to the Ethereum Virtual Machine…bitcoinexchangeguide.com | https://medium.com/@stevan.blog/monthly-dev-update-3-q1-core-development-review-862e285599bd | CC-MAIN-2021-21 | en | refinedweb |
Creating a licensing system for paid apps in Swift
The easiest way is to create a paid macOS app is to simply put a price tag in the App Store, but it's a common practice nowadays to provide a free download that can later be upgraded to a pro version. In this article, we'll use our knowledge of serial numbers and asymmetric cryptography to create license files that cannot be reverse-engineered and use them to activate an app's premium features.
The safest way to include a "pro" version in your app is to have a backend that is capable of providing content to premium users, but not every app falls into this category. If what you're developing is an offline productivity tool, then you might not have a backend at all. The easiest alternative is this case is to add an in-app purchase for the pro version, but for macOS apps, you might want to not use the App Store at all. In these cases, you'll have to ship your own licensing system that is capable of validating and upgrading an instance of the app.
A simple way to achieve this is to provide serial numbers -- a system in which a user of the app can purchase one of these numbers and input it into the app to unlock its premium features. But how do you know the code is legitimate? Is it possible to confirm that the code was 100%, without a shadow of a doubt, provided by you, and not faked by someone who reverse-engineered the logic?
Serial numbers in the past
In the mid-2000s, serial numbers were a very common way to validate purchases, and every software/game you bought from a store would come with a serial number in the box which you had to input when installing it to prove that you were in possession of a legitimate copy of the software. However, serial numbers at the time were also very flawed. The validation logic was often some sort of hash function that was calculated on top of the serial number, and because this all happened offline, it wasn't very hard for a hacker to decompile the software and find out what this logic was. In fact, decompilers like Hopper nowadays are so good that they can even convert the decompiled assembly code into a pretty readable pseudo C code, making it pretty easy to figure out how an app works. Hackers would then use this logic to create keygens that could produce fake serial numbers that these apps would naively accept as being legitimate. If you ever pirated anything from the time, you definitely used one of these!
Fortunately, with modern cryptography, the serial number system has been since replaced by a much more secure system that is practically impossible to break. Let's see how it works and how to implement one in Swift.
Creating an unbreakable(*) licensing system
* (Note: When I say unbreakable, I mean that it's impossible for someone to create fake licenses without modifying the app itself. If the validation process is placed in the client and a hacker decompiles your binary, they can simply disable the validation process and distribute a cracked version of your app. If you want to be truly unhackable, you should only serve content from a backend.)
As we've seen above, the biggest flaw in serial number systems is that the validation logic could simply be reverse engineered and reproduced to generate fake keys that the apps would think are legit. This seems like a dead-end scenario because we absolutely can't prevent the app from being reverse engineered, but we actually can prevent the reverse-engineered logic from being reproduced.
You might think this doesn't make sense, because if they know how the app validates a key, then you surely have all the tools you need to create a fake one, right? If you thought that, you're actually correct. But the thing is not that it's impossible for someone to reproduce it, it's that it's technically unfeasible.
The system we'll implement in this article is called a digital signature, and it works around asymmetric encryption (private/public key). Digital signatures work by providing some arbitrary data (for example, the name of the person who purchased the license) and a serial number, which we'll now call a signature. This signature was created by encrypting that data with one of the keys, and by inputting both the data and the signature into the app, the app can validate it by decrypting the signature with the other key of the pair and checking that the resulting value is equal to the accompanying data.
There's only one additional requirement we'll add to this system: Instead of encrypting the raw data, we'll instead encrypt a hash of it (which we'll call a digest). This is mainly for performance reasons since asymmetric encryption is meant to be used for small pieces of data, but also to prevent a security issue we'll see later on.
//--- How Digital Signatures Work ---
// Data: An arbitrary piece of data, like the user's name.
//--- Backend ---
// User purchases a license through a website
let digest = SHA512(userName)
let userSignature: String = encrypt(digest, withKey: privateKey)
return userSignature // Send the signature to the user
//--- App ---
// User will activate the app's premium features by validating a signature (the license)
let digest = SHA512(userName)
let result: String = decrypt(userSignature, withKey: publicKey)
if result == digest {
print("Pro version unlocked!")
} else {
print("Invalid license!")
}
If the validation succeeds, then the signature is absolutely legitimate. As you might know from asymmetric encryption, something encrypted by one key can only be decrypted by the other, so if you decrypt a value and it matches what you expected, then that value has 100% been generated by the other key of the pair.
The security of digital signatures comes from the fact that you can make it impossible for a hacker to have access to both keys. The idea is that you can ship your app with one of the keys (the "public key") so you can validate signatures, but the generation of these signatures will happen privately and safely inside your backend when a user purchases a license. Because the key that generates the signatures (the "private key") is never exposed to the outside world, a hacker would never be able to intercept it, making the creation of fake licenses impossible unless they kidnap you or spend 0.65 billion billion years trying to brute-force all possible combinations.
Can we break digital signatures by reversing the process?
Let's use intuition to validate the safety of a digital signature. We know the following:
PrivateKey + Data = Signature
PublicKey + Signature = Data
A hacker can't intercept the private key, so they can't generate a valid signature for a certain piece of data. However, they can definitely extract the public key from your app's binary. What if they input something random as the signature and attempt to decrypt it with the key? What do you think the result will be?
PublicKey + MyRandomValue = X
The result, X, will be the arbitrary data value that would cause the private key to generate this signature!
PrivateKey + X = MyRandomValue
PublicKey + MyRandomValue = X
Thus, even though a hacker doesn't know what the private key is, they can still find the data that matches a given signature by reversing the process. This is precisely why we need to first hash the data with a strong algorithm like SHA-512 -- even though hackers can easily find the X that matches a particular signature, that X will simply be the digest of the original data. The app will not validate that signature unless they figure out what data generated that digest, and unless they can literally survive the end of the universe, they probably won't.
On a different note, the advancements being made to computers (especially quantum ones) are slowly making this possible, with researches suggesting that brute-forcing algorithms like SHA-256 might become feasible sometime around 2030. However, by that time, you'll have hopefully already have migrated to whatever the standards of 2030 for security would be.
Implementing digital signatures in Swift
Before implementing a validation system, let's first define what our license key/validation will look like. You can use anything as your license keys, as long as it contains the data that we'll use to create the digest and the resulting signature. In this case, let's pretend that we have a
myApp.license file that is essentially a JSON:
{
"name": "Bruno Rocha"
"signature": "AUmrQ3cK+bZOjBPnrGV/3KWiTddu50zWvsas1tMlepc2zf="
}
In our app, we'll provide fields where the user can input this data.
Generating Signatures
For this example, we'll assume that both the app and backend are written in Swift for simplicity.
The first thing we need is a pair of encryption keys. It's possible to generate keys in Swift, but since the private key will be stored in the backend we'll use OpenSSL for simplicity. In this case, I want to generate a pair of 2048 bit RSA keys:
// Generate a 2048 bit RSA private key
openssl genrsa -out my_private_key.pem 2048
// Extract public key out of it
openssl rsa -in my_private_key.pem -outform PEM -pubout -out my_public_key.pem
If you open these files with a text editor, you'll be able to extract the base64 representation of the keys that we'll need for the rest of this tutorial.
Let's now assume that we want to generate a license file for someone who just bought a pro version of our app. We'll use the name of the user to create our digest and encrypt it to create the signature. Luckily for us, the Security framework has tons of built-in APIs and algorithms for digital signatures.
Let's start by creating an instance of our private key. Remember, this is supposed to be some backend code that nobody has access to. Do not reference your private key in the actual app! It's perfectly fine to ship your public key as a hardcoded string in your app, but never expose your private key to the outside world. If you suspect your private key has leaked, invalidate the current public key, generate a brand new pair of keys and restart the process.
import Security
func getPrivateKey(_ base64PrivateKeyString: String) throws -> SecKey {
let data = Data(base64Encoded: base64PrivateKeyString, options: [])!
let options: [String: Any] = [kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
kSecAttrKeyClass as String: kSecAttrKeyClassPrivate,
kSecAttrKeySizeInBits as String: 2048]
var error: Unmanaged<CFError>?
guard let privateKey = SecKeyCreateWithData(
data as CFData,
options as CFDictionary,
&error
) else {
throw error!.takeRetainedValue() as Error
}
return privateKey
}
To create a signature, we can call Security's
SecKeyCreateSignature method:
func sign(userName: String, withKey privateKey: SecKey) throws -> String {
let data = userName.data(using: .utf8)!
var error: Unmanaged<CFError>?
guard let signature = SecKeyCreateSignature(
privateKey,
.rsaSignatureMessagePKCS1v15SHA512,
data as CFData,
&error
) as Data? else {
throw error!.takeRetainedValue() as Error
}
return signature.base64EncodedString()
}
This method takes an algorithm, and you might notice that there are several options. They simply represent different encryption methods, and in this case we'll want to select
.rsaSignatureMessagePKCS1v15SHA512. What this long name means is that we'll take a message (the user's name, in this case), create a digest using SHA-512 (our hashing algorithm of choice for this article) and encrypt it with an RSA asymmetric key (the one we just created) that follows the basic definitions of the Public Key Cryptography Standards.
The other algorithms are simply variations of this format. For example, if you prefer hashing the data yourself, you could use the series of enums that are named
SignatureDigest instead of
SignatureMessage to indicate that the data is already hashed. You can use these variations to use different hashing algorithms, and even different forms of asymmetric encryption like elliptic curve keys (ECDSA).
If your backend isn't written in Swift, it's likely that your programming language of choice has its own APIs for digital signatures. In the event that it doesn't, you can reproduce this algorithm by simply hashing the data yourself and encrypting the resulting digest.
Once the signature is successfully generated, we can create a license key in the format we created above and return it to the user.
func createLicense(forUser userName: String) throws -> String {
//// Remember: This is private backend code. Do not leak your private key!
//// PS: If you generated the key with OpenSSL, you need to remove the newlines for the key creation to work.
let privateKeyStringBase64 = "" // Add your key's base64 here
let privateKey = try getPrivateKey(privateKeyStringBase64)
////
let signature = try sign(userName: userName, withKey: privateKey)
return """
{
"name": "\(userName)",
"signature": "\(signature)"
}
"""
}
With my private key, the result of calling
createLicense(forUser: "Bruno Rocha") looked like this:
{
"name": =="
}
Now, with possession of the license file, this user can use this JSON to activate their copy of the app.
Validating a digital signature in the app
To validate the user's license in the app, we must decrypt the signature and check that the result is equal to the hash of our data of choice (the user's name). Like with the creation of the signature, Security provides the
SecKeyVerifySignature API to make this easy for us!
But before doing that, we need to create an instance of our public key. Unlike before, the logic from now on should live inside the app:
func getPublicKey(_ base64PublicKeyString: String) throws -> SecKey {
let data = Data(base64Encoded: base64PublicKeyString, options: [])!
let options: [String: Any] = [kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
kSecAttrKeyClass as String: kSecAttrKeyClassPublic,
kSecAttrKeySizeInBits as String: 2048]
var error: Unmanaged<CFError>?
guard let publicKey = SecKeyCreateWithData(
data as CFData,
options as CFDictionary,
&error
) else {
throw error!.takeRetainedValue() as Error
}
return publicKey
}
With possession of the key, we can validate the signature by inputting it and the data into
SecKeyVerifySignature. The most important thing here is the algorithm of choice: It must match the one that created the signature.
func validateLicense(userName: String, signature: String, publicKey: SecKey) -> Bool {
let message = userName.data(using: .utf8)!
guard let signatureData = Data(base64Encoded: signature) else {
print("The signature isn't a base64 string!")
return false
}
var error: Unmanaged<CFError>?
if SecKeyVerifySignature(
publicKey,
.rsaSignatureMessagePKCS1v15SHA512,
message as CFData,
signatureData as CFData,
&error
) {
return true
} else {
if let error = error {
print(error.takeRetainedValue())
}
return false
}
}
This function will return true if the license is valid, and print an error otherwise.
Here's an example of this being used to check the previous signature. Try copying and pasting this to see the result yourself!
// It's not a problem to hardcode your public keys in your app! A hacker won't be able to do anything with them.
let publicKey = try! getPublicKey(
"MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArhBwaepKM5hZA4I/IZJ8oOTCbKMr+H5KZ3W4fx/ISMtZqbL6NJBNDLEqHCF/kA/Af9YbN5kFgQoysB9TzDCGnQMZ6nzMsne8muXklrPx7ApX317ckVVDph59mBNrx4IMYNM7BYCN2dv5RxraNFqHKQ9nDi510OIRHVGnKkulLa3RxGVVpTHs3GYI3rDiT/5a8Oi0Tku77lqeZDe368Kx7jsD8Pgxb+Xz7IQfh/H/xG/q9AfcDYNbmBgDbh/OH1+HF9t66/h7uXLPqEgMhkoc5jibd1h/7jFNAoMlB3o97KKGEAQjM61i5/Q1WpK5e1X4OIiFD+KpbERUwO1RvLToSwIDAQAB"
)
let isLicenseValid = validateLicense(
userName: ==",
publicKey: publicKey
)
if isLicenseValid {
print("The license is valid!")
// activatePremiumFeatures()
} else {
print("The license is not valid!")
}
Try changing the user name or the signature to see what happens! Because I never shared my private key with you, you will never be able to create a different license that passes the validation. I, however, can create as many of them as I wish! Here are some other ones:
{
"name": "SwiftRocks",
"signature": "mKb5hIV2/bWkus0VWNEWUcPEoFDcRS6Uv6wpWpbekCSCQbfusOW1mhwntQTSLhIdL+Wl6FK/upW1ztGyij5Y2EE8LjUU0a7Fa2ItdwV8QVhDb/J8ftjpc7U3H2KV8khL61R6QIVzh4aQ1hxjQ0Zs2aaN7dvjprq8gfbBe4rxnKTyllAoXsKG7aCqFgGWdMQVq3wNtiILCh1MnUjk/yRt5fa4vv3l20xHfjPindPnxhTspNCtghuGcgdon5GaHKvNtVYQcsSx7PXvvQ1wpKpDT6juohS/Q+Jz8D4tikgThuFBDoExOXIlN5ZbQJwgNugwWmS8mdnpaw+cbOI88Fm/AA=="
}
{
"name": "Can't hack me, eh?",
"signature": "Dq7EfDURo6mj/0Fk7XAnDt04WCDxXBQYJAdQMQh3fVV4K4UE4AaCGAv8XX9Mo/SKrnD54VU9oSpH3XOQKKBkLKcG59+GatKILO9Os0Ikf7/PiweaTmrtRwnY24o8PU7R3jlj+ces8A8KwZkw2up/XdIz3wS6TzPNGEq+oy38mI7sZuG7zeEKVwFsZPuSaK13zIH50jlhIndYVx/MVhSYbdHvf6mkF2n84QmwUEmQbc1ZGriUozlxNiZ+TxjeFywUvCfzidd0OR7j78kb32WgMsb7osAk1p4BSV9LTpFAOaJzmF2QiiVNr/UjgBxx5KkrXMxmznb4/wJPi902iE1IaA=="
}
{
"name": "Unlimited power!",
"signature": "rFT++9NEzcCsoxy0V8RRd7VOyO2aKfAQR0Cfwl1uLlbxp2ibRmZBRaAVWkCRw0YLOoNSb/VYkJVW++y04k+KWSq+X7QJcKpRfflZvyJCQczt8EVbYAcJrVSLyTpFVscxviwsuSFkVKsVzlJrfob/3+7YDg4hnTlBd1fvntzqUNomC0mzmyAuWcZs+EwVzHyQ7aGCnbn3tgbDq4W9TsKRjfEJBQOYrKX0WvWNpRUl5ScU5LL5wxE1Pt76CZUtBynrDlJHbRf0pNbWAdToFLUz6gJ+OqzeoUt/26ieEykfG0kwhLHKd8+N67nNWb3HuF5CiRkUoqC9nynKs4mUGmup0w=="
}
We can now safely activate our app's premium features for these users.
Conclusion
As a final note, remember what was said in regards to the meaning of uncrackable in this context. Unless your app is serving 100% of its content from a backend, it's impossible to make it uncrackable. In this case, a hacker could simply edit the assembly of the app to invert the
if that activates the premium features. Like a physical lock, security measures in macOS apps are simply deterrents, and you should keep that in mind when implementing features like this.
| https://swiftrocks.com/creating-a-license-system-for-paid-apps-in-swift?utm_campaign=AppCoda%20Weekly&utm_medium=email&utm_source=Revue%20newsletter | CC-MAIN-2021-21 | en | refinedweb |
tmpfiles.d — Configuration for creation, deletion and cleaning of volatile and temporary files
or
package.conf
.
The second variant should be used when it is desirable to make it
easy to override just this part of configuration.
package-
part.conf ").
The type consists of a single letter and optionally an exclamation mark ("
!")
and/or minus sign ("
-").
The following line types are understood:
f,
f+¶
f will create a file if it does not exist yet. If the argument
parameter is given and the file did not exist yet, it will be written to the file.
f+ will create or truncate the file. If the argument parameter is given, it will
be written to the file. Does not follow symlinks.
w,
w+¶
Write the argument parameter to a file, if the file exists.
If suffixed with
+, the line will be appended to the file.
If your configuration writes multiple lines to the same file, use
w+.
Lines of this type accept shell-style globs in place of normal path names.
The argument parameter will be written without a trailing newline.
C-style backslash escapes are interpreted. Follows symlinks.
d¶
Create a directory. The mode and ownership will be adjusted if specified. Contents of this directory are subject to time based cleanup if the age argument is specified.
D¶
Similar to
d, but in addition the contents of the directory will
be removed when
--remove is used.
e¶
Adjust the mode and ownership of existing directories and remove their contents
based on age.
Lines of this type accept shell-style globs in place of normal path names. Contents of the
directories are subject to time based cleanup if the age argument is specified. If the age argument
is "
0", contents will be unconditionally deleted every time
systemd-tmpfiles --clean is run.
For this entry to be useful, at least one of the mode, user, group, or age arguments must be
specified, since otherwise this entry has no effect. As an exception, an entry with no effect may
be useful when combined with
!, see the examples.
v¶¶
Create a subvolume or directory the same as
v, but assign the
subvolume to the same higher-level quota groups as the parent. This ensures that higher-level
limits and accounting applied to the parent subvolume also include the specified subvolume. On
non-btrfs file systems, this line type is identical to
d.¶
Create the subvolume or directory the same as
v, but assign the
new subvolume to a new leaf quota group. Instead of copying the higher-level quota group
assignments from the parent as is done with
q, already exists, regardless of whether the subvolume already belong to a quota group or not.
p,
p+¶
Create a named pipe (FIFO) if it does not
exist yet. If suffixed with
+ and a file
already exists where the pipe is to be created, it will be
removed and be replaced by the pipe.
L,
L+¶+¶+¶¶
Recursively copy a file or directory, if the
destination files or directories do not exist yet or the
destination directory is empty. Note that this command will not
descend into subdirectories if the destination directory already
exists and is not empty. Instead, the entire copy operation is
skipped. If the argument is omitted, files from the source directory
/usr/share/factory/ with the same name
are copied. Does not follow symlinks.
x¶¶¶
Remove a file or directory if it exists.
This may not be used to remove non-empty directories, use
R for that. Lines of this type accept
shell-style globs in place of normal path
names. Does not follow symlinks.
R¶
Recursively remove a path and all its subdirectories (if it is a directory). Lines of this type accept shell-style globs in place of normal path names. Does not follow symlinks.
z¶
Adjust the access mode, user and group ownership, and restore the SELinux security context of a file or directory, if it exists. Lines of this type accept shell-style globs in place of normal path names. Does not follow symlinks.
Z¶
Recursively set the access mode, user and group ownership,¶
Set extended attributes, see attr(5) for details. The argument field should take one or more
assignment expressions in the form
namespace.
attribute=
value,
for examples see below. Lines of this type accept shell-style globs in place of normal path
names. This can be useful for setting SMACK labels. Does not follow symlinks.
Please note that extended attributes settable with this line type are a different concept
from the Linux file attributes settable with
h/
H, see
below.
T¶
Same as
t, but operates recursively.
h¶
Set Linux file/directory attributes. Lines of this type accept shell-style globs in place of normal path names.¶
Sames as
h, but operates recursively.
a,
a+¶
Set POSIX ACLs (access control lists), see acl(5).+¶
Same as
a and
a+, but recursive. Does not follow
symlinks. for more information on requirements on system user/group definitions... can be used in the "path" and "argument" fields. An unknown or unresolvable specifier is treated as invalid configuration. The following expansions are understood:
Example 1. Create directories with specific mode and ownership/cache/dnf/ - - - 30d
The lock files will be removed during boot. Any files and directories in
/var/cache.
/run/and
/var/run/¶
/var/run/ is a deprecated symlink to
/run/, and
applications should use the latter. systemd-tmpfiles will warn if
/var/run/ is used. | https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html | CC-MAIN-2021-21 | en | refinedweb |
Modifying the data model by using Class Manager
The Common Data Model (CDM) is a set of Configuration Item (CI) classes and relationship classes that ship with BMC CMDB. These CI and relationship classes cover most business scenarios. If these out-of-the-box classes do not cover your business requirements, you must then extend the data model. You can extend the CDM by either creating your own classes or extending an existing class with additional attributes.
Best Practice
Whenever you extend the data model, use your own namespace instead of
BMC.CORE. This prevents your extensions from being overwritten by new classes when you upgrade to a future version of the CDM.
Never modify the core CDM class attributes because upgrades across versions overwrite these customizations. Modifications to the CDM, such as changing the attribute field length in a class are not preserved during upgrades. On the other hand, a new class or an extended class with additional attributes is not overwritten during an upgrade.
Creating or extending a class may have a significant business impact and you must perform these modifications only after careful planning and consideration of all eventualities.
This section contains information about the following tasks and concepts:
Related topics
Modification of your data model
Learning about the common data model
Knowledge Article 000095846: Restoring missing common data model attributes from class hierarchy | https://docs.bmc.com/docs/ac1908/modifying-the-data-model-by-using-class-manager-877695653.html | CC-MAIN-2021-21 | en | refinedweb |
1 2 3 4 5 6 7 Network Working Group T. Berners-Lee 8 Request for Comments: 2396 MIT/LCS 9 Updates: 1808, 1738 R. Fielding 10 Category: Standards Track U.C. Irvine 11 L. Masinter 12 Xerox Corporation 13 August 1998 14 15 16 Uniform Resource Identifiers (URI): Generic Syntax Copyright Notice 27 28 Copyright (C) The Internet Society (1998). All Rights Reserved. 29 30 IESG Note 31 32 This paper describes a "superset" of operations that can be applied 33 to URI. It consists of both a grammar and a description of basic 34 functionality for URI. To understand what is a valid URI, both the 35 grammar and the associated description have to be studied. Some of 36 the functionality described is not applicable to all URI schemes, and 37 some operations are only possible when certain media types are 38 retrieved using the URI, regardless of the scheme used. 39 40 Abstract 41 42 A Uniform Resource Identifier (URI) is a compact string of characters 43 for identifying an abstract or physical resource. This document 44 defines the generic syntax of URI, including both absolute and 45 relative forms, and guidelines for their use; it revises and replaces 46 the generic definitions in RFC 1738 and RFC 1808. 47 48 This document defines a grammar that is a superset of all valid URI, 49 such that an implementation can parse the common components of a URI 50 reference without knowing the scheme-specific requirements of every 51 possible identifier type. This document does not define a generative 52 grammar for URI; that task will be performed by the individual 53 specifications of each URI scheme. 54 55 56 57 58 Berners-Lee, et. al. Standards Track [Page 1] 59 60 RFC 2396 URI Generic Syntax August 1998 61 62 63 1. Introduction 64 65 Uniform Resource Identifiers (URI) provide a simple and extensible 66 means for identifying a resource. This specification of URI syntax 67 and semantics is derived from concepts introduced by the World Wide 68 Web global information initiative, whose use of such objects dates 69 from 1990 and is described in "Universal Resource Identifiers in WWW" 70 [RFC1630]. The specification of URI is designed to meet the 71 recommendations laid out in "Functional Recommendations for Internet 72 Resource Locators" [RFC1736] and "Functional Requirements for Uniform 73 Resource Names" [RFC1737]. 74 75 This document updates and merges "Uniform Resource Locators" 76 [RFC1738] and "Relative Uniform Resource Locators" [RFC1808] in order 77 to define a single, generic syntax for all URI. It excludes those 78 portions of RFC 1738 that defined the specific syntax of individual 79 URL schemes; those portions will be updated as separate documents, as 80 will the process for registration of new URI schemes. This document 81 does not discuss the issues and recommendation for dealing with 82 characters outside of the US-ASCII character set [ASCII]; those 83 recommendations are discussed in a separate document. 84 85 All significant changes from the prior RFCs are noted in Appendix G. 86 87 1.1 Overview of URI 88 89 URI are characterized by the following definitions: 90 91 Uniform 92 Uniformity provides several benefits: it allows different types 93 of resource identifiers to be used in the same context, even 94 when the mechanisms used to access those resources may differ; 95 it allows uniform semantic interpretation of common syntactic 96 conventions across different types of resource identifiers; it 97 allows introduction of new types of resource identifiers 98 without interfering with the way that existing identifiers are 99 used; and, it allows the identifiers to be reused in many 100 different contexts, thus permitting new applications or 101 protocols to leverage a pre-existing, large, and widely-used 102 set of resource identifiers. 103 104 Resource 105 A resource can be anything that has identity. Familiar 106 examples include an electronic document, an image, a service 107 (e.g., "today's weather report for Los Angeles"), and a 108 collection of other resources. Not all resources are network 109 "retrievable"; e.g., human beings, corporations, and bound 110 books in a library can also be considered resources. 111 112 113 114 Berners-Lee, et. al. Standards Track [Page 2] 115 116 RFC 2396 URI Generic Syntax August 1998 117 118 119 The resource is the conceptual mapping to an entity or set of 120 entities, not necessarily the entity which corresponds to that 121 mapping at any particular instance in time. Thus, a resource 122 can remain constant even when its content---the entities to 123 which it currently corresponds---changes over time, provided 124 that the conceptual mapping is not changed in the process. 125 126 Identifier 127 An identifier is an object that can act as a reference to 128 something that has identity. In the case of URI, the object is 129 a sequence of characters with a restricted syntax. 130 131 Having identified a resource, a system may perform a variety of 132 operations on the resource, as might be characterized by such words 133 as `access', `update', `replace', or `find attributes'. 134 135 1.2. URI, URL, and URN 136 137 A URI can be further classified as a locator, a name, or both. The 138 term "Uniform Resource Locator" (URL) refers to the subset of URI 139 that identify resources via a representation of their primary access 140 mechanism (e.g., their network "location"), rather than identifying 141 the resource by name or by some other attribute(s) of that resource. 142 The term "Uniform Resource Name" (URN) refers to the subset of URI 143 that are required to remain globally unique and persistent even when 144 the resource ceases to exist or becomes unavailable. 145 146 The URI scheme (Section 3.1) defines the namespace of the URI, and 147 thus may further restrict the syntax and semantics of identifiers 148 using that scheme. This specification defines those elements of the 149 URI syntax that are either required of all URI schemes or are common 150 to many URI schemes. It thus defines the syntax and semantics that 151 are needed to implement a scheme-independent parsing mechanism for 152 URI references, such that the scheme-dependent handling of a URI can 153 be postponed until the scheme-dependent semantics are needed. We use 154 the term URL below when describing syntax or semantics that only 155 apply to locators. 156 157 Although many URL schemes are named after protocols, this does not 158 imply that the only way to access the URL's resource is via the named 159 protocol. Gateways, proxies, caches, and name resolution services 160 might be used to access some resources, independent of the protocol 161 of their origin, and the resolution of some URL may require the use 162 of more than one protocol (e.g., both DNS and HTTP are typically used 163 to access an "http" URL's resource when it can't be found in a local 164 cache). 165 166 167 168 169 170 Berners-Lee, et. al. Standards Track [Page 3] 171 172 RFC 2396 URI Generic Syntax August 1998 173 174 175 A URN differs from a URL in that it's primary purpose is persistent 176 labeling of a resource with an identifier. That identifier is drawn 177 from one of a set of defined namespaces, each of which has its own 178 set name structure and assignment procedures. The "urn" scheme has 179 been reserved to establish the requirements for a standardized URN 180 namespace, as defined in "URN Syntax" [RFC2141] and its related 181 specifications. 182 183 Most of the examples in this specification demonstrate URL, since 184 they allow the most varied use of the syntax and often have a 185 hierarchical namespace. A parser of the URI syntax is capable of 186 parsing both URL and URN references as a generic URI; once the scheme 187 is determined, the scheme-specific parsing can be performed on the 188 generic URI components. In other words, the URI syntax is a superset 189 of the syntax of all URI schemes. 190 191 1.3. Example URI 192 193 The following examples illustrate URI that are in common use. 194 195 196 -- ftp scheme for File Transfer Protocol services 197 198 gopher://spinaltap.micro.umn.edu/00/Weather/California/Los%20Angeles 199 -- gopher scheme for Gopher and Gopher+ Protocol services 200 201 202 -- http scheme for Hypertext Transfer Protocol services 203 204 mailto:mduerst@ifi.unizh.ch 205 -- mailto scheme for electronic mail addresses 206 207 news:comp.infosystems. 208 -- news scheme for USENET news groups and articles 209 210 telnet://melvyl.ucop.edu/ 211 -- telnet scheme for interactive services via the TELNET Protocol 212 213 1.4. Hierarchical URI and Relative Forms 214 215 An absolute identifier refers to a resource independent of the 216 context in which the identifier is used. In contrast, a relative 217 identifier refers to a resource by describing the difference within a 218 hierarchical namespace between the current context and an absolute 219 identifier of the resource. 220 221 222 223 224 225 226 Berners-Lee, et. al. Standards Track [Page 4] 227 228 RFC 2396 URI Generic Syntax August 1998 229 230 231 Some URI schemes support a hierarchical naming system, where the 232 hierarchy of the name is denoted by a "/" delimiter separating the 233 components in the scheme. This document defines a scheme-independent 234 `relative' form of URI reference that can be used in conjunction with 235 a `base' URI (of a hierarchical scheme) to produce another URI. The 236 syntax of hierarchical URI is described in Section 3; the relative 237 URI calculation is described in Section 5. 238 239 1.5. URI Transcribability 240 241 The URI syntax was designed with global transcribability as one of 242 its main concerns. A URI is a sequence of characters from a very 243 limited set, i.e. the letters of the basic Latin alphabet, digits, 244 and a few special characters. A URI may be represented in a variety 245 of ways: e.g., ink on paper, pixels on a screen, or a sequence of 246 octets in a coded character set. The interpretation of a URI depends 247 only on the characters used and not how those characters are 248 represented in a network protocol. 249 250 The goal of transcribability can be described by a simple scenario. 251 Imagine two colleagues, Sam and Kim, sitting in a pub at an 252 international conference and exchanging research ideas. Sam asks Kim 253 for a location to get more information, so Kim writes the URI for the 254 research site on a napkin. Upon returning home, Sam takes out the 255 napkin and types the URI into a computer, which then retrieves the 256 information to which Kim referred. 257 258 There are several design concerns revealed by the scenario: 259 260 o A URI is a sequence of characters, which is not always 261 represented as a sequence of octets. 262 263 o A URI may be transcribed from a non-network source, and thus 264 should consist of characters that are most likely to be able to 265 be typed into a computer, within the constraints imposed by 266 keyboards (and related input devices) across languages and 267 locales. 268 269 o A URI often needs to be remembered by people, and it is easier 270 for people to remember a URI when it consists of meaningful 271 components. 272 273 These design concerns are not always in alignment. For example, it 274 is often the case that the most meaningful name for a URI component 275 would require characters that cannot be typed into some systems. The 276 ability to transcribe the resource identifier from one medium to 277 another was considered more important than having its URI consist of 278 the most meaningful of components. In local and regional contexts 279 280 281 282 Berners-Lee, et. al. Standards Track [Page 5] 283 284 RFC 2396 URI Generic Syntax August 1998 285 286 287 and with improving technology, users might benefit from being able to 288 use a wider range of characters; such use is not defined in this 289 document. 290 291 1.6. Syntax Notation and Common Elements 292 293 This document uses two conventions to describe and define the syntax 294 for URI. The first, called the layout form, is a general description 295 of the order of components and component separators, as in 296 297 <first>/<second>;<third>?<fourth> 298 299 The component names are enclosed in angle-brackets and any characters 300 outside angle-brackets are literal separators. Whitespace should be 301 ignored. These descriptions are used informally and do not define 302 the syntax requirements. 303 304 The second convention is a BNF-like grammar, used to define the 305 formal URI syntax. The grammar is that of [RFC822], except that "|" 306 is used to designate alternatives. Briefly, rules are separated from 307 definitions by an equal "=", indentation is used to continue a rule 308 definition over more than one line, literals are quoted with "", 309 parentheses "(" and ")" are used to group elements, optional elements 310 are enclosed in "[" and "]" brackets, and elements may be preceded 311 with <n>* to designate n or more repetitions of the following 312 element; n defaults to 0. 313 314 Unlike many specifications that use a BNF-like grammar to define the 315 bytes (octets) allowed by a protocol, the URI grammar is defined in 316 terms of characters. Each literal in the grammar corresponds to the 317 character it represents, rather than to the octet encoding of that 318 character in any particular coded character set. How a URI is 319 represented in terms of bits and bytes on the wire is dependent upon 320 the character encoding of the protocol used to transport it, or the 321 charset of the document which contains it. 322 323 The following definitions are common to many elements: 324 325 alpha = lowalpha | upalpha 326 327" and double-quote (") characters are 552 excluded because they are often used as the delimiters around URI in 553 text documents and protocol fields. The character "#" is excluded 554 because it is used to delimit a URI from a fragment identifier in URI 555 references (Section 4). The percent character "%" is excluded because 556 it is used for the encoding of escaped characters. 557 558" | "#" | "%" | <"> 559 560 561 562 Berners-Lee, et. al. Standards Track [Page 10] 563 564 RFC 2396 URI Generic Syntax August 1998 565 566 567 Other characters are excluded because gateways and other transport 568 agents are known to sometimes modify such characters, or they are 569 used as delimiters. 570 571 1816 <HTML><HEAD> 1817 <TITLE>An example HTML document</TITLE> 1818 <BASE href=""> 1819 </HEAD><BODY> 1820 ... <A href="../x">a hypertext anchor</A> ... 1821 </BODY></HTML> 1822 1823 A parser reading the example document should interpret the given 1824 relative URI "../x" as representing the absolute URI 1825 1826 <> 1827 1828 regardless of the context in which the example document was obtained. 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 Berners-Lee, et. al. Standards Track [Page 33] 1851 1852 RFC 2396 URI Generic Syntax August 1998 1853 1854 1855 E. Recommendations for Delimiting URI in Context 1856 1857 URI are often transmitted through formats that do not provide a clear 1858 context for their interpretation. For example, there are many 1859 occasions when URI are included in plain text; examples include text 1860 sent in electronic mail, USENET news messages, and, most importantly, 1861 printed on paper. In such cases, it is important to be able to 1862 delimit the URI from the rest of the text, and in particular from 1863 punctuation marks that might be mistaken for part of the URI. 1864 1865 In practice, URI are delimited in a variety of ways, but usually 1866 within double-quotes "", angle brackets 1867 <>, or just using whitespace 1868 1869 1870 1871 These wrappers do not form part of the URI. 1872 1873 In the case where a fragment identifier is associated with a URI 1874 reference, the fragment would be placed within the brackets as well 1875 (separated from the URI with a "#" character). 1876 1877 In some cases, extra whitespace (spaces, linebreaks, tabs, etc.) may 1878 need to be added to break long URI across lines. The whitespace 1879 should be ignored when extracting the URI. 1880 1881 No whitespace should be introduced after a hyphen ("-") character. 1882 Because some typesetters and printers may (erroneously) introduce a 1883 hyphen at the end of line when breaking a line, the interpreter of a 1884 URI containing a line break immediately after a hyphen should ignore 1885 all unescaped whitespace around the line break, and should be aware 1886 that the hyphen may or may not actually be part of the URI. 1887 1888 Using <> angle brackets around each URI is especially recommended as 1889 a delimiting style for URI that contain whitespace. 1890 1891 The prefix "URL:" (with or without a trailing space) was recommended 1892 as a way to used to help distinguish a URL from other bracketed 1893 designators, although this is not common in practice. 1894 1895 For robustness, software that accepts user-typed URI should attempt 1896 to recognize and strip both delimiters and embedded whitespace. 1897 1898 For example, the text: 1899 1900 1901 1902 1903 1904 1905 1906 Berners-Lee, et. al. Standards Track [Page 34] 1907 1908 RFC 2396 URI Generic Syntax August 1998 1909 1910 1911 Yes, Jim, I found it under "", 1912 but you can probably pick it up from <. 1913 net/rfc/>. Note the warning in < 1914 ietf/uri/historical.html#WARNING>. 1915 1916 contains the URI references 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 Berners-Lee, et. al. Standards Track [Page 35] 1963 1964 RFC 2396 URI Generic Syntax August 1998 1965 1966 1967 F. Abbreviated URLs 1968 1969 The URL syntax was designed for unambiguous reference to network 1970 resources and extensibility via the URL scheme. However, as URL 1971 identification and usage have become commonplace, traditional media 1972 (television, radio, newspapers, billboards, etc.) have increasingly 1973 used abbreviated URL references. That is, a reference consisting of 1974 only the authority and path portions of the identified resource, such 1975 as 1976 1977 1978 1979 or simply the DNS hostname on its own. Such references are primarily 1980 intended for human interpretation rather than machine, with the 1981 assumption that context-based heuristics are sufficient to complete 1982 the URL (e.g., most hostnames beginning with "www" are likely to have 1983 a URL prefix of "http://"). Although there is no standard set of 1984 heuristics for disambiguating abbreviated URL references, many client 1985 implementations allow them to be entered by the user and 1986 heuristically resolved. It should be noted that such heuristics may 1987 change over time, particularly when new URL schemes are introduced. 1988 1989 Since an abbreviated URL has the same syntax as a relative URL path, 1990 abbreviated URL references cannot be used in contexts where relative 1991 URLs are expected. This limits the use of abbreviated URLs to places 1992 where there is no defined base URL, such as dialog boxes and off-line 1993 advertisements. 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 Berners-Lee, et. al. Standards Track [Page 36] 2019 2020 RFC 2396 URI Generic Syntax August 1998 2021 2022 2023 G. Summary of Non-editorial Changes 2024 2025 G.1. Additions 2026 2027 Section 4 (URI References) was added to stem the confusion regarding 2028 "what is a URI" and how to describe fragment identifiers given that 2029 they are not part of the URI, but are part of the URI syntax and 2030 parsing concerns. In addition, it provides a reference definition 2031 for use by other IETF specifications (HTML, HTTP, etc.) that have 2032 previously attempted to redefine the URI syntax in order to account 2033 for the presence of fragment identifiers in URI references. 2034 2035 Section 2.4 was rewritten to clarify a number of misinterpretations 2036 and to leave room for fully internationalized URI. 2037 2038 Appendix F on abbreviated URLs was added to describe the shortened 2039 references often seen on television and magazine advertisements and 2040 explain why they are not used in other contexts. 2041 2042 G.2. Modifications from both RFC 1738 and RFC 1808 2043 2044 Changed to URI syntax instead of just URL. 2045 2046 Confusion regarding the terms "character encoding", the URI 2047 "character set", and the escaping of characters with %<hex><hex> 2048 equivalents has (hopefully) been reduced. Many of the BNF rule names 2049 regarding the character sets have been changed to more accurately 2050 describe their purpose and to encompass all "characters" rather than 2051 just US-ASCII octets. Unless otherwise noted here, these 2052 modifications do not affect the URI syntax. 2053 2054 Both RFC 1738 and RFC 1808 refer to the "reserved" set of characters 2055 as if URI-interpreting software were limited to a single set of 2056 characters with a reserved purpose (i.e., as meaning something other 2057 than the data to which the characters correspond), and that this set 2058 was fixed by the URI scheme. However, this has not been true in 2059 practice; any character that is interpreted differently when it is 2060 escaped is, in effect, reserved. Furthermore, the interpreting 2061 engine on a HTTP server is often dependent on the resource, not just 2062 the URI scheme. The description of reserved characters has been 2063 changed accordingly. 2064 2065 The plus "+", dollar "$", and comma "," characters have been added to 2066 those in the "reserved" set, since they are treated as reserved 2067 within the query component. 2068 2069 2070 2071 2072 2073 2074 Berners-Lee, et. al. Standards Track [Page 37] 2075 2076 RFC 2396 URI Generic Syntax August 1998 2077 2078 2079 The tilde "~" character was added to those in the "unreserved" set, 2080 since it is extensively used on the Internet in spite of the 2081 difficulty to transcribe it with some keyboards. 2082 2083 The syntax for URI scheme has been changed to require that all 2084 schemes begin with an alpha character. 2085 2086 The "user:password" form in the previous BNF was changed to a 2087 "userinfo" token, and the possibility that it might be 2088 "user:password" made scheme specific. In particular, the use of 2089 passwords in the clear is not even suggested by the syntax. 2090 2091 The question-mark "?" character was removed from the set of allowed 2092 characters for the userinfo in the authority component, since testing 2093 showed that many applications treat it as reserved for separating the 2094 query component from the rest of the URI. 2095 2096 The semicolon ";" character was added to those stated as being 2097 reserved within the authority component, since several new schemes 2098 are using it as a separator within userinfo to indicate the type of 2099 user authentication. 2100 2101 RFC 1738 specified that the path was separated from the authority 2102 portion of a URI by a slash. RFC 1808 followed suit, but with a 2103 fudge of carrying around the separator as a "prefix" in order to 2104 describe the parsing algorithm. RFC 1630 never had this problem, 2105 since it considered the slash to be part of the path. In writing 2106 this specification, it was found to be impossible to accurately 2107 describe and retain the difference between the two URI 2108 <foo:/bar> and <foo:bar> 2109 without either considering the slash to be part of the path (as 2110 corresponds to actual practice) or creating a separate component just 2111 to hold that slash. We chose the former. 2112 2113 G.3. Modifications from RFC 1738 2114 2115 The definition of specific URL schemes and their scheme-specific 2116 syntax and semantics has been moved to separate documents. 2117 2118 The URL host was defined as a fully-qualified domain name. However, 2119 many URLs are used without fully-qualified domain names (in contexts 2120 for which the full qualification is not necessary), without any host 2121 (as in some file URLs), or with a host of "localhost". 2122 2123 The URL port is now *digit instead of 1*digit, since systems are 2124 expected to handle the case where the ":" separator between host and 2125 port is supplied without a port. 2126 2127 2128 2129 2130 Berners-Lee, et. al. Standards Track [Page 38] 2131 2132 RFC 2396 URI Generic Syntax August 1998 2133 2134 2135 The recommendations for delimiting URI in context (Appendix E) have 2136 been adjusted to reflect current practice. 2137 2138 G.4. Modifications from RFC 1808 2139 2140 RFC 1808 (Section 4) defined an empty URL reference (a reference 2141 containing nothing aside from the fragment identifier) as being a 2142 reference to the base URL. Unfortunately, that definition could be 2143 interpreted, upon selection of such a reference, as a new retrieval 2144 action on that resource. Since the normal intent of such references 2145 is for the user agent to change its view of the current document to 2146 the beginning of the specified fragment within that document, not to 2147 make an additional request of the resource, a description of how to 2148 correctly interpret an empty reference has been added in Section 4. 2149 2150 The description of the mythical Base header field has been replaced 2151 with a reference to the Content-Location header field defined by 2152 MHTML [RFC2110]. 2153 2154 RFC 1808 described various schemes as either having or not having the 2155 properties of the generic URI syntax. However, the only requirement 2156 is that the particular document containing the relative references 2157 have a base URI that abides by the generic URI syntax, regardless of 2158 the URI scheme, so the associated description has been updated to 2159 reflect that. 2160 2161 The BNF term <net_loc> has been replaced with <authority>, since the 2162 latter more accurately describes its use and purpose. Likewise, the 2163 authority is no longer restricted to the IP server syntax. 2164 2165 Extensive testing of current client applications demonstrated that 2166 the majority of deployed systems do not use the ";" character to 2167 indicate trailing parameter information, and that the presence of a 2168 semicolon in a path segment does not affect the relative parsing of 2169 that segment. Therefore, parameters have been removed as a separate 2170 component and may now appear in any path segment. Their influence 2171 has been removed from the algorithm for resolving a relative URI 2172 reference. The resolution examples in Appendix C have been modified 2173 to reflect this change. 2174 2175 Implementations are now allowed to work around misformed relative 2176 references that are prefixed by the same scheme as the base URI, but 2177 only for schemes known to use the <hier_part> syntax. 2178 2179 2180 2181 2182 2183 2184 2185 2186 Berners-Lee, et. al. Standards Track [Page 39] 2187 2188 RFC 2396 URI Generic Syntax August 1998 2189 2190 2191 H. Full Copyright Statement 2192 2193 Copyright (C) The Internet Society (1998). All Rights Reserved. 2194 2195 This document and translations of it may be copied and furnished to 2196 others, and derivative works that comment on or otherwise explain it 2197 or assist in its implementation may be prepared, copied, published 2198 and distributed, in whole or in part, without restriction of any 2199 kind, provided that the above copyright notice and this paragraph are 2200 included on all such copies and derivative works. However, this 2201 document itself may not be modified in any way, such as by removing 2202 the copyright notice or references to the Internet Society or other 2203 Internet organizations, except as needed for the purpose of 2204 developing Internet standards in which case the procedures for 2205 copyrights defined in the Internet Standards process must be 2206 followed, or as required to translate it into languages other than 2207 English. 2208 2209 The limited permissions granted above are perpetual and will not be 2210 revoked by the Internet Society or its successors or assigns. 2211 2212 This document and the information contained herein is provided on an 2213 "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING 2214 TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING 2215 BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION 2216 HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF 2217 MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Berners-Lee, et. al. Standards Track [Page 40] 2243 | https://fossies.org/linux/gcgi/doc/rfc2396.txt | CC-MAIN-2021-21 | en | refinedweb |
respect...
I.
Fine information, many thanks to the author. It is puzzling to me now, but in general, the usefulness and importance is overwhelming. Very much thanks again wise essays and best of luck!
Yes. It is usually an object in real world applications. But an index can always represent an object, because it denotes the position of the object in an object array.
In the most of the text/video tutorials,vertices are always( as far as I've noticed) represented by integers ( from 0 to any positive integer). Is this always the case?? I ponder if we're dealing with real life,then vertices would be an object,right??
First, let's watch a slow-paced easy-to-follow walkthrough of the breadth-first search and depth-first search algorithms.
good!
ukessays review
I would like to see it again.
EDIT: Correction! it's optimal; sorry!
This tutori...
Programming
you are right brother.
ProgrammingOneOnOne
Would be good if all the questions were from one source (codeforces/atcoder)
Motivation shorte...
Thanks bro
Solution to Xenia and Tree. This is the cleanest i could make it
#include <bits/stdc++.h>using namespace std;
#include <bits/stdc++.h>
using namespace std;
Lecture on Centroid Decomposition by Tanuj Khattar. His blog in quora is linked above so i thought i should link this recent video.
Implementation of centroid decomposition on a tree. ...
The.
Some more questions to practice:
Wobble Man
can somebody provide pythonic sollution of problem templeq spoj
The links to C++ and Python implementations don't work anymore, seems like the github repo is deleted. | https://www.commonlounge.com/community/919705a0927646f9b49853ba13793b36/c814dc6bfffa4629b83e08e5acf80675 | CC-MAIN-2021-21 | en | refinedweb |
I have the need to take a string argument and create an object of the class named in that string in Python. In Java, I would use
Class.forName().newInstance(). Is there an equivalent in Python?
Thanks for the responses. To answer those who want to know what I’m doing: I want to use a command line argument as the class name, and instantiate it. I’m actually programming in Jython and instantiating Java classes, hence the Java-ness of the question.
getattr() works great. Thanks much.
Reflection in python is a lot easier and far more flexible than it is in Java.
I recommend reading this tutorial
There’s no direct function (that I know of) which takes a fully qualified class name and returns the class, however you have all the pieces needed to build that, and you can connect them together.
One bit of advice though: don’t try to program in Java style when you’re in python.
If you can explain what is it that you’re trying to do, maybe we can help you find a more pythonic way of doing it.
Here’s a function that does what you want:
def get_class( kls ): parts = kls.split('.') module = ".".join(parts[:-1]) m = __import__( module ) for comp in parts[1:]: m = getattr(m, comp) return m
You can use the return value of this function as if it were the class itself.
Here’s a usage example:
>>> D = get_class("datetime.datetime") >>> D <type 'datetime.datetime'> >>> D.now() datetime.datetime(2009, 1, 17, 2, 15, 58, 883000) >>> a = D( 2010, 4, 22 ) >>> a datetime.datetime(2010, 4, 22, 0, 0) >>>
How does that work?
We’re using
__import__ to import the module that holds the class, which required that we first extract the module name from the fully qualified name. Then we import the module:
m = __import__( module )
In this case,
m will only refer to the top level module,
For example, if your class lives in
foo.baz module, then
m will be the module
foo
We can easily obtain a reference to
foo.baz using
getattr( m, 'baz' )
To get from the top level module to the class, have to recursively use
gettatr on the parts of the class name
Say for example, if you class name is
foo.baz.bar.Model then we do this:
m = __import__( "foo.baz.bar" ) #m is package foo m = getattr( m, "baz" ) #m is package baz m = getattr( m, "bar" ) #m is module bar m = getattr( m, "Model" ) #m is class Model
This is what’s happening in this loop:
for comp in parts[1:]: m = getattr(m, comp)
At the end of the loop,
m will be a reference to the class. This means that
m is actually the class itslef, you can do for instance:
a = m() #instantiate a new instance of the class b = m( arg1, arg2 ) # pass arguments to the constructor
Assuming the class is in your scope:
globals()['classname'](args, to, constructor)
Otherwise:
getattr(someModule, 'classname')(args, to, constructor)
Edit: Note, you can’t give a name like ‘foo.bar’ to getattr. You’ll need to split it by . and call getattr() on each piece left-to-right. This will handle that:
module, rest = 'foo.bar.baz'.split('.', 1) fooBar = reduce(lambda a, b: getattr(a, b), rest.split('.'), globals()[module]) someVar = fooBar(args, to, constructor)
def import_class_from_string(path): from importlib import import_module module_path, _, class_name = path.rpartition('.') mod = import_module(module_path) klass = getattr(mod, class_name) return klass
Usage
In [59]: raise import_class_from_string('google.appengine.runtime.apiproxy_errors.DeadlineExceededError')() --------------------------------------------------------------------------- DeadlineExceededError Traceback (most recent call last) <ipython-input-59-b4e59d809b2f> in <module>() ----> 1 raise import_class_from_string('google.appengine.runtime.apiproxy_errors.DeadlineExceededError')() DeadlineExceededError:
Yet another implementation.
def import_class(class_string): """Returns class object specified by a string. Args: class_string: The string representing a class. Raises: ValueError if module part of the class is not specified. """ module_name, _, class_name = class_string.rpartition('.') if module_name == '': raise ValueError('Class name must contain module part.') return getattr( __import__(module_name, globals(), locals(), [class_name], -1), class_name)
It seems you’re approaching this from the middle instead of the beginning. What are you really trying to do? Finding the class associated with a given string is a means to an end.
If you clarify your problem, which might require your own mental refactoring, a better solution may present itself.
For instance: Are you trying to load a saved object based on its type name and a set of parameters? Python spells this unpickling and you should look at the pickle module. And even though the unpickling process does exactly what you describe, you don’t have to worry about how it works internally:
>>> class A(object): ... def __init__(self, v): ... self.v = v ... def __reduce__(self): ... return (self.__class__, (self.v,)) >>> a = A("example") >>> import pickle >>> b = pickle.loads(pickle.dumps(a)) >>> a.v, b.v ('example', 'example') >>> a is b False
This is found in the python standard library, as unittest.TestLoader.loadTestsFromName. Unfortunately the method goes on to do additional test-related activities, but this first ha looks re-usable. I’ve edited it to remove the test-related functionality:
def get_object(name): """Retrieve a python object, given its dotted.name.""" parts = name.split('.') parts_copy = parts[:] while parts_copy: try: module = __import__('.'.join(parts_copy)) break except ImportError: del parts_copy[-1] if not parts_copy: raise parts = parts[1:] obj = module for part in parts: parent, obj = obj, getattr(obj, part) return obj
Tags: class, java, python | https://exceptionshub.com/does-python-have-an-equivalent-to-java-class-forname-2.html | CC-MAIN-2021-21 | en | refinedweb |
span of a matrixspace
given a matrix subspace of sparse matrices over a ring Q
M=MatrixSpace(Q,1000,1000, sparse=True)
I need a way to define the subspace N of M given a list of matrices of M
I tried to turn matrices into lists and use the VectorSpace category, however my matrices are sparse matrices, and this takes very long
def lista(A): #converts a matrix into a list return A.list() def coor(P): # for the determined base B, it gives the coordinate vector of a matrix P[0] A=base(P[1][1]) k=len(P[1][1]) V=VectorSpace(Q,2**(2*k), sparse=True) #I would like to work with MatrixSpace instead B=[] #this will be my basis for i in range(len(A)): a=A[i][0] B.append(V(lista(a))) W=W=V.span_of_basis(B) #I dont know if this function exists in MatrixSpace p=W.coordinate_vector(V(lista(P[0]))) return p
Could you please provide the code of what you tried, so that we see where the problem is ?
@tmonteil done!
Could you please provide the constructon of the list of matrices ? | https://ask.sagemath.org/question/42313/span-of-a-matrixspace/?comment=42315 | CC-MAIN-2021-17 | en | refinedweb |
Readonly references
- [x] Proposed
- [x] Prototype
- [x] Implementation: Started
- [ ] Specification: Not Started
Summary
The "readonly references" feature is actually a group of features that leverage the efficiency of passing variables by reference, but without exposing the data to modifications:
inparameters
ref readonlyreturns
readonlystructs
ref/
inextension methods
ref readonlylocals
refconditional expressions
Passing arguments as readonly references.
There is an existing proposal that touches this topic as a special case of readonly parameters without going into many details. Here I just want to acknowledge that the idea by itself is not very new.
Motivation
Prior to this feature C# did not have an efficient way of expressing a desire to pass struct variables into method calls for readonly purposes with no intention of modifying. Regular by-value argument passing implies copying, which adds unnecessary costs. That drives users to use by-ref argument passing and rely on comments/documentation to indicate that the data is not supposed to be mutated by the callee. It is not a good solution for many reasons.
The examples are numerous - vector/matrix math operators in graphics libraries like XNA are known to have ref operands purely because of performance considerations. There is code in Roslyn compiler itself that uses structs to avoid allocations and then passes them by reference to avoid copying costs.
Solution (
in parameters)
Similarly to the
out parameters,
in parameters are passed as managed references with additional guarantees from the callee.
Unlike
out parameters which must be assigned by the callee before any other use,
in parameters cannot be assigned by the callee at all.
As a result
in parameters allow for effectiveness of indirect argument passing without exposing arguments to mutations by the callee.
Declaring
in parameters
in parameters are declared by using
in keyword as a modifier in the parameter signature.
For all purposes the
in parameter is treated as a
readonly variable. Most of the restrictions on the use of
in parameters inside the method are the same as with
readonly fields.
Indeed an
inparameter may represent a
readonlyfield. Similarity of restrictions is not a coincidence.
For example fields of an
in parameter which has a struct type are all recursively classified as
readonly variables .
static Vector3 Add (in Vector3 v1, in Vector3 v2) { // not OK!! v1 = default(Vector3); // not OK!! v1.X = 0; // not OK!! foo(ref v1.X); // OK return new Vector3(v1.X + v2.X, v1.Y + v2.Y, v1.Z + v2.Z); }
inparameters are allowed anywhere where ordinary byval parameters are allowed. This includes indexers, operators (including conversions), delegates, lambdas, local functions.
(in int x) => x // lambda expression TValue this[in TKey index]; // indexer public static Vector3 operator +(in Vector3 x, in Vector3 y) => ... // operator
inis not allowed in combination with
outor with anything that
outdoes not combine with.
It is not permitted to overload on
ref/
out/
indifferences.
It is permitted to overload on ordinary byval and
indifferences.
For the purpose of OHI (Overloading, Hiding, Implementing),
inbehaves similarly to an
outparameter. All the same rules apply. For example the overriding method will have to match
inparameters with
inparameters of an identity-convertible type.
For the purpose of delegate/lambda/method group conversions,
inbehaves similarly to an
outparameter. Lambdas and applicable method group conversion candidates will have to match
inparameters of the target delegate with
inparameters of an identity-convertible type.
For the purpose of generic variance,
inparameters are nonvariant.
NOTE: There are no warnings on
inparameters that have reference or primitives types. It may be pointless in general, but in some cases user must/want to pass primitives as
in. Examples - overriding a generic method like
Method(in T param)when
Twas substituted to be
int, or when having methods like
Volatile.Read(in int location)
It is conceivable to have an analyzer that warns in cases of inefficient use of
inparameters, but the rules for such analysis would be too fuzzy to be a part of a language specification.
Use of
in at call sites. (
in arguments)
There are two ways to pass arguments to
in parameters.
in arguments can match
in parameters:
An argument with an
in modifier at the call site can match
in parameters.
int x = 1; void M1<T>(in T x) { // . . . } var x = M1(in x); // in argument to a method class D { public string this[in Guid index]; } D dictionary = . . . ; var y = dictionary[in Guid.Empty]; // in argument to an indexer
inargument must be a readable LValue(*). Example:
M1(in 42)is invalid
(*) The notion of LValue/RValue vary between languages.
Here, by LValue I mean an expression that represent a location that can be referred to directly. And RValue means an expression that yields a temporary result which does not persist on its own.
In particular it is valid to pass
readonlyfields,
inparameters or other formally
readonlyvariables as
inarguments. Example:
dictionary[in Guid.Empty]is legal.
Guid.Emptyis a static readonly field.
inargument must have type identity-convertible to the type of the parameter. Example:
M1<object>(in Guid.Empty)is invalid.
Guid.Emptyis not identity-convertible to
object
The motivation for the above rules is that
in arguments guarantee aliasing of the argument variable. The callee always receives a direct reference to the same location as represented by the argument.
- in rare situations when
inarguments must be stack-spilled due to
awaitexpressions used as operands of the same call, the behavior is the same as with
outand
refarguments - if the variable cannot be spilled in referentially-transparent manner, an error is reported.
Examples:
M1(in staticField, await SomethingAsync())is valid.
staticFieldis a static field which can be accessed more than once without observable side effects. Therefore both the order of side effects and aliasing requirements can be provided.
M1(in RefReturningMethod(), await SomethingAsync())will produce an error.
RefReturningMethod()is a
refreturning method. A method call may have observable side effects, therefore it must be evaluated before the
SomethingAsync()operand. However the result of the invocation is a reference that cannot be preserved across the
awaitsuspension point which make the direct reference requirement impossible.
NOTE: the stack spilling errors are considered to be implementation-specific limitations. Therefore they do not have effect on overload resolution or lambda inference.
Ordinary byval arguments can match
in parameters:
Regular arguments without modifiers can match
in parameters. In such case the arguments have the same relaxed constraints as an ordinary byval arguments would have.
The motivation for this scenario is that
in parameters in APIs may result in inconveniences for the user when arguments cannot be passed as a direct reference - ex: literals, computed or
await-ed results or arguments that happen to have more specific types.
All these cases have a trivial solution of storing the argument value in a temporary local of appropriate type and passing that local as an
in argument.
To reduce the need for such boilerplate code compiler can perform the same transformation, if needed, when
in modifier is not present at the call site.
In addition, in some cases, such as invocation of operators, or
in extension methods, there is no syntactical way to specify
in at all. That alone requires specifying the behavior of ordinary byval arguments when they match
in parameters.
In particular:
- it is valid to pass RValues. A reference to a temporary is passed in such case. Example:
Print("hello"); // not an error. void Print<T>(in T x) { //. . . }
- implicit conversions are allowed.
This is actually a special case of passing an RValue
A reference to a temporary holding converted value is passed in such case. Example:
Print<int>(Short.MaxValue) // not an error.
- in a case of a receiver of an
inextension method (as opposed to
refextension methods), RValues or implicit this-argument-conversions are allowed. A reference to a temporary holding converted value is passed in such case. Example:
public static IEnumerable<T> Concat<T>(in this (IEnumerable<T>, IEnumerable<T>) arg) => . . .; ("aa", "bb").Concat<char>() // not an error.
More information on
ref/
in extension methods is provided further in this document.
- argument spilling due to
awaitoperands could spill "by-value", if necessary. In scenarios where providing a direct reference to the argument is not possible due to intervening
awaita copy of the argument's value is spilled instead.
Example:
M1(RefReturningMethod(), await SomethingAsync()) // not an error.
Since the result of a side-effecting invocation is a reference that cannot be preserved across
await suspension, a temporary containing the actual value will be preserved instead (as it would in an ordinary byval parameter case).
Omitted optional arguments
It is permitted for an
in parameter to specify a default value. That makes the corresponding argument optional.
Omitting optional argument at the call site results in passing the default value via a temporary.
Print("hello"); // not an error, same as Print("hello", c: Color.Black); void Print(string s, in Color c = Color.Black) { // . . . }
Aliasing behavior in general
Just like
ref and
out variables,
in variables are references/aliases to existing locations.
While callee is not allowed to write into them, reading an
in parameter can observe different values as a side effect of other evaluations.
Example:
static Vector3 v = Vector3.UnitY; static void Main() { Test(v); } static void Test(in Vector3 v1) { Debug.Assert(v1 == Vector3.UnitY); // changes v1 deterministically (no races required) ChangeV(); Debug.Assert(v1 == Vector3.UnitX); } static void ChangeV() { v = Vector3.UnitX; }
in parameters and capturing of local variables.
For the purpose of lambda/async capturing
in parameters behave the same as
out and
ref parameters.
inparameters cannot be captured in a closure
inparameters are not allowed in iterator methods
inparameters are not allowed in async methods
Temporary variables.
Some uses of
in parameter passing may require indirect use of a temporary local variable:
inarguments are always passed as direct aliases when call-site uses
in. Temporary is never used in such case.
inarguments are not required to be direct aliases when call-site does not use
in. When argument is not an LValue, a temporary may be used.
inparameter may have default value. When corresponding argument is omitted at the call site, the default value are passed via a temporary.
inarguments may have implicit conversions, including those that do not preserve identity. A temporary is used in those cases.
- receivers of ordinary struct calls may not be writeable LValues (existing case!). A temporary is used in those cases.
The life time of the argument temporaries matches the closest encompassing scope of the call-site.
The formal life time of temporary variables is semantically significant in scenarios involving escape analysis of variables returned by reference.
Metadata representation of
in parameters.
When
System.Runtime.CompilerServices.IsReadOnlyAttribute is applied to a byref parameter, it means that the parameter is an
in parameter.
In addition, if the method is abstract or virtual, then the signature of such parameters (and only such parameters) must have
modreq[System.Runtime.InteropServices.InAttribute].
Motivation: this is done to ensure that in a case of method overriding/implementing the
in parameters match.
Same requirements apply to
Invoke methods in delegates.
Motivation: this is to ensure that existing compilers cannot simply ignore
readonly when creating or assigning delegates.
Returning by readonly reference.
Motivation
The motivation for this sub-feature is roughly symmetrical to the reasons for the
in parameters - avoiding copying, but on the returning side. Prior to this feature, a method or an indexer had two options: 1) return by reference and be exposed to possible mutations or 2) return by value which results in copying.
Solution (
ref readonly returns)
The feature allows a member to return variables by reference without exposing them to mutations.
Declaring
ref readonly returning members
A combination of modifiers
ref readonly on the return signature is used to to indicate that the member returns a readonly reference.
For all purposes a
ref readonly member is treated as a
readonly variable - similar to
readonly fields and
in parameters.
For example fields of
ref readonly member which has a struct type are all recursively classified as
readonly variables. - It is permitted to pass them as
in arguments, but not as
ref or
out arguments.
ref readonly Guid Method1() { } Method2(in Method1()); // valid. Can pass as `in` argument. Method3(ref Method1()); // not valid. Cannot pass as `ref` argument
ref readonlyreturns are allowed in the same places were
refreturns are allowed. This includes indexers, delegates, lambdas, local functions.
It is not permitted to overload on
ref/
ref readonly/ differences.
It is permitted to overload on ordinary byval and
ref readonlyreturn differences.
For the purpose of OHI (Overloading, Hiding, Implementing),
ref readonlyis similar but distinct from
ref. For example the a method that overrides
ref readonlyone, must itself be
ref readonlyand have identity-convertible type.
For the purpose of delegate/lambda/method group conversions,
ref readonlyis similar but distinct from
ref. Lambdas and applicable method group conversion candidates have to match
ref readonlyreturn of the target delegate with
ref readonlyreturn of the type that is identity-convertible.
For the purpose of generic variance,
ref readonlyreturns are nonvariant.
NOTE: There are no warnings on
ref readonlyreturns that have reference or primitives types. It may be pointless in general, but in some cases user must/want to pass primitives as
in. Examples - overriding a generic method like
ref readonly T Method()when
Twas substituted to be
int.
It is conceivable to have an analyzer that warns in cases of inefficient use of
ref readonlyreturns, but the rules for such analysis would be too fuzzy to be a part of a language specification.
Returning from
ref readonly members
Inside the method body the syntax is the same as with regular ref returns. The
readonly will be inferred from the containing method.
The motivation is that
return ref readonly <expression> is unnecessary long and only allows for mismatches on the
readonly part that would always result in errors.
The
ref is, however, required for consistency with other scenarios where something is passed via strict aliasing vs. by value.
Unlike the case with
inparameters,
ref readonlyreturns never return via a local copy. Considering that the copy would cease to exist immediately upon returning such practice would be pointless and dangerous. Therefore
ref readonlyreturns are always direct references.
Example:
struct ImmutableArray<T> { private readonly T[] array; public ref readonly T ItemRef(int i) { // returning a readonly reference to an array element return ref this.array[i]; } }
- An argument of
return refmust be an LValue (existing rule)
- An argument of
return refmust be "safe to return" (existing rule)
- In a
ref readonlymember an argument of
return refis not required to be writeable . For example such member can ref-return a readonly field or one of its
inparameters.
Safe to Return rules.
Normal safe to return rules for references will apply to readonly references as well.
Note that a
ref readonly can be obtained from a regular
ref local/parameter/return, but not the other way around. Otherwise the safety of
ref readonly returns is inferred the same way as for regular
ref returns.
Considering that RValues can be passed as
in parameter and returned as
ref readonly we need one more rule - RValues are not safe-to-return by reference.
Consider the situation when an RValue is passed to an
inparameter via a copy and then returned back in a form of a
ref readonly. In the context of the caller the result of such invocation is a reference to local data and as such is unsafe to return. Once RValues are not safe to return, the existing rule
#6already handles this case.
Example:
ref readonly Vector3 Test1() { // can pass an RValue as "in" (via a temp copy) // but the result is not safe to return // because the RValue argument was not safe to return by reference return ref Test2(default(Vector3)); } ref readonly Vector3 Test2(in Vector3 r) { // this is ok, r is returnable return ref r; }
Updated
safe to return rules:
- refs to variables on the heap are safe to return
- ref/in parameters are safe to return
inparameters naturally can only be returned as readonly.
- out parameters are safe to return (but must be definitely assigned, as is already the case today)
- instance struct fields are safe to return as long as the receiver is safe to return
- 'this' is not safe to return from struct members
- a ref, returned from another method is safe to return if all refs/outs passed to that method as formal parameters were safe to return. Specifically it is irrelevant if receiver is safe to return, regardless whether receiver is a struct, class or typed as a generic type parameter.
- RValues are not safe to return by reference. Specifically RValues are safe to pass as in parameters.
NOTE: There are additional rules regarding safety of returns that come into play when ref-like types and ref-reassignments are involved. The rules equally apply to
refand
ref readonlymembers and therefore are not mentioned here.
Aliasing behavior.
ref readonly members provide the same aliasing behavior as ordinary
ref members (except for being readonly).
Therefore for the purpose of capturing in lambdas, async, iterators, stack spilling etc... the same restrictions apply. - I.E. due to inability to capture the actual references and due to side-effecting nature of member evaluation such scenarios are disallowed.
It is permitted and required to make a copy when
ref readonlyreturn is a receiver of regular struct methods, which take
thisas an ordinary writeable reference. Historically in all cases where such invocations are applied to readonly variable a local copy is made.
Metadata representation.
When
System.Runtime.CompilerServices.IsReadOnlyAttribute is applied to the return of a byref returning method, it means that the method returns a readonly reference.
In addition, the result signature of such methods (and only those methods) must have
modreq[System.Runtime.CompilerServices.IsReadOnlyAttribute].
Motivation: this is to ensure that existing compilers cannot simply ignore
readonly when invoking methods with
ref readonly returns
Readonly structs
In short - a feature that makes
this parameter of all instance members of a struct, except for constructors, an
in parameter.
Motivation
Compiler must assume that any method call on a struct instance may modify the instance. Indeed a writeable reference is passed to the method as
this parameter and fully enables this behavior. To allow such invocations on
readonly variables, the invocations are applied to temp copies. That could be unintuitive and sometimes forces people to abandon
readonly for performance reasons.
Example:
After adding support for
in parameters and
ref readonly returns the problem of defensive copying will get worse since readonly variables will become more common.
Solution
Allow
readonly modifier on struct declarations which would result in
this being treated as
in parameter on all struct instance methods except for constructors.
static void Test(in Vector3 v1) { // no need to make a copy of v1 since Vector3 is a readonly struct System.Console.WriteLine(v1.ToString()); } readonly struct Vector3 { . . . public override string ToString() { // not OK!! `this` is an `in` parameter foo(ref this.X); // OK return $"X: {X}, Y: {Y}, Z: {Z}"; } }
Restrictions on members of readonly struct
- Instance fields of a readonly struct must be readonly.
Motivation: can only be written to externally, but not through members.
- Instance autoproperties of a readonly struct must be get-only.
Motivation: consequence of restriction on instance fields.
- Readonly struct may not declare field-like events.
Motivation: consequence of restriction on instance fields.
Metadata representation.
When
System.Runtime.CompilerServices.IsReadOnlyAttribute is applied to a value type, it means that the type is a
readonly struct.
In particular:
- The identity of the
IsReadOnlyAttributetype is unimportant. In fact it can be embedded by the compiler in the containing assembly if needed.
ref/
in extension methods
There is actually an existing proposal () and corresponding prototype PR ().
I just want to acknowledge that this idea is not entirely new. It is, however, relevant here since
ref readonly elegantly removes the most contentious issue about such methods - what to do with RValue receivers.
The general idea is allowing extension methods to take the
this parameter by reference, as long as the type is known to be a struct type.
public static void Extension(ref this Guid self) { // do something }
The reasons for writing such extension methods are primarily:
- Avoid copying when receiver is a large struct
- Allow mutating extension methods on structs
The reasons why we do not want to allow this on classes
- It would be of very limited purpose.
- It would break long standing invariant that a method call cannot turn non-
nullreceiver to become
nullafter invocation.
In fact, currently a non-
nullvariable cannot become
nullunless explicitly assigned or passed by
refor
out. That greatly aids readability or other forms of "can this be a null here" analysis. 3. It would be hard to reconcile with "evaluate once" semantics of null-conditional accesses. Example:
obj.stringField?.RefExtension(...)- need to capture a copy of
stringFieldto make the null check meaningful, but then assignments to
thisinside RefExtension would not be reflected back to the field.
An ability to declare extension methods on structs that take the first argument by reference was a long-standing request. One of the blocking consideration was "what happens if receiver is not an LValue?".
- There is a precedent that any extension method could also be called as a static method (sometimes it is the only way to resolve ambiguity). It would dictate that RValue receivers should be disallowed.
- On the other hand there is a practice of making invocation on a copy in similar situations when struct instance methods are involved.
The reason why the "implicit copying" exists is because the majority of struct methods do not actually modify the struct while not being able to indicate that. Therefore the most practical solution was to just make the invocation on a copy, but this practice is known for harming performance and causing bugs.
Now, with availability of
in parameters, it is possible for an extension to signal the intent. Therefore the conundrum can be resolved by requiring
ref extensions to be called with writeable receivers while
in extensions permit implicit copying if necessary.
// this can be called on either RValue or an LValue public static void Reader(in this Guid self) { // do something nonmutating. WriteLine(self == default(Guid)); } // this can be called only on an LValue public static void Mutator(ref this Guid self) { // can mutate self self = new Guid(); }
in extensions and generics.
The purpose of
ref extension methods is to mutate the receiver directly or by invoking mutating members. Therefore
ref this T extensions are allowed as long as
T is constrained to be a struct.
On the other hand
in extension methods exist specifically to reduce implicit copying. However any use of an
in T parameter will have to be done through an interface member. Since all interface members are considered mutating, any such use would require a copy. - Instead of reducing copying, the effect would be the opposite. Therefore
in this T is not allowed when
T is a generic type parameter regardless of constraints.
Valid kinds of extension methods (recap):
The following forms of
this declaration in an extension method are now allowed:
this T arg- regular byval extension. (existing case)
T can be any type, including reference types or type parameters. Instance will be the same variable after the call. Allows implicit conversions of this-argument-conversion kind. Can be called on RValues.
in this T self-
inextension. T must be an actual struct type. Instance will be the same variable after the call. Allows implicit conversions of this-argument-conversion kind. Can be called on RValues (may be invoked on a temp if needed).
ref this T self-
refextension. T must be a struct type or a generic type parameter constrained to be a struct. Instance may be written to by the invocation. Allows only identity conversions. Must be called on writeable LValue. (never invoked via a temp).
Readonly ref locals.
Motivation.
Once
ref readonly members were introduced, it was clear from the use that they need to be paired with appropriate kind of local. Evaluation of a member may produce or observe side effects, therefore if the result must be used more than once, it needs to be stored. Ordinary
ref locals do not help here since they cannot be assigned a
readonly reference.
Solution.
Allow declaring
ref readonly locals. This is a new kind of
ref locals that is not writeable. As a result
ref readonly locals can accept references to readonly variables without exposing these variables to writes.
Declaring and using
ref readonly locals.
The syntax of such locals uses
ref readonly modifiers at declaration site (in that specific order). Similarly to ordinary
ref locals,
ref readonly locals must be ref-initialized at declaration. Unlike regular
ref locals,
ref readonly locals can refer to
readonly LValues like
in parameters,
readonly fields,
ref readonly methods.
For all purposes a
ref readonly local is treated as a
readonly variable. Most of the restrictions on the use are the same as with
readonly fields or
in parameters.
For example fields of an
in parameter which has a struct type are all recursively classified as
readonly variables .
static readonly ref Vector3 M1() => . . . static readonly ref Vector3 M1_Trace() { // OK ref readonly var r1 = ref M1(); // Not valid. Need an LValue ref readonly Vector3 r2 = ref default(Vector3); // Not valid. r1 is readonly. Mutate(ref r1); // OK. Print(in r1); // OK. return ref r1; }
Restrictions on use of
ref readonly locals
Except for their
readonly nature,
ref readonly locals behave like ordinary
ref locals and are subject to exactly same restrictions.
For example restrictions related to capturing in closures, declaring in
async methods or the
safe-to-return analysis equally applies to
ref readonly locals.
Ternary
ref expressions. (aka "Conditional LValues")
Motivation
Use of
ref and
ref readonly locals exposed a need to ref-initialize such locals with one or another target variable based on a condition.
A typical workaround is to introduce a method like:
ref T Choice(bool condition, ref T consequence, ref T alternative) { if (condition) { return ref consequence; } else { return ref alternative; } }
Note that
Choice is not an exact replacement of a ternary since all arguments must be evaluated at the call site, which was leading to unintuitive behavior and bugs.
The following will not work as expected:
// will crash with NRE because 'arr[0]' will be executed unconditionally ref var r = ref Choice(arr != null, ref arr[0], ref otherArr[0]);
Solution
Allow special kind of conditional expression that evaluates to a reference to one of LValue argument based on a condition.
Using
ref ternary expression.
The syntax for the
ref flavor of a conditional expression is
<condition> ? ref <consequence> : ref <alternative>;
Just like with the ordinary conditional expression only
<consequence> or
<alternative> is evaluated depending on result of the boolean condition expression.
Unlike ordinary conditional expression,
ref conditional expression:
- requires that
<consequence>and
<alternative>are LValues.
refconditional expression itself is an LValue and
refconditional expression is writeable if both
<consequence>and
<alternative>are writeable LValues
Examples:
ref ternary is an LValue and as such it can be passed/assigned/returned by reference;
// pass by reference foo(ref (arr != null ? ref arr[0]: ref otherArr[0])); // return by reference return ref (arr != null ? ref arr[0]: ref otherArr[0]);
Being an LValue, it can also be assigned to.
// assign to (arr != null ? ref arr[0]: ref otherArr[0]) = 1; // error. readOnlyField is readonly and thus conditional expression is readonly (arr != null ? ref arr[0]: ref obj.readOnlyField) = 1;
Can be used as a receiver of a method call and skip copying if necessary.
// no copies (arr != null ? ref arr[0]: ref otherArr[0]).StructMethod(); // invoked on a copy. // The receiver is `readonly` because readOnlyField is readonly. (arr != null ? ref arr[0]: ref obj.readOnlyField).StructMethod(); // no copies. `ReadonlyStructMethod` is a method on a `readonly` struct // and can be invoked directly on a readonly receiver (arr != null ? ref arr[0]: ref obj.readOnlyField).ReadonlyStructMethod();
ref ternary can be used in a regular (not ref) context as well.
// only an example // a regular ternary could work here just the same int x = (arr != null ? ref arr[0]: ref otherArr[0]);
Drawbacks
I can see two major arguments against enhanced support for references and readonly references:
- The problems that are solved here are very old. Why suddenly solve them now, especially since it would not help existing code?
As we find C# and .Net used in new domains, some problems become more prominent.
As examples of environments that are more critical than average about computation overheads, I can list
- cloud/datacenter scenarios where computation is billed for and responsiveness is a competitive advantage.
- Games/VR/AR with soft-realtime requirements on latencies
This feature does not sacrifice any of the existing strengths such as type-safety, while allowing to lower overheads in some common scenarios.
- Can we reasonably guarantee that the callee will play by the rules when it opts into
readonlycontracts?
We have similar trust when using
out. Incorrect implementation of
out can cause unspecified behavior, but in reality it rarely happens.
Making the formal verification rules familiar with
ref readonly would further mitigate the trust issue.
Alternatives
The main competing design is really "do nothing".
Unresolved questions
Design meetings | https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/proposals/csharp-7.2/readonly-ref | CC-MAIN-2021-17 | en | refinedweb |
Transcript: unbelievable journey FESTIVAL IN UK The show is held in the grounds of the Royal Hospital Chelsea, home of the iconic Chelsea Pensioners who are all retired soldiers of the British Army. Some 300 veterans live in the retirement and nursing home on site. RHS Chelsea Flower Show RHS Chelsea Flower Show Important Data Important Data London Fashion Week Festival London Fashion Week takes place twice a year in February and September, showcasing over 250 designers to a global audience of influential media and retailers. It is estimated that orders of over £100m are placed during LFW each season.. London Fashion Week Festival Almost as soon as the temperature drops and the clocks go back, London is transformed into a wintery wonderland. Ice rinks appear in squares, on rooftops and outside London icons while the city glitters with Christmas lights and creatively decorated trees. Christmas in London Christmas Day The streets of west London come alive every August bank holiday weekend with a huge Caribbean party at Europe's biggest street festival Notting Hill Carnival Notting Hill Carnival. Totally Thames Totally Thames
Transcript: Festive The Regulators Exciting, fun packed resort event for children and families Three-day event Interactive experiences Christmas themed events Our Vision Project Goal Deliverables Success Criteria Work Breakdown Structure Today's Topics Deliverables Venue at Four Seasons Resort at Walt Disney World Orlando • Food & beverage accommodations for the event • Activities and entertainment for families: arts and crafts, outdoor games and character photo ops • Staff scheduling and placement • Marketing and promotion: event flyers and schedules, communication between departments including front desk and concierge • Inventory order and management • Vendor contracts • Contingency plans for inclement weather and other obstacles Success Criteria Family participation and revenue will be our measure of success. Measure participation: 40 participants per day Measure Revenue: In dollars We plan to exceed last year’s numbers in all areas. Work Breakdown Structure Cost Constrained above all because it is a limited amount, do the most with that we have. Time Crucial because it is a three-day event. It cannot be constrained. Scope Must be accepted because we are a service above all else, needing to care for the needs of the guests.: Templates The key to generic programs a simple code! Output? Answer Namespaces Namespace is a feature added in C++ and not present in C. A namespace is a declarative region that provides a scope to the identifiers (names of the types, function, variables etc) inside it. Multiple namespace blocks with the same name are allowed. Templates Templates Templates are powerful features of C++ which allows you to write generic programs. In simple terms, you can create a single function or a class to work with different data types using templates. Advantages: Readability Flexibility Re-usability Function Template FUnction Templates A single function template can work with different data types at once but, a single normal function can only work with one set of data types. Normally, if you need to perform identical operations on two or more types of data, you use function overloading. However, a better approach would be to use function templates because you can perform the same task writing less and maintainable code. Example. Class Template Class Templates Example virtual functions virtual functions Virtual functions ensure that the correct function is called for an object, regardless of the type of reference (or pointer) used for function call. They are mainly used to achieve Run-time polymorphism. The prototype of virtual functions should be same in base as well as derived class. They are always defined in base class and overridden in derived class. It is not mandatory for derived class to override Example Pass by reference Pass-by-reference means to pass the reference of an argument in the calling function to the corresponding formal parameter of the called function. The called function can modify the value of the argument by using its reference passed in. Does not copy the arguments. The formal parameter is an alias for the argument. References cannot be NULL.
Transcript: "If there is no joyous way to give a festive gift, give love away." A brief presentation on some intense yet festive words. Happy Merry Jubilant Mirthful Festal Jocund Ebeneezer Scrooge was finally happy when he realized the true meaning of the holidays. Clark was in a festal mood when his Christmas lights finally turned on. Works Cited Quote: Photos: Jack was merry when he discovered Christmastown for the very first time. Rudolph was mirthful upon learning he would be allowed to fly with Santa's sleigh. Buddy the Elf was jocund when he discovered Santa was coming to the toy store. By Julianna Desiato Olaf the snowman was jubilant when he found a single flower growing amidst the powdery snow. | https://prezi.com/l/festive-powerpoint-templates/ | CC-MAIN-2021-17 | en | refinedweb |
ASPxRatingControl Class
Represents a rating control.
Namespace: DevExpress.Web
Assembly: DevExpress.Web.v19.2.dll
Declaration
public class ASPxRatingControl : ASPxWebControl, IRequiresLoadPostDataControl
Public Class ASPxRatingControl Inherits ASPxWebControl Implements IRequiresLoadPostDataControl
Remarks
The ASPxRatingControl class implements the functionality of a rating control, which enables end-users to rate by selecting a number of items (stars).
The appearance of the control can be customized using the standard control properties.
Items within the control can be customized by using the following item characteristics: item images (the ASPxRatingControl.ImageMapUrl property along with the ASPxRatingControl.ItemWidth and ASPxRatingControl.ItemHeight properties), titles of items (the ASPxRatingControl.Titles property), if it is a fractional number, the way in which items are filled to represent the control's value (the ASPxRatingControl.FillPrecision property).
The control's value is specified via the ASPxRatingControl.Value property.
NOTE
The client-side equivalent of this rating control is represented by the ASPxClientRatingControl object. The control's client-side API is enabled if the ASPxRatingControl.ClientInstanceName property is defined or any client event is handled. Available client events can be accessed via the ASPxRatingControl.ClientSideEvents property. | https://docs.devexpress.com/AspNet/DevExpress.Web.ASPxRatingControl?v=19.2 | CC-MAIN-2021-17 | en | refinedweb |
Problem Statement
For.
Sample Test Cases
Input: n = 4, edges = [[1, 0], [1, 2], [1, 3]] 0 | 1 / \ 2 3 Output: [1] Input: n = 6, edges = [[0, 3], [1, 3], [2, 3], [4, 3], [5, 4]] 0 1 2 \ | / 3 | 4 | 5 Output: [3, 4]
Problem Solution
Gather all leaves in a queue first. At each iteration, destroy all leaves, destroy the edges that they form in the graph, then add their neighbors to the queue, iff the neighbor is a leaf.
- We initialize a queue of leaf nodes after creating the graph.
- How do we know which are leaf nodes? Keep track of in-degree using an array.
- Now, for the current queue of leaf nodes, we shall remove them from graph, then remove the edges they’re connected to, then update the in-degree of their neighbors.
- After updating the neighbor’s in-degree, see if we can add it into the queue of leaf nodes
- Repeat till we’ve 1 or 2 leaf nodes
Complexity Analysis
Time Complexity: O(n) Because each node will be processed exactly once. They go into the queue of leaves once, and they exit (at most) once.
Space Complexity: Graph creation takes O(E) space ie no of edges to be included.
Code Implementation
#include <bits/stdc++.h> using namespace std; // This class represents a undirected graph using adjacency list class Graph { public: int V; // No. of vertices list<int> *adj; vector<int> degree; Graph(int V); void addEdge(int v, int w); // function to get roots which give minimum height vector<int> rootForMinimumHeight(); }; Graph::Graph(int V) { this->V = V; adj = new list<int>[V]; for (int i = 0; i < V; i++) degree.push_back(0); } // addEdge method adds vertex to adjacency list and increases // degree by 1 void Graph::addEdge(int v, int w) { adj[v].push_back(w); adj[w].push_back(v); degree[v]++; degree[w]++; } // Method to return roots which gives minimum height to tree vector<int> Graph::rootForMinimumHeight() { queue<int> q; // first enqueue all leaf nodes in queue for (int i = 0; i < V; i++) if (degree[i] == 1) q.push(i); // loop untill total vertex remains less than 2 while (V > 2) { for (int i = 0; i < q.size(); i++) { int t = q.front(); q.pop(); V--; // for each neighbour, decrease its degree and // if it become leaf, insert into queue list<int>::iterator j; for ( j = adj[t].begin(); j != adj[t].end(); j++) { degree[*j]--; if (degree[*j] == 1) q.push(*j); } } } // copying the result from queue to result vector vector<int> res; while (!q.empty()) { res.push_back(q.front()); q.pop(); } return res; } // Driver code to test above methods int main() { Graph g(6); g.addEdge(0, 3); g.addEdge(1, 3); g.addEdge(2, 3); g.addEdge(4, 3); g.addEdge(5, 4); vector<int> res = g.rootForMinimumHeight(); for (int i = 0; i < res.size(); i++) cout << res[i] << " "; cout << endl; } | https://prepfortech.in/interview-topics/trees/minimum-height-trees | CC-MAIN-2021-17 | en | refinedweb |
The Data Science Lab
After previously detailing how to examine data files and how to identify and deal with missing data, Dr. James McCaffrey of Microsoft Research now uses a full code sample and step-by-step directions to deal with outlier data.
This article explains how to programmatically identify and deal with outlier data (it's a follow-up to "Data Prep for Machine Learning: Missing Data"). Suppose you have a data file of loan applications. Examples of outlier data include a person's age of 99 (either a very old applicant or possibly a placeholder value that was never changed) and a person's country of "Cannada" (probably a transcription error).
In situations where the source data file is small, about 500 lines or less, you can usually find and deal with outlier data manually. But in almost all realistic scenarios with large datasets you must handle outlier data programmatically.
Preparing data for use in a machine learning (ML) system is time consuming, tedious, and error prone. A reasonable rule of thumb is that data preparation requires at least 80 percent of the total time needed to create an ML system. There are three main phases of data preparation: cleaning, normalizing and encoding, and splitting. Each of the three phases has several steps. Dealing with outlier data is part of the data cleaning phase.
A good way to understand outlier data and see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. The demo uses a small text file where each line represents an employee. The demo analyzes a representative numeric column (age) and then analyzes a representative categorical column (region).
The demo is a Python language program that examines and performs a series of transformations on the original data. The first five lines of the demo source data are:
M 32 eastern 59200.00 moderate
F 43 central 38400.00 moderate
M 35 central 30800.00 liberal
F 36 ? 47800.00 moderate
M 26 western 53800.00 conservative
. . .
There are five tab-delimited fields: sex, age, region, annual income, and political leaning. The eventual goal of the ML system that will use the data is to create a neural network that predicts political leaning from other fields. The source data file has been standardized so that all lines have the same number of fields/columns.
The demo begins by displaying the source data file. Next the demo scans through the age column and computes a z-score value for each age. Data lines with outlier values where the z-score is less than -2.0 or greater than +2.0 are displayed. These are line [7] where age = 61 and z = +2.26, and line [9] where age = 3 and z = -2.47.
When a line with an outlier value has been identified, you can do one of three things. You can ignore the data line, you can correct the data line, or you can delete the line. The demo leaves line [7] with age = 61 alone. The implication is that the person is just significantly older than the other people in the dataset. The demo updates line [9] with age = 3 by changing the age to 33. The implication is that the value was incorrectly entered and the correct age value of 33 was located in some way.
Next, the demo scans through the region column and computes a frequency count for each value. There are four instances of "eastern", four instances of "central", and three instances of "western." There is one instance of "?" in line [4] and one instance of "centrel" in line [6]. The demo deletes line [4]. The implication is that "?" was entered as a placeholder value to mean "unknown" and that the correct region value could not be determined. The demo updates line [6] by replacing "centrel" with "central." The implication is that this was a typo.
To summarize, outliers are unusual values. For numeric variables, one way to find outliers is to compute z-scores. For categorical variables, one way to find outliers is to compute frequency counts.
This article assumes you have intermediate or better skill with a C-family programming language. The demo program is coded using Python but you shouldn't have too much trouble refactoring the demo code to another language if you wish. The complete source code for the demo program is presented in this article. The source code is also available in the accompanying file download.
The Data Preparation Pipeline
Although data preparation is different for every source dataset, in general the data preparation pipeline for most ML systems is usually something similar to the steps shown in Figure 2.
Data preparation for ML is deceptive because the process is conceptually easy. However, there are many steps, and each step is much trickier than you might expect if you're new to ML. This article explains the fifth and sixth steps in Figure 2. Future Data Science Lab articles will explain the other steps. The articles.
The Demo Program
The structure of the demo program, with a few minor edits to save space, is shown in Listing 1. I indent my Python programs using two spaces, rather than the more common four spaces or a tab character, as a matter of personal preference. The program has six worker functions plus a main() function to control program flow. The purpose of worker functions line_count(), show_file(), delete_lines(), show_numeric_outliers(), show_cat_outliers(), and update_line() should be clear from their names.
Listing 1: Outlier Data Detection Demo Program
# file_outliers.py
# Python 3.7.6 NumPy 1.18.1
import numpy as np
def line_count(fn): . . .
def show_file(fn, start, end, indices=False,
strip_nl=False): . . .
def delete_lines(src, dest, omit_lines): . . .
def show_numeric_outliers(fn, col, z_max,
delim): . . .
def show_cat_outliers(fn, col, ct_min,
delim): . . .
def update_line(src, dest, line_num, col_num,
new_val, delim): . . .
def main():
# 1. display source file
print("\nSource file: ")
fn = ".\\people_no_missing.txt"
show_file(fn, 1, 999, indices=True, strip_nl=True)
#")
src = ".\\people_no_missing_update2.txt"
dest = ".\\people_clean.txt"
delete_lines(src, dest, [4])
print("\nCleaned data: ")
fn = ".\\people_clean.txt"
show_file(fn, 1, 999, indices=True, strip_nl=True)
if __name__ == "__main__":
main()
Program execution begins with:
def main():
# 1. display source file
print("\nSource file: ")
fn = ".\\people_no_missing.txt"
show_file(fn, 1, 999, indices=True, strip_nl=True). . .
The first step when working with any machine learning data file is to do a preliminary investigation. The source data is named people_no_missing.txt ("no missing columns") and has only 13 lines to keep the main ideas of dealing with outlier data as clear as possible. The number of lines in the file could have been determined by a call to the line_count() function. The entire data file is examined by a call to show_file() with arguments start=1 and end=999. In most cases you'll examine just specified lines of the shell so that there aren't blank lines between data lines in the display.
The demo continues with:
#")
. . .
The call to function show_numeric_outliers() means, "Scan the age values in 1-based column number [2], and display lines where the z-score is less than or equal to -2.0 or greater than or equal to +2.0."
The call to function update_line() means, "Take file people_no_missing.txt, change the age value in 1-based column [2] on 1-based line number [9] to "33" and save the result as people_no_missing_update1.txt."
Function update_line() uses a functional programming paradigm and accepts a source file and writes the results to a destination file. It's possible to implement update_line().
The demo program examines only the age column. In a non-demo scenario you should examine all numeric columns. The demo continues by examining the region column of categorical values and updating line [6] from "centrel" to "central":
#")
. . .
The call to function show_cat_outliers() means, "Scan the region values in 1-based column [3], and display lines where a region value occurs 1 time or less." Note that "one time or less" usually means exactly one time because there can't be any frequency counts of zero unless an external list of possible values was supplied to the show_cat_outliers() function.
The demo program concludes by deleting line [4] which has a region value of "?" and then displaying the final people_clean.txt result file:
. . .
src = ".\\people_no_missing_update2.txt"
dest = ".\\people_clean.txt"
delete_lines(src, dest, [4])
print("\nCleaned data: ")
fn = ".\\people_clean.txt"
show_file(fn, 1, 999, indices=True, strip_nl=True)
if __name__ == "__main__":
main()
Notice that updating then deleting is not the same as deleting then updating. If you update, line numbering does not change but if you did a delete line [4] followed by update line [6], after the delete operation line numbering changes and so you'd update the wrong line.
Exploring the Data
When working with data for an ML system you always need to determine how many lines there are in the data, how many columns/fields there are on each line, and what type of delimiter is used. The demo defines a function line_count() as:
def line_count(fn):
ct = 0
fin = open(fn, "r")
for line in fin:
ct += 1
fin.close()
return ct
The file is opened for reading and then traversed using a Python for-in idiom. Each line of the file, including the terminating newline character, is stored into variable named "line" but that variable isn't used. There are many alternative approaches.
The definition of function show_file() is presented in Listing 2. As is the case with all data preparation functions, there are many possible implementations.
Listing 2: Displaying Specified Lines of a File
def show_file(fn, start, end, indices=False,
strip_nl=False):
fin = open(fn, "r")
ln = 1 # advance to start line
while ln < start:
fin.readline()
ln += 1
while ln <= end: # show specified lines
line = fin.readline()
if line == "": break # EOF
if strip_nl == True:
line = line.strip()
if indices == True:
print("[%3d] " % ln, end="")
print(line)
ln += 1
fin.close()
Because the while-loop terminates with a break statement, if you specify an end parameter value that's greater than the number of lines in the source file, such as 999 for the 13-line demo data, the display will end after the last line has been printed, which is usually what you want.
Finding Numeric Outlier Data
The demo program definition of function show_numeric_outlers() is presented in Listing 3. There are several ways to identify outlier values in a numeric column. The approach I prefer is to compute a z-score value for each raw value and then examine lines where the z-score is greater than or less than a plus-or-minus problem-dependent threshold value, typically about 2.0, 3.0, or 4.0. The z-score for a numeric value x in a column of data is computed as z = (x - mean) / sd where mean is the average of all values in the column and sd is the standard deviation of all values in the column.
Suppose you have only n = 3 lines of data and one of the columns is age with values (28, 34, 46). The mean of the column is (28 + 34 + 46) / 3 = 36.0. The standard deviation is the square root of the sum of the squared difference of each value and the mean, divided by the number of values, and so is:
sd = sqrt( [(28 - 36.0)^2 + (34 - 36.0)^2 + (46 - 36.0)^2] / 3 )
= sqrt( [(-8.0)^2 + (-2.0)^s + (10.0)^2 ] / 3 )
= sqrt( [64.0 + 4.0 + 100.0] / 3 )
= sqrt( 168.0 / 3 )
= sqrt(56.0)
= 7.48
Therefore the z-score values of the three ages are:
x = 28, z = (28 - 36.0) / 7.48 = -1.07
x = 34, z = (34 - 36.0) / 7.48 = -0.27
x = 46, z = (46 - 36.0) / 7.48 = +1.34
A z-score value that is positive corresponds to an x value that is greater than the mean value, and a z-score that is negative corresponds to an x value that is less than the mean value. For data that is normally distributed, meaning follows the Gaussian, bell-shaped distribution, almost all z-score values will be between -4.0 and +4.0 so values outside that range are usually outliers.
Listing 3: Identifying Numeric Outliers Using Z-Score Values
def show_numeric_outliers(fn, col, z_max, delim):
# need 3 passes so read into memory
ct = line_count(fn)
data = np.empty(ct, dtype=np.object)
fin = open(fn, "r")
i = 0
for line in fin:
data[i] = line
i += 1
fin.close()
# compute mean, sd of a 1-based col
sum = 0.0
for i in range(len(data)):
line = data[i]
tokens = line.split(delim)
sum += np.float32(tokens[col-1])
mean = sum / len(data)
ss = 0.0
for i in range(len(data)):
line = data[i]
tokens = line.split(delim)
x = np.float32(tokens[col-1])
ss += (x - mean) * (x - mean)
sd = np.sqrt(ss / len(data)) # population sd
print("mean = %0.2f" % mean)
print("sd = %0.2f" % sd)
# display outliers
for i in range(len(data)):
line = data[i]
tokens = line.split(delim)
x = np.float32(tokens[col-1])
z = (x - mean) /sd
if z <= -z_max or z >= z_max:
print("[%3d] x = %0.4f z = %0.2f" % \
((i+1), x, z))
The z-score calculation uses the population standard deviation (dividing sum of squares by n) rather than the sample standard deviation (dividing by n-1). Both versions of standard deviation will work for identifying outliers. A sample standard deviation calculation is usually performed in situations where you want to estimate the standard deviation of a population from which a sample was selected.
No real-life data is exactly Gaussian-normal but computing and examining z-scores is a good way to start when identifying numeric outlier values. Another approach is to bin data and then look for bins that have low counts. For example, you could bin age values into [1, 10], [11, 20], [21, 30] and so on. If you did his for the demo data you'd see that bin [1, 10] had a count of 1 due to the age of 3 in line [12] of the raw data. You can bin raw data values or z-score values.
There are many possible approaches for computing and examining z-scores. The show_numeric_outliers() demo function uses this approach:
read all data into a NumPy array of lines
loop each line in memory
accumulate sum of numeric value in target col
compute mean = sum / n
loop each line in memory
accumulate sum squared deviations from mean
compute sd = ssd / n
loop each line in memory
compute z using mean, sd
if z < -threshold or z > +threshold
display line
Because the data file has to be traversed three times to compute the mean, then the sd, and then each z-score, it makes sense to initially read the entire source file into a NumPy array of string-objects:
ct = line_count(fn)
data = np.empty(ct, dtype=np.object)
fin = open(fn, "r")
i = 0
for line in fin:
data[i] = line
i += 1
fin.close()
One of the many minor details when doing programmatic data preparation with Python is that a NumPy array of strings has to be specified as dtype=np.object rather than dtype=np.str as you would expect.
The demo implementation of show_numeric_outliers() computes the mean and standard deviation of the target numeric column. An alternative is to read the target column into memory using the NumPy loadtxt() function with the usecols parameter and then get the mean and standard deviation using the built-in NumPy mean() and std() functions.
Finding Categorical Outlier Data
The demo program definition of function show_cat_outlers() is presented in Listing 4. There are several ways to identify outlier values in a column of categorical values. The approach I prefer is to compute the frequency counts of each value in the target column and then look for rare values, typically those that occur once or twice in the target column.
Listing 4: Finding Categorical Outliers in a Column
def show_cat_outliers(fn, col, ct_min, delim):
# need 2 passes so read into memory
ct = line_count(fn)
data = np.empty(ct, dtype=np.object)
fin = open(fn, "r")
i = 0
for line in fin:
data[i] = line
i += 1
fin.close()
# construct dictionary for column
d = dict()
for i in range(len(data)):
line = data[i]
tokens = line.split(delim)
sv = tokens[col-1] # string value
if sv not in d:
d[sv] = 1
else:
d[sv] += 1
print("\nValues \t Counts")
for (sv, ct) in d.items():
print("%s \t %d" % (sv, ct))
print("\nRare values:")
for i in range(len(data)):
line = data[i]
tokens = line.split(delim)
sv = tokens[col-1] # string value
ct = d[sv] # get count
if ct <= ct_min:
print("[%3d] cat value = %s \
count = %d" % ((i+1), sv, ct))
The idea is to create a Dictionary object where the key is a categorical value like "western" and the value is a frequency count like 3. Then the specified column is traversed. If the current categorical value has not been seen before, that categorical value is added to the Dictionary with a count of 1. If the current categorical value is already in the Dictionary, the associated count is incremented.
After the Dictionary object for the specified column has been created, the collection can be traversed to show the counts of all categorical values in the column so that rare values can be identified. Additionally, the file can be traversed and lines with rare values can be displayed so that you can decided whether to delete the line, update the line, or do nothing to the line.
Updating a Line
When a line of data that has an outlier numeric or categorical value has been identified, if the outlier value is a simple error such as a typo, one possible action is to update the line. The demo program implements a function update_line() shown in Listing 5.
In pseudo-code, the approach used by the demo to update is:
loop until reach target line of src
read line, write to dest
end-loop
read line-to-update
split line into tokens
replace target token with new value
reconstruct line, write to dest
loop remaining lines
read from src, write to dest
end-loop
Listing 5: Updating a Line
def update_line(src, dest, line_num, col_num,
new_val, delim):
# line_num and col_num are 1-based
fin = open(src, "r")
fout = open(dest, "w")
ln = 1
while ln < line_num:
line = fin.readline() # has embedded nl
fout.write(line)
ln += 1
line_to_update = fin.readline() # trailing nl
tokens = line_to_update.split(delim)
tokens[col_num-1] = new_val # 0-based
s = ""
for j in range(len(tokens)): # including nl
if j == len(tokens)-1: # at the newline
s += tokens[j]
else:
s += tokens[j] + delim # interior column
fout.write(s) # must have embedded nl
for line in fin:
fout.write(line) # remaining lines
fout.close(); fin.close()
return
As is the case with most data preparation functions, function update_line() is not conceptually difficult but there are many details to deal with, such as correctly handling the embedded trailing newline character in each line of data.
Deleting a Line
When an outlier value on a line is determined to be an uncorrectable error, in most cases the best option is to delete the line. The demo program defines a delete_lines() function that removes one or more 1-based lines from a source file and writes the result to a destination file: omit_lines parameter can be a Python list such as [1, 3, 5] or a NumPy array of integer values such as np.array([1, 3, 5], dtype=np.int64). In situations where you want to delete a single line from the source file, you should specify the line using list notation, such as [3] rather than a simple scalar such as 3.
Wrapping Up
When preparing data for use in a machine learning system, no single part of the preparation pipeline is conceptually difficult but there are many steps and each step is time consuming and error prone. The net effect is that data preparation almost always takes much longer than expected.
After the steps explained in this article have been performed, the data is clean in the sense that there are no lines with missing columns so that all lines have a standardized format, and there are no lines with erroneous outlier values. The next step is to normalize numeric values so that they're in roughly the same range, to prevent variables with large magnitudes, such as annual income, don't overwhelm variables with small magnitude, such as age. After normalizing, the data must be encoded to convert categorical values such as "central" into numeric vectors like (0, 1, | https://visualstudiomagazine.com/articles/2020/07/14/ml-data-prep-outliers.aspx | CC-MAIN-2021-17 | en | refinedweb |
New Features of .NET 2008
.NET 2008 New Features
LINQ Support :
This is the composition of many standard query operators.They are significantly used in compile time checking and debug step by step through queries.
Expression blend support :
This XAML generator for Silver-light applications, this is a embedded plug in for .NET 2008.
Windows Presentation Foundation :
This is the extensive graphic functionality.It contains WPF foundation library templates.By this one can develop 2D and 3D applications.
Multi targeting supports :
Earlier the .NET 1.1 applications are not able to work in Visual studio 2005 applications.Now we can able to create ,develop and debug .NET 2.0 and 3.0 and 3.5 applications. we can also deploy .NET 2.0 applications in the machines which contains only .NET 2.0 not .NET 3.x.
AJAX :
Previously developer has to install AJAX control library separately that does not come from VS, but now if we install Visual Studio 2008, we can built-in AJAX control library. This Ajax Library contains plenty of rich AJAX controls like Menu, TreeView, webparts and also these components support JSON and VS 2008 contains in built ASP.NET AJAX Control Extenders.
Javascript debugging :
Since starting of web development all the developers got frustration with solving javascript errors. Debugging the error in javascript is very difficult. Now Visual Studio 2008 makes it is simpler with javascript debugging. we can set break points and run the javaScript step by step and we can watch the local variables when you were debugging the javascript and solution explorer provides javascript document navigation support.
Nested Master Page :
Already Visual Studio 2005 supports nested master pages concept with .NET 2.0, but the problem with this Visual Studio 2005 that pages based on nested masters cannot be edited using WYSIWYG web designer. But now in VS 2008 we can even edit the nested master pages.
LINQ Intellisense and Javascript Intellisense support for silverlight applications :
It contains intellisense support .NET 2008 for javascript.
Organize Imports or Usings
This removes unnecessary namespaces .
we can select all the namespaces and right click on it, then we can get context menu with Organize imports options like "Remove Unused Usings", "Sort Usings", "Remove and Sort".
Refactoring support for new .NET 3.x features like Anonymous types, Extension Methods, Lambda Expressions.
Intellisense Filtering :
Earlier in VS2005 when we were typing with intellisense box, all the items were displayed, but now in this .NET 2008 when we press k, only the items starting in the letter k is displayed.
Intellisense box display position :
Earlier in some cases when we were typing the an object name and pressing . (period) then intellisense was being displayed in the position of the object which.
VS2008 split view :
VS 2005 has a feature show both design and source code in single window. but both the windows tiles horizontally. In VS 2008 we can configure this split view feature to vertically, this allows developers to use maximum screen on laptops and wide-screen monitors.
HTML javascript warnings, not as Errors
VS 2005 mixes HTML errors and C# and VB.NET errors and shows in one window. Now VS 2008 separates this and shows javascript and HTML errors as warnings. But this is configurable feature.
Debugging .NET Framework Library Source Code:
we can debug the source .NET framework library methods.
Lets say If we want to debug the DataBind() method of DataGrid control we can place a debugging point over there and continue with debug the source code of DataBind() method.
In built Silverlight Library :
Earlier we used to install silverlight SDK separately, Now in VS 2008 it is inbuilt, with this we can create, debug and deploy the silverlight applications
Visual Studio LINQ Designer :
Already we know in VS 2005 we have inbuilt SQL Server IDE feature. by this we no need to use any other tools like SQL Server Query Analyzer and SQL Server Enterprise Manger. we have directly database explorer by this we can create connections to your database and we can view the tables and stored procedures in VS IDE itself. But now in VS 2008 it has View Designer window capability with LINQ-to-SQL.
Inbuilt C++ SDK :
Earlier It was so difficult to download and configure the C++ SDK Libraries and tools for developing windows based applications. Now it is inbuilt with VS 2008 and configurable.
Microsoft Popfly Support :
Microsoft Popfly explorer is an add-on to VS 2008, by this directly we can deploy or hosting the Silverlight applications and Marshup objects
Free Tools and Resources :
VS 2008 provides plenty of free tools and resources with the demo version tool kit.
We can use for Commercial :
Earlier we were spending lot of money to host our .NET applications for commercial use. Now we dont need to worry Now Microsoft is providing us to host our application free on Popfly for web pages and also visual studio express projects. | https://www.dotnetspider.com/resources/38982-New-Features-NET.aspx | CC-MAIN-2021-17 | en | refinedweb |
Biophysical Models¶
The Allen Cell Types Database contains biophysical models that characterize the firing behavior of neurons measured in slices through current injection by a somatic whole-cell patch clamp electrode. These models contain a set of 10 active conductances placed at the soma and use the reconstructed 3D morphologies of the modeled neurons. The biophysical modeling technical white paper contains details on the specific construction of these models and the optimization of the model parameters to match the experimentally-recorded firing behaviors.
The biophysical models are run with the NEURON simulation environment. The Allen SDK package contains libraries that assist in downloading and setting up the models available on the Allen Institute web site for users to run using NEURON. The examples and scripts provided run on Linux using the bash shell.
Prerequisites¶
You must have NEURON with the Python interpreter enabled and the Allen SDK installed.
The Allen Institute perisomatic biophysical models were generated using NEURON version v7.4.rel-1370. Instructions for compiling NEURON with the Python interpreter are available from the NEURON team under the heading Installation with Python as an alternative interpreter. The Allen SDK is compatible with Python version 2.7.9, included in the Anaconda 2.1.0 distribution.
Instructions for optional Docker installation are also available.
Note
Building and installing NEURON with the Python wrapper enabled is not always easy. This page targets users that have a background in NEURON usage and installation.
Downloading Biophysical Models¶
There are two ways to download files necessary to run a biophysical model. The first way is to visit and find cells that have biophysical models available for download. The electrophysiology details page for a cell has a neuronal model download link. Specifically:
- Click ‘More Options+’
- Check ‘Models -> Biophysical - perisomatic’ or ‘Biophysical - all active’
- Use the Filters, Cell Location and Cell Feature Filters to narrow your results.
- Click on a Cell Summary to view the Mouse Experiment Electrophysiology.
- Click the “download data” link to download the NWB stimulus and response file.
- Click “show model response” and select ‘Biophysical - perisomatic’ or ‘Biophysical - all active’.
- Scroll down and click the ‘Biophysical - perisomatic’ or ‘Biophysical - all active’ “download model” link.
This may be also be done programmatically. The neuronal model id can be found to the left of the corresponding ‘Biophysical - perisomatic’ or ‘Biophysical - all active’ “download model” link.
from allensdk.api.queries.biophysical_api import BiophysicalApi bp = BiophysicalApi() bp.cache_stimulus = True # change to False to not download the large stimulus NWB file neuronal_model_id = 472451419 # get this from the web site as above bp.cache_data(neuronal_model_id, working_directory='neuronal_model')
More help can be found in the online help for the Allen Cell Types Database web application.
Directory Structure¶
The structure of the directory created looks like this. It includes stimulus files, model parameters, morphology, cellular mechanisms and application configuration.
neuronal_model |-- manifest.json |-- 472451419_fit.json |-- Nr5a1-Cre_Ai14_IVSCC_-169248.04.02.01.nwb |-- Nr5a1-Cre_Ai14_IVSCC_-169248.04.02.01_403165543_m.swc |-- modfiles | |--CaDynamics.mod | |--Ca_HVA.mod | |--Ca_LVA.mod | |--Ih.mod | `--...etc. | |--x86_64 `---work
Running the Simulation (Linux shell prompt)¶
All of the sweeps available from the web site are included in manifest.json and will be run by default. This can take some time.
cd neuronal_model nrnivmodl ./modfiles # compile the model (only needs to be done once) python -m allensdk.model.biophysical.runner manifest.json # perisomatic models python -m allensdk.model.biophysical.runner manifest.json # legacy all-active models # new all-active models (axon replaced by a 60 micron long 1 micron diameter stub) python -m allensdk.model.biophysical.runner manifest.json --axon_type stub
Selecting a Specific Sweep¶
The sweeps are listed in manifest.json. You can remove all of the sweep numbers that you do not want run.
Simulation Main Loop¶
The top level script is in the
run()
method of the
allensdk.model.biophysical.runner
module. The implementation of the method is discussed here step-by-step:
First configure NEURON based on the configuration file, which was read in from the command line at the very bottom of the script.
# configure NEURON -- this will infer model type (perisomatic vs. all-active) utils = Utils.create_utils(description) h = utils.h
The next step is to get the path of the morphology file and pass it to NEURON.
# configure model manifest = description.manifest morphology_path = description.manifest.get_path('MORPHOLOGY') utils.generate_morphology(morphology_path.encode('ascii', 'ignore')) utils.load_cell_parameters()
Then read the stimulus and recording configuration and configure NEURON
# configure stimulus and recording stimulus_path = description.manifest.get_path('stimulus_path') nwb_out_path = manifest.get_path("output") output = NwbDataSet(nwb_out_path) run_params = description.data['runs'][0] sweeps = run_params['sweeps'] junction_potential = description.data['fitting'][0]['junction_potential'] mV = 1.0e-3
Loop through the stimulus sweeps and write the output.
# run sweeps for sweep in sweeps: utils.setup_iclamp(stimulus_path, sweep=sweep) vec = utils.record_values() h.finitialize() h.run() # write to an NWB File output_data = (numpy.array(vec['v']) - junction_potential) * mV output.set_sweep(sweep, None, output_data)
Customization¶
Much of the code in the perisomatic simulation is not core Allen SDK code.
The runner.py script largely reads the configuration file and calls into
methods in the
Utils class.
Utils is a subclass of the
HocUtils
class, which provides access to objects in the NEURON package.
The various methods called by the runner.script are implemented here, including:
generate_morphology(),
load_cell_parameters(),
setup_iclamp(),
read_stimulus()
and
record_values().
from allensdk.model.biophys_sim.neuron.hoc_utils import HocUtils ..... class Utils(HocUtils): ..... def __init__(self, description): super(Utils, self).__init__(description) ....
To create a biophysical model using your own software or data, simply model your directory structure on one of the downloaded simulations or one of the examples below. Add your own runner.py and utils.py module to the simulation directory.
Compile the .mod files using NEURON’s nrnivmodl command (Linux shell):
nrnivmodl modfiles
Then call your runner script directly, passing in the manifest file to your script:
python runner.py manifest.json
The output from your simulation and any intermediate files will go in the work directory.
Examples¶
A
minimal example (simple_example.tgz)
and a
multicell example (multicell_example.tgz)
are available to download as a starting point for your own projects.
Each example provides its own utils.py file along with a main script (Linux shell) and supporting configuration files.
simple_example.tgz:
tar xvzf simple_example.tgz cd simple nrnivmodl modfiles python simple.py
multicell_example.tgz:
tar xvzf multicell_example.tgz cd multicell nrnivmodl modfiles python multi.py python multicell_diff.py
Exporting Output to Text Format or Image¶
This is an example of using the AllenSDK to save a response voltage to other formats.
from allensdk.core.dat_utilities import \ DatUtilities from allensdk.core.nwb_data_set import \ NwbDataSet import numpy as np import matplotlib matplotlib.use("Agg") import matplotlib.pyplot as plt nwb_file = '313862020.nwb' sweep_number = 52 dat_file = '313862020_%d.dat' % (sweep_number) nwb = NwbDataSet(nwb_file) sweep = nwb.get_sweep(sweep_number) # read v and t as numpy arrays v = sweep['response'] dt = 1.0e3 / sweep['sampling_rate'] num_samples = len(v) t = np.arange(num_samples) * dt # save as text file data = np.transpose(np.vstack((t, v))) with open (dat_file, "w") as f: np.savetxt(f, data) # save image using matplotlib fig, ax = plt.subplots(nrows=1, ncols=1) ax.plot(t, v) ax.set_title("Sweep %s" % (sweep_number)) fig.savefig('out.png')
Model Description Files¶
Basic Structure¶
A model description file is simply a JSON object with several sections at the top level and an array of JSON objects within each section.{ "cell_section": [ { "name": "cell 1", "shape": "pyramidal" "position": [ 0.1, 0.2, 0.3 ] }, { "name": "cell 2", "shape": "glial", "position": [ 0.1, 0.2, 0.3 ] } ], "extra": [ { "what": "wood", "who": "woodchuck" } ] }
Even if a section contains no objects or only one object the array brackets must be present.
Objects Within Sections¶
While no restrictions are enforced on what kinds of objects are stored in a section, some rules of thumb make the file easier to work with.
- All objects within a section are the same structure. Common operations on a section are to display it as a table, iterate over it, load from or write to a spreadsheet or csv file. These operations are all easier if the section is fairly homogeneous.
- Objects are not deeply nested. While some shallow nesting is often useful, deep nesting such as a tree structure is not recommended. It makes interoperability with other tools and data formats more difficult.
- Arrays are allowed, though they should not be deeply nested either.
- Object member values should be literals. Do not use pickled classes, for example.
Split Description Files by Section¶
A model description can be split into multiple files by putting some sections in one file and other sections into another file. This can be useful if you want to put a topology of cells and connections in one file and experimental conditions and stimulus in another file. The resulting structure in memory will behave the same way as if the files were not split. This allows a small experiment to be described in a single file and large experiments to be more modular.
cells.json:{ "cell_section": [ { "name": "cell 1", "shape": "pyramidal" "position": [ 0.1, 0.2, 0.3 ] }, { "name": "cell 2", "shape": "glial", "position": [ 0.1, 0.2, 0.3 ] } ] }
extras.json:{ "extra": [ { "what": "wood", "who": "woodchuck" } ] }
Split Sections Between Description Files¶
If two description files containing the same sections are combined, the resulting description will contain objects from both files. This feature allows sub-networks to be described in separate files. The sub-networks can then be composed into a larger network with an additional description of the interconnections.
network1.json:/* A self-contained sub-network */ { "cells": [ { "name": "cell1" }, { "name": "cell2" } ], /* intra-network connections /* "connections": [ { "source": "cell1", "target" : "cell2" } ] }
network2.json:/* Another self-contained sub-network */ { "cells": [ { "name": "cell3" }, { "name": "cell4" } ], "connections": [ { "source": "cell3", "target" : "cell4" } ] }
interconnect.json:{ // the additional connections needed to // connect the network1 and network2 // into a ring topology. "connections": [ { "source": "cell2", "target": "cell3" }, { "source": "cell4", "target": "cell1" } ] }
Resource Manifest¶
JSON has many advantages. It is widely supported, readable and easy to parse and edit. As data sets get larger or specialized those advantages diminish. Large or complex models and experiments generally need more than a single model description file to completely describe an experiment. A manifest file is a way to describe all of the resources needed within the Allen SDK description format itself.
The manifest section is named “manifest” by default, though it is configurable. The objects in the manifest section each specify a directory, file, or file pattern. Files and directories may be organized in a parent-child relationship.
A Simple Manifest¶
This is a simple manifest file that specifies the BASEDIR directory using “.”, meaning the current directory:
{ "manifest": [ { "key": "BASEDIR", "type": "dir", "spec": "." } ] } }
Parent Child Relationships¶
Adding the optional “parent_key” member to a manifest object creates a parent-child relation. In this case WORKDIR will be found in “./work”:
{ "manifest": [ { "key": "BASEDIR", "type": "dir", "spec": "." }, { "key": "WORKDIR", "type": "dir", "spec": "/work", "parent_key": "BASEDIR" } ] } }
File Spec Patterns¶
Files can be specified using the type “file” instead of “dir”. If a sequence of many files is needed, the spec may contain patterns to indicate where the sequence number (%d) or string (%s) will be interpolated:
{ "manifest": [ { "key": "BASEDIR", "type": "dir", "spec": "." }, { "key": "voltage_out_cell_path", "type": "file", "spec": "v_out-cell-%d.dat", "parent_key": "BASEDIR" } ] } }
Split Manifest Files¶
Manifest files can be split like any description file. This allows the specification of a general directory structure in a shared file and specific files in a separate configuration (i.e. stimulus and working directory)
Comment Lines¶ | https://allensdk.readthedocs.io/en/latest/biophysical_models.html | CC-MAIN-2021-17 | en | refinedweb |
Basic framework for training models with PyTorch
Project description
boiler-pytorch
Basic framework for training stuff in PyTorch. It's quite tailored to projects
I've been working on lately, so it's meant for personal use. Its sole purpose is
to do away with
boilrplate code, and having it here makes it easier to
share it across projects.
Install
pip install boilr
Usage example/template
There's a usage example that can be useful as template. It's a basic VAE for MNIST quickly hacked together. The example files are:
example.py
example_evaluate.py
experiments/mnist_experiment/data.py
experiments/mnist_experiment/experiment_manager.py
models/mnist_vae.py
Install requirements and run the example:
pip install -r requirements.txt CUDA_VISIBLE_DEVICES=0 python example.py
For evaluation:
CUDA_VISIBLE_DEVICES=0 python example_evaluate.py --ll --ll-samples 100 --load $RUN_NAME
using the name of the folder in
output/ generated from running the example.
Quick reference
Built-in functionalities
The following functionalities are available out-of-the-box:
- Easy logging of metrics to tensorboard and to a pickle file. Metrics are collected at every training step, smoothed, and logged/saved at a specified frequency. The amount of smoothing is also customizable.
- Summaries of the metrics are automatically printed after each training and testing phase. This can be easily customized.
- Training speed, gradient norm (global and per-parameter), and L2 norm of the model parameters are all automatically logged.
- It's easy to save images from testing, in a dedicated folder.
- Gradient clipping (by global norm), controllable through a command-line argument.
- Automatic model checkpointing, with command-line argument to control the maximum number of recent checkpoints to be kept.
- Command-line argument to resume training from checkpoint, and everything is taken care of.
- Progress bar for training and testing, using
tqdm. Can be switched off.
- Data-dependent initialization (command-line argument).
- Reproducibility: set random seed across all devices and Python libraries.
- A suite of utility classes and methods in the packages
boilr.nnand
boilr.utils(most of them for internal use). In particular
boilr.nn.modulesand
boilr.utils.vizmight be more generally useful.
- A long list of command-line arguments to control some of the behaviour above. Some arguments are not directly used, but it's convenient to have them already defined: e.g. if a custom
DataLoaderis necessary, the batch size is easily accessible with
args.batch_size; and when creating the optimizer, the learning rate is
args.lr.
- See
boilr.optionsfor package-wide options. Usually it's not necessary to change them, but they give some more flexibility.
Command-line arguments
There are built-in command-line arguments with default values. These defaults can be easily
overridden programmatically when making the experiment class that subclasses
boilr's.
The built-in arguments are the following:
batch-size: training batch size (default: None)
test-batch-size: test batch size (default: None)
lr: learning rate (default: None)
max-grad-norm: maximum global norm of the gradient. It is clipped if larger. If None, no clipping is performed. (default: None)
seed: random seed (default: 54321)
tr-log-every: log training metrics every this number of training steps (default: 1000)
ts-log-every: log test metrics every this number of training steps. It must be a multiple of
--tr-log-every(default: 1000)
ts-img-every: save test images every this number of training steps. It must be a multiple of
--ts-log-every(default: same as
--ts-log-every)
checkpoint-every: save model checkpoint every this number of training steps (default: 1000)
keep-checkpoint-max: keep at most this number of most recent model checkpoints (default: 3)
max-steps: max number of training steps (default: 1e10)
max-epochs: max number of training epochs (default: 1e7)
nocuda: do not use cuda (default: False)
descr: additional description for experiment name
dry-run: do not save anything to disk (default: False)
resume: load the run with this name and resume training
Additionally, for
VAEExperimentManager, the following arguments are available:
ll-every: evaluate log likelihood (with the importance-weighted bound) every this number of training steps (default: 50000)
ll-samples: number of importance-weighted samples to evaluate log likelihood (default: 100)
Getting started
- subclass a base dataset manager class;
- subclass a base model class;
- subclass a base experiment manager class (the model class is used in here);
- make a short script that creates the experiment object, uses it to create a
boilr.Trainer, and runs the trainer;
- optionally, subclass the base evaluator to set up an "offline" evaluation pipeline.
See below for more details.
Dataset manager class (1)
The class
boilr.data.BaseDatasetManager must be subclassed. The subclass must implement
the method
_make_datasets which should return a tuple
(train, test) with the training
and test sets as PyTorch
Datasets.
A basic implementation of
_make_dataloaders is already provided, but can be overridden to make
custom data loaders.
Model class (2)
One of the model classes must be subclassed to inherit core methods in the base implementation
boilr.models.BaseModel.
These models also automatically subclass
torch.nn.Module (so it must implement
forward).
In addition,
boilr.models.BaseGenerativeModel (subclassing
BaseModel) defines a method
sample_prior that must be implemented by subclasses.
Experiment manager class (3)
One of the base experiment classes in
boilr.experiments must be subclassed. The subclass must implement:
_make_datamanagerto create the dataset manager, which should subclass
boilr.data.BaseDatasetManager;
_make_modelto create the model, which should subclass
boilr.models.BaseModel;
_make_optimizerto create the optimizer, which should subclass
torch.optim.optimizer.Optimizer;
forward_passto perform a simple single-pass model evaluation and returns losses and metrics;
test_procedureto evaluate the model on the test set (usually heavily based on the
forward_passmethod).
Typically should be overridden:
_define_args_defaults,
_add_args, and
_check_args(or a subset of these) to manage parsing of command-line arguments;
_make_run_descriptionwhich returns a string description of the run, used for output folders;
save_imagesto save output images (e.g. reconstructions and samples in VAEs).
May be overridden for additional control:
post_backward_callbackis called by the
Trainerafter the backward pass but before the optimization step;
get_metrics_dicttranslates a dictionary of results to a dictionary of metrics to be logged (by default this simply copies over the keys);
train_log_strand
test_log_strreturn log strings for test and training metrics.
Note: The class
VAEExperimentManager implements default
test_procedure and
save_images
methods for variational inference with VAEs.
Example training script (4)
from boilr import Trainer from my_experiment import MyExperimentClass if __name__ == "__main__": experiment = MyExperimentClass() trainer = Trainer(experiment) trainer.run()
Offline evaluator class (5)
If offline evaluation is necessary,
boilr.eval.BaseOfflineEvaluator can be subclassed by implementing:
runto run the evaluation;
- as above,
_define_args_defaults,
_add_args, and
_check_args(or a subset of these) to manage parsing of command-line arguments.
The method
run can be executed by simply calling the evaluator object.
See
example_evaluate.py.
Notes
- It also works without
tensorboard, but it won't save tensorboard logs.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/boilr/ | CC-MAIN-2021-17 | en | refinedweb |
A feature pool is based on a vector layer and caches features. More...
#include <qgsfeaturepool.h>
A feature pool is based on a vector layer and caches features.
Definition at line 37 of file qgsfeaturepool.h.
Creates a new feature pool for layer.
Definition at line 31 of file qgsfeaturepool.cpp.
Returns the complete set of feature ids in this pool.
Note that this concerns the features governed by this pool, which are not necessarily all cached.
Definition at line 97 of file qgsfeaturepool.cpp.
The coordinate reference system of this layer.
Definition at line 171 of file qgsfeaturepool.cpp.
Removes a feature from this pool.
Implementations will remove the feature from the layer or from the data provider.
Implemented in QgsVectorLayerFeaturePool, and QgsVectorDataProviderFeaturePool.
The geometry type of this layer.
Definition at line 177 of file qgsfeaturepool.cpp.
Retrieves the feature with the specified id into feature.
It will be retrieved from the cache or from the underlying feature source if unavailable. If the feature is neither available from the cache nor from the source it will return
false.
Definition at line 41 of file qgsfeaturepool.cpp.
Gets features for the provided request.
No features will be fetched from the cache and the request is sent directly to the underlying feature source. Results of the request are cached in the pool and the ids of all the features are returned. This is used to warm the cache for a particular area of interest (bounding box) or other set of features. This will get a new feature source from the source vector layer. This needs to be called from the main thread. If feedback is specified, the call may return if the feedback is canceled.
Definition at line 73 of file qgsfeaturepool.cpp.
Gets all feature ids in the bounding box rect.
It will use a spatial index to determine the ids.
Definition at line 102 of file qgsfeaturepool.cpp.
Inserts a feature into the cache and the spatial index.
To be used by implementations of
addFeature.
Definition at line 121 of file qgsfeaturepool.cpp.
Checks if the feature fid is cached.
Definition at line 160 of file qgsfeaturepool.cpp.
Gets a pointer to the underlying layer.
May return a
\c nullptr if the layer has been deleted. This must only be called from the main thread.
Definition at line 109 of file qgsfeaturepool.cpp.
The layer id of the layer.
Definition at line 182 of file qgsfeaturepool.cpp.
Returns the name of the layer.
Should be preferred over layer().name() because it can directly be run on the background thread.
Definition at line 166 of file qgsfeaturepool.cpp.
Gets a QPointer to the underlying layer.
Note that access to any methods of the object will need to be done on the main thread and the pointer will need to be checked for validity before usage.
Definition at line 116 of file qgsfeaturepool.cpp.
Changes a feature in the cache and the spatial index.
To be used by implementations of
updateFeature.
Definition at line 131 of file qgsfeaturepool.cpp.
Removes a feature from the cache and the spatial index.
To be used by implementations of
deleteFeature.
Definition at line 142 of file qgsfeaturepool.cpp.
Sets all the feature ids governed by this feature pool.
Should be called by subclasses constructor and whenever they insert a new feature.
Definition at line 155 of file qgsfeaturepool.cpp.
Updates a feature in this pool.
Implementations will update the feature on the layer or on the data provider.
Implemented in QgsVectorLayerFeaturePool, and QgsVectorDataProviderFeaturePool. | https://qgis.org/api/classQgsFeaturePool.html | CC-MAIN-2021-17 | en | refinedweb |
semanage_bool_set_active - Man Page
update an existing SELinux boolean in the currently active policy
Synopsis
#include <semanage/booleans_active.h>
extern int semanage_bool_set_active (
semanage_handle_t *handle,
const semanage_bool_key_t *key,
const semanage_bool_t *data);
Description
-) ).). | https://www.mankier.com/3/semanage_bool_set_active | CC-MAIN-2021-17 | en | refinedweb |
1.0 Introduction
Data is an important part of a program. In fact, programs are written so that data can be captured, processed, stored and presented to the user. The success of a program depends on how well data has been organized and used. In this post, we will be looking at data types and expressions in programming in C language.
2.0 Data Type
All data has an underlying type. The number of persons in a room is an integer. It cannot be a fraction. On the other hand, the average rainfall received by a city in a year is a floating point number, because there are always some digits after the decimal point. Without further ado, we can list the basic data types in C language, which are,
2.1 char
The basic type char is for storing characters. A char is basically an integer; it stores the integer code for a character. A character is stored in a byte. This is true for characters in the ASCII character set. However, with special libraries, it is possible to provide for all the languages supported by Unicode with multi-byte characters. But, to start with, we will assume the default 1 byte character and the ASCII character set. By default, a variable of type char may have a value 0 to 255 or -128 to 127 depending upon the processor architecture. The first 32 codes, 0-31, in ASCII are control characters and are used for signalling to devices like printers and video terminals. For example, carriage return (13) and line feed (10) cause the cursor to return to the first column and the next line on a video terminal. And, code zero is the null character, used to mark the end of a text string.The printable characters in ASCII are from 32 through 126. ASCII code 127 is for delete.
The qualifier signed or unsigned can be added to char. An unsigned char variable can store a value in the range 0 through 255. A signed char variable can store values between -128 and 127 in 2s complement machines.
2.2 int
The basic type int is for storing integers. An int is normally 32 bits long. The qualifiers short and long can be applied to int. A short int is 16 bits long and a long int is 64 bits long. Normally "int" is omitted after short and long. That is "short x" means "short int x" and "long y" means "long int y". A signed or unsigned qualifier may be used with int. The unsigned qualifier is more common and it is often used for bit masks.
2.3 float and double
The float, double and long double are single, double and extended precision floating point data types respectively. The float, double and long double occupy 4, 8 and 16 bytes of memory respectively. Of the three, double is mostly used, as it provides a balance between accuracy and economy of storage space.
3.0 Identifiers
Identifier names can be constructed with uppercase and lowercase alphabets, underscore and digits. The first character must be an alphabet. As a convention, symbolic constant names are made of uppercase characters, whereas variable names are made up of lower case alphabets. In both cases, digits and underscore may be used.
4.0 typedef
typedef defines a new name for an existing type. This helps in defining meaningful names for involved declarations. For example,
typedef unsigned char __uint8_t; typedef unsigned short int __uint16_t; typedef unsigned int __uint32_t; typedef unsigned long int __uint64_t; typedef __uint8_t uint8_t; typedef __uint16_t uint16_t; typedef __uint32_t uint32_t; typedef __uint64_t uint64_t;
which defines easy to remember 8, 16, 32 and 64-bit unsigned integers. However, it is not necessary to include these typedefs in your C program. You can include the file stdint.h, and the types uint8_t, uint16_t, etc. become available automatically. For example, consider the following program.
// try2.c #include <stdio.h> #include <string.h> #include <stdint.h> int main () { printf ("__WORDSIZE = %d bits\n", __WORDSIZE); printf ("sizeof (int) = %d bytes, sizeof (int *) = %d bytes\n", (int) sizeof (int), (int) sizeof (int *)); printf ("sizeof (uint8_t) = %d byte\n", (int) sizeof (uint8_t)); printf ("sizeof (uint16_t) = %d bytes\n", (int) sizeof (uint16_t)); printf ("sizeof (uint32_t) = %d bytes\n", (int) sizeof (uint32_t)); printf ("sizeof (uint64_t) = %d bytes\n", (int) sizeof (uint64_t)); }
After compiling and running the above program, we get following results.
$ gcc try2.c -o try2 $ ./try2 __WORDSIZE = 64 bits sizeof (int) = 4 bytes, sizeof (int *) = 8 bytes sizeof (uint8_t) = 1 byte sizeof (uint16_t) = 2 bytes sizeof (uint32_t) = 4 bytes sizeof (uint64_t) = 8 bytes
5.0 Enumeration
An enumeration is a series of some constants. The default value of the first constant is zero. The default value of a constant is the value of predecessor plus 1. The default value can be overridden my explicit assignment. For example, consider the enumeration,
enum Day {Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday};
here, the value of Sunday is zero and that of Saturday is 6. For example,
#include <stdio.h> #include <string.h> enum Day {Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday}; int main () { enum Day weekday = Saturday; printf ("weekday = %d\n", weekday); }
The above program prints weekday as 6. If we had defined the enumeration as
enum Day {Sunday = 1, Monday, Tuesday, Wednesday = 7, Thursday, Friday, Saturday};
The above program would have printed week day as 10 (for Saturday).
6.0 Boolean type
C does not have a boolean data type. However the integer type has provided for the boolean type. The value zero (0) is considered false and anything non-zero is true. Since 1 is non-zero, we can say 1 is true. We can provide the boolean type with an enumeration, as in the example below.
#include <stdio.h> #include <string.h> typedef enum {False, True} Boolean; int main () { Boolean over = False; int i = 0; printf ("over = %d\n", over); while (!over) { printf ("i = %d\n", i); i++; if (i > 2) over = True; } printf ("over = %d\n", over); }
And, after compilation and running the program, we get the following results.
$ gcc try4.c -o try $ ./try over = 0 i = 0 i = 1 i = 2 over = 1
For using the boolean type, it is not really necessary to define the typedef enum Boolean. With the C99 standard, C provides the type bool which can have the value false or true. To get this functionality, one has to include the file stdbool.h. Using the stdbool.h file,
#include <stdio.h> #include <string.h> #include <stdbool.h> int main () { bool over = false; int i = 0; printf ("over = %d\n", over); while (!over) { printf ("i = %d\n", i); i++; if (i > 2) over = true; } printf ("over = %d\n", over); }
And, the result is the same as before.
$ gcc try4.c -o try $ ./try over = 0 i = 0 i = 1 i = 2 over = 1
7.0 Declarations
All variables need to be declared before use. We have seen declaration such as,
int i; char buffer [20]; double radius;
There are two terms, declarations and definitions for variables. A definition for a variable introduces the name of the variable and sets aside storage for the variable. For example,
int num;
is a definition of variable num, as it introduces the name, num, and storage is set aside for it. But, there are cases, where the variable is defined in some other file and we just need its name for using it. For example,
extern int counter;
Here, the declaration, extern int counter, says that counter is an integer and is defined elsewhere. So no storage is set aside for it, only the name is introduced in this declaration. The name, counter, can now be used in expressions.
7.1 const qualifier
The const qualifier in a declaration specifies that the value of the variable would not be changed in that scope. For example,
const double pi = 3.14159265358979323844; const int scores [] = {34, 67, 98, 23};
If const is applied before an array, it means that the values of the array elements would not be changed.
8.0 Storage class
There are two properties associated with variable names, scope and lifetime. The scope of a variable name is the portion of the program in which the variable name is visible and can be used. The lifetime of a variable is the time the variable is "live"; the time during which a variable has valid memory and retains its value. Collectively, the scope and lifetime define the storage class of a variable. There are four storage classes: automatic, external, static and register.
8.1 Automatic
Automatic variables are defined inside functions. The storage for these variables is allocated on the stack. The variables can be defined using the keyword auto. However, auto is the default and is generally not mentioned. The scope and lifetime for automatic variables is the point of definition to the end of the block. Automatic variables are not initialized.
8.2 External
As opposed to automatic variables which are internal to functions, there are variables that are external to functions. These are global variables and are a part of the data segment. The scope of these variables is the point of definition to the end of program. However, if there is an automatic variable with the same name, the automatic variable gets precedence and the scope of global variable is obscured by the automatic variable bearing the same name. The lifetime of the external variables is the lifetime of the program. The external variables are initialized to zero at the time of definition. The difference between definition and declaration is most significant for external variables. An external variable is defined in one file. It is declared with the extern qualifier in other files of the program. Once declared, it can be used in expressions.
8.3 Static
We have seen that automatic variables come and go in functions. Once a function exits, the value of the automatic variables is lost. However, if a variable is declared with the static keyword, it retains its value between different function invocations. So their lifetime is that of the program. If an external variable or a function is declared static, it is only visible in the file of the definition. It can not be accessed from other files.
8.4 Register
Register variables are automatic variables. By the putting the keyword "register" in front of a variable, it is suggested to the compiler that the variable would be heavily used in calculations, and, so the compiler could place the variable in a register. The compiler is free to ignore the suggestion and place the variable where it deems fit. However, it is not possible to take the address of a variable declared with the register keyword and this true even when the variable is not stored in a register.
9.0 Operators
9.1 Arithmetic Operators
C has the four binary arithmetic operators, +, -, * and / for addition, subtraction, multiplication and division respectively. The precedence of * and / is higher than that of + and -. Binary arithmetic operators associate left to right. Then, there is the modulus operator, %, which gives the remainder of division of two integers. If x and y are two integers, x / y gives the integer quotient, in which the fraction has been truncated and x % y gives the remainder. If x is divisible by y, the remainder is zero. The modulus operator is not defined for float or double operands.
C also has the unary + and -. The precedence of unary + and - is higher than that of binary * and /. Unary + and - operators associate right to left.
9.2 Relational Operators
The relational operators are <, <=, >, >=, == and !=. The first four of these, that is, <, <=, > and >= have the same precedence. The other two, == and != have a lower precedence. The relational operators associate left to right and have a lower precedence as compared to binary arithmetic operators. A word of caution about the equality operator, ==. It is a common error to write the equality operator as =, which is obviously wrong as = is the assignment operator. This error is common and is difficult to debug.
9.3 Logical Operators
There are two logical operators, && and ||. These have a precedence less than that of relational operators. Of the two, the operator && has a higher precedence. The value of an expression involving relational and/or logical operators is zero if it is false and 1 if it is true. Expressions involving logical operators are evaluated left to right and the evaluation stops as soon as the truth or falsehood value of the expression is established. For example, consider the statement,
while (x < a || y < b || z < c) ...
In above example, if (x < a) evaluates as true, the other two conditions are not checked and the loop continues. If (x < a) evaluates as false, then (y < b) is checked. If (y < b) evaluates as true, the third condition is not checked and the loop continues. If both (x < a) and (y < b) evaluate as false then the third condition, (z < c) is checked and if it evaluates as true, the loop continues. If it evaluates as false, the loop terminates.
9.4 Unary negation Operator
The unary negation operator ! converts an operand with value non-zero to zero and zero to 1. This is quite useful in writing condition for while loops. For example,
int over = 0; while (over == 0) ...
can be written as,
int over = 0; while (!over) ...
which is more intuitive and sounds better.
10.0 Type conversion
In an expression, there may be implicit type conversions. The basic principle is that the "lower" type is promoted to the "higher" type and the expression is evaluated. For example, if there is a mix of integer and float, the integer is converted to float for evaluation. Or, if there is a mix of float and double, float is converted to double for evaluation. char types are treated as small integers; char is freely mixed with integers in expressions. As per the language specification, printable characters are guaranteed to be positive.
An important type conversion is type cast, that is, we force a variable to a particular type. For example, the function sqrt (double) expects a double argument and returns a double and we wish to find the square root of an integer. While passing the integer to the sqrt function, we type cast it to a double.
#include <stdio.h> #include <string.h> #include <math.h> int main () { int num = 99; double ret = sqrt ((double) num); printf ("square root = %f\n", ret); }
We can compile and run this program.
$ gcc try2.c -o try2 -lm $ ./try2 square root = 9.949874
In the above example, it is as if num is assigned to a variable of type double, which is passed to the sqrt function. The value of variable num is not affected.
11.0 Increment and decrement operators
C has increment (++) and decrement (--) operators. The increment operator adds 1 to its operand, while the decrement operator subtracts 1. The operator can be used as a prefix, that is, before the operand and, postfix, after the operand. For example,
// add 1 to i (prefix) ++i; // add 1 to i (postfix) i++; // subtract 1 from i (prefix) --i; // subtract 1 from i (postfix) i--;
So, what is the difference between prefix and postfix? In case of prefix, the value is incremented or decremented before use. In case of postfix, the value of the operand is incremented or decremented after use. If these operators are used in standalone mode, as in examples above, it does not matter, whether prefix or postfix mode is used. The effect is the same in both modes. But, consider the case,
k = a [j++];
First a [j] is assigned to k, and then, the index j is incremented. As another example, consider,
i = *++ptr;
which increments the pointer ptr and, then, the value pointed by ptr is assigned to i. But, if we use postfix increment,
i = *ptr**;
the value pointed by ptr is first assigned to i and, then, ptr is incremented.
12.0 Assignment operators
The assignment operator evaluates the expression on the right and the value is stored in the variable on the left. The left side of assignment operator must be a variable. Quite often, we find assignment statements such as,
i = i + 10;
In C, this can be written as,
i += 10;
which is more efficient in addition to being compact. In C,
var op= expr;is a shorthand for
var = var op (expr);
The op can be any one of the binary arithmetic operator, +, -, *, /, %, and, also, any one of the binary bitwise operator, <<, >>, &, | and ^.
It is important to note the parentheses around expr. So
x *= y + 10;
means
x = x * (y + 10);
and, not,
x = x * y + 10;
13.0 Bitwise operators
C has following binary bitwise operators: &, for bitwise AND, |, for bitwise inclusive OR, ^, for bitwise exclusive OR, <<, for left shift and >>, for right shift. Also there is the unary ~ operator for 1s complement. These bitwise operators can be applied to all integer types, char, signed short, unsigned short, signed int, unsigned int, etc. The bitwise operators cannot be applied to float, double and long double types. We should be careful in using signed integers for bitwise operations as right shift operation fills in sign bits on the left, whereas the expectation might have been that of zero. So, by default, unsigned integers should be used for bitwise operations and signed integers should be used only when they are really required.
In C programming,It is best to use bitwise operators with unsigned integers because, if you use signed operands for bitwise operations, the sign bit can bring in unexpected results. Also, bit masks are hardly a signed quantity. we often encounter flags, which are of type int and are bit masks, which means the individual bits mean some setting value. For example, consider the open system call to create and open a new a file.
int open (const char *pathname, int flags, mode_t mode);
The third parameter, mode, is for file permissions of the newly created file. The bit mask for some the permissions are,
We can set bits in the operand with the | operator. To reset bits, we use do an & operation of the operand with a bit mask having the relevant bits set as 0 and the rest as 1. For example,
mode_t mode; mode = 0; // Set read, write and execute permissions for user mode |= S_IRWXU; // Set read, write and execute permissions for group mode |= S_IRWXG; // Set read, write and execute permissions for others mode |= S_IRWXO; // remove read, write and execute permissions for others mode &= ~S_IRWXO; // Check whether read, write and execute bits are set for user if ((mode & S_IRWXU) == S_IRWXU) ....
Note the expression to check whether read, write and execute bits are set for user. A common error is to check whether (mode & S_IRWXU) is true. The correct expression is to first find (mode & S_IRWXU) and then check whether it is equal to S_IRWXU.
14.0 Conditional expressions
Consider the if statement,
if (a >= 0) x = a; else x = -a;
The if statement can be replaced with a conditional expression using the ternary operator, ?:
x = (a >= 0) ? a : -a;
The ternary operator combines three expressions
expr1 ? expr2 : expr3
First expr1 is evaluated. If it evaluates true (non-zero), expr2 is evaluated. Otherwise, expr3 is evaluated. If expr1 is true, expr2 is the value of the whole expression, otherwise, expr3 is the value of the whole expression.
15.0 Precedence and associativity table
The following table gives the precedence and associativity rules for operators in C language. | https://www.softprayog.in/programming/c-programming-tutorial-data-types-and-expressions | CC-MAIN-2021-17 | en | refinedweb |
Master Development and Supply Agreement - Apple Inc. and GTAT Corp.
APPLE INC.
MASTER DEVELOPMENT AND SUPPLY AGREEMENT
This Master Development and Supply Agreement #C56-13-02947 (the "Agreement") is entered into by and among Apple Inc., a California corporation having its principal place of business at 1 Infinite Loop, Cupertino, California 95014, United States ("Apple") and GTAT Corporation, having its principal place of business at 243 Daniel Webster Highway, Merrimack, NH 03054 ("GTAT"), effective as of October 31, 2013 (the "Effective Date").
1. Scope. This Agreement relates to goods that GTAT will develop, manufacture, sell and deliver to Authorized Purchasers (as defined below) for use in connection with Apple's products (collectively, the "Goods"). The parties may enter into statements of work (each, a "Statement of Work" or "SOW") in the future to address additional details related to specific Goods.
2. Forecast. Apple will periodically provide written forecasts indicating Apple's projected demand for each Good (each such forecast, a "Forecast"). GTAT will accept each such Forecast upon receipt provided it is consistent with the applicable Flexibility Schedule, if any, in an SOW. GTAT will timely commence the manufacture of Goods in order to deliver the Goods by the dates indicated in each Forecast. "Flexibility Schedule" means a schedule that sets forth the maximum percentage increase in units forecasted or ordered, based on when notice of such increase is given.
3. Pricing. Apple and GTAT will mutually agree on pricing for Goods. In addition to any agreed upon prices, the per unit price for a Good will not exceed [***]of the [***]GTAT offers to any other customer for similar Goods, net of rebates, discounts and other payments, and regardless of volume.
4. Purchase Orders.
4.1. GTAT will accept and timely fulfill all Purchase Orders that Apple or any entity Apple authorizes to procure Goods under this Agreement (Apple and each of the foregoing entities, an "Authorized Purchaser") issues by the delivery date requested in such Purchase Order so long as the number of Goods indicated does not exceed the quantity specified in the applicable Forecast with respect to the relevant delivery period. "Purchase Order" means an Authorized Purchaser's written or electronically transmitted instruction to GTAT to deliver particular Goods pursuant to applicable delivery or performance dates and locations.
4.2. Authorized Purchasers may, [***], (i) cancel any Purchase Order, or any portion thereof; or (ii) reschedule the shipment date of undelivered Goods and/or redirect shipments of Goods to alternate locations.
4.3. Unless mutually agreed in writing otherwise, all Purchase Orders will be governed by the terms and conditions of this Agreement and any applicable SOW. As between Apple, its Related Entities and GTAT, any different or additional terms in any proposal, acknowledgement form or any other document will be of no force or effect and will not become part of the agreement between the parties. GTAT will not enter into any agreement with any Authorized Purchaser in connection with the Goods on terms less favorable to such Authorized Purchaser than those in this Agreement. Further, if GTAT or an Authorized Purchaser seeks, but fails within 90 days, to enter into such an agreement, then GTAT will promptly notify Apple of the circumstances.
4.4. GTAT may not invoice for Goods until after delivery. Payment terms are 45 days from the date an Authorized Purchaser receives an undisputed invoice. All amounts payable will be stated and paid in United States Dollars.
4.5. Authorized Purchasers are not obligated to purchase any Goods except pursuant to a Purchase Order it issues. Except for amounts due pursuant to a Purchase Order or SOW, Authorized Purchasers will not be responsible for any costs in connection with the supply or purchase of any Goods.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
1
5. Delivery. TIME IS OF THE ESSENCE as to the supply and delivery of Goods under this Agreement. If GTAT cannot meet the requirements of a Forecast or a Purchase Order, GTAT must promptly notify the Authorized Purchaser and propose a revised delivery date, and the Authorized Purchaser may, at its option, exercise any or all of the following options: (i) require GTAT to deliver the Goods using priority freight delivery (with all incremental freight charges at GTAT's expense); (ii) purchase substitute goods and hold GTAT accountable for the difference between the price of the Goods and the price paid for substitute goods, as well as all amounts paid for shipping, insurance, handling, and any taxes or duties; and (iii) seek and collect all other remedies provided at law, in equity and under this Agreement for failure to timely deliver Goods.
6. Supply Constraint. If GTAT's ability to manufacture and deliver any Goods in accordance with the then current Forecast is constrained for any reason, GTAT will promptly notify Apple of the supply constraint and GTAT's plan to resolve it, and will provide Apple daily updates regarding the steps taken to resolve the supply constraint. If the supply constraint is due to constrained resources (e.g., personnel, material, equipment, or third party components), GTAT will allocate the constrained resources to supply Goods to Authorized Purchasers before using such resources to supply goods to any other customer.
7. Acceptance. Goods delivered will be subject to Authorized Purchaser's inspection, test and rejection. Acceptance testing and inspection of Goods will be performed at the Authorized Purchaser's factory (or other applicable delivery destination specified in the Purchase Order) by GTAT's and Authorized Purchaser's personnel. Any Goods delivered (individual units or entire lots) that do not comply with the requirements of the applicable Specifications, Purchase Order or this Agreement may be rejected; provided, however, that any Goods expressly accepted upon completion of acceptance testing and inspection, but later discovered to be Defective Goods, will instead be handled according to Sections 8 and 9. Payment of invoices will not be deemed acceptance of Goods. "Specifications" means the most current version of all specifications and requirements that Apple provides in writing (which may include Apple notifying GTAT that such specifications and/or requirements are available for electronic download along with providing necessary access and download instructions), including any documents referenced in any bill of materials, SOW, and any relevant specifications, drawings, samples or other descriptions that GTAT provides and Apple approves in writing.
8. Warranties. GTAT represents and warrants that: (i) it has the right to enter into this Agreement and its performance of this Agreement will be free and clear of liens and encumbrances; (ii) entering into this Agreement will not cause GTAT to breach any other agreements to which it is a party; (iii) the Goods will be new and comprised of new materials when delivered; (iv) the Goods, or any portion thereof, do not infringe any patent, copyright, trademark, trade secret, or other proprietary right of a third party; and (v) for a period of 3 years (unless agreed otherwise in an SOW) the Goods will conform to all applicable Specifications, be free from any defects and be merchantable (as defined in CA Civil Code Section 1791.1). For all Goods that an Authorized Purchaser other than Apple purchases, GTAT agrees that Apple may enforce against GTAT any and all applicable warranties in the same manner as if Apple was the actual purchaser of the Goods. Further, Apple's rights and remedies under this Agreement with respect to the Goods (including all warranties) remain in full force and effect even if Apple sells, consigns or otherwise transfers the Goods to any of Apple's contract manufacturers, GTATs and other subcontractors.
9. Remedies for Defective Goods.
9.1. "Defective Goods" means Goods or individual[***]that are cut from a Good that (a) fail (or because of a known issue or defect Apple reasonably expects to fail) to conform with or operate according to the warranties set forth in Section 8, applicable Specifications, or a consumer's reasonable expectations; (b) fail to comply with any applicable law or regulation; or (c) create a risk of bodily injury or property damage. [***]means any object of [***] that GTAT cuts from a sapphire [***]from which [***]can be fabricated for use in an [***] means any object of [***]size that is cut from a [***]for use in an [***].
9.2. If, after acceptance, Apple or an Authorized Purchaser gives notice to GTAT that any Goods or individual [***]that are cut from a Good are Defective Goods, the parties will promptly convene a [***]to determine the[***]. Apple, the Authorized Purchaser, and GTAT may each designate[***]of their respective qualified personnel to participate in the[***]; however, each of Apple, the Authorized Purchaser, and GTAT may only designate [***]. GTAT will provide a sufficient number of qualified manufacturing,
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
2
materials and quality engineers to effectively conduct the necessary inspections and analyses and to document the [***] findings. The [***]will conduct [***] meetings until the[***] has been determined and documented to Apple's reasonable satisfaction. For each investigation, [***]will determine the primary cause of the Defective Goods' condition to be either: (i) GTAT's failure to manufacture, test or package the Goods in accordance with the Specifications, or any other fault (including negligence) of GTAT that caused the Goods to be Defective Goods ("GTAT Fault"), (ii) the fault (including negligence) of any entity other than GTAT and its Related Entities and their respective agents and representatives ("Third Party Fault"), or (iii) none of the above ("No Fault"). [***]determination will be based on a majority vote of its designated voting representatives. If the[***]fails to meet [***], diligently pursue its responsibilities herein or render a final determination within[***]of the original notice to GTAT, then it will be presumed that[***].
9.3. The consequence of the [***] in each instance will be as follows:
9.3.1. In case of a GTAT Fault finding, then Apple may in its sole discretion select one or more of the following remedies: (i) have GTAT compensate Apple for [***]and any resulting impact on any Apple product, including any recall of impacted Apple products; (ii) have GTAT accept the return of such Defective Goods pursuant to Section 10, below; (iii) have GTAT, or an Apple-designated third party, repair the Defective Goods and recover from GTAT all reasonable repair-related costs and expenses; (iv) procure similar goods in substitution and charge GTAT for any costs arising from the procurement and use of such substitutes in connection with Apple products; and (v) have GTAT provide a written issue or defect analysis report and a correction plan. GTAT must promptly and diligently implement corrective actions to resolve the root cause(s) of the condition giving rise to the GTAT Fault finding.
9.3.2. In case of a Third Party Fault finding, then GTAT will have no liability to Apple or the Authorized Purchaser on account of such Defective Goods.
9.3.3. In case of a No Fault finding, then Apple may specify, and GTAT will comply with, any corrective measures and other remedies, including without limitation any of the remedies available in Sections 9.3.1 or 10 herein, that Apple deems appropriate in its sole and reasonable discretion. If GTAT [***] GTAT may address its concerns using the[***] contained in this Agreement [***].
10. Return of Goods. At its expense, GTAT will accept the return of any Defective Goods that an Authorized Purchaser returns, other than Goods found by [***] to be Defective Goods due to Third Party Fault, and will thereupon (i) ship replacement Goods on the same day the Authorized Purchaser returns the Goods or (ii) upon request, credit the Authorized Purchaser the original purchase price of the Goods (or, in the case of individual [***], credit an amount equal to [***] that are returned to GTAT) /[***] multiplied by [***]. For example, if Apple paid[***] for a [***] from which [***] were cut but then[***] of those [***] were returned to GTAT as Defective Goods, then GTAT would issue a credit of [***] to Apple. For the avoidance of doubt, GTAT will, upon Apple's request pursuant to Section 9.3.3, comply with the obligations in this Section 10 with respect to Goods found by [***] to be Defective Goods due to No Fault.
11. Modifications. GTAT may not modify any Equipment, Specifications, manufacturing process or materials without first obtaining Apple's prior consent. Whenever Apple modifies the Specification for a Good, and notwithstanding any disagreement over the cost to implement such Specification modification, GTAT will immediately implement all such modifications and manufacture and timely deliver all such Goods pursuant to the applicable Forecast. The parties will make good faith efforts to promptly resolve any such disagreement. During any period of disagreement, GTAT will charge Authorized Purchasers, and Authorized Purchasers will pay: (i) the applicable price for the Good set forth in an SOW; (ii) in the absence of an SOW, the last price Authorized Purchasers paid for the applicable Good; or (iii) if Authorized Purchasers have not yet purchased the applicable Good, the last mutually agreed price for the applicable Good. When the parties resolve any disagreement over the amount to be charged for such Goods, they will reconcile any amounts an Authorized Purchaser or GTAT owes.
12. Service and Support. GTAT will provide the service and support services set forth on Attachment 3.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
3
13. Hubs. As agreed in any SOW, GTAT will store Goods in Hubs before their Forecast delivery date to support just-in-time delivery of the Goods. GTAT will: (i) bear all costs associated with warehousing Goods in Hubs; (ii) maintain a sufficient inventory of Goods in the Hubs to satisfy the requirements of the then current Forecast; (iii) ensure that the Authorized Purchaser or its carrier(s) may withdraw Goods from the Hubs as needed; (iv) fully insure, or require the Hub operator to fully insure, all Goods in transit to or stored at a Hub against all risk of loss or damage until such time as the Authorized Purchaser takes title to them; and (v) require that the Hub operator take all steps necessary to protect all Goods in a Hub consistent with good commercial warehousing practice. "Hub" means an Apple-approved facility located at or near Apple-specified manufacturing or distribution facilities, or other Apple-specified location.
14. Logistics. When an Authorized Purchaser is the "Importer of Record," GTAT will, at no charge, promptly forward to the Authorized Purchaser any documents the Authorized Purchaser may reasonably require to allow the Authorized Purchaser to clear the Goods through customs and/or obtain possession of the Goods at the port of entry. GTAT will use the freight carriers that Apple selects or approves. Apple is solely responsible for specifying any labeling of the Goods. GTAT may not print any of its own trade names, trademarks, or logos on the Goods without Apple's prior written consent. GTAT will package all Goods in accordance with applicable Specifications using the best commercial practices.
15. Terms of Sale. GTAT will deliver Goods DDU (INCOTERMS 2010) delivery location designated in the applicable SOW, or if not so designated, in the applicable Purchase Order) with title and risk of loss transferring from GTAT to the Authorized Purchaser at the designated delivery location. If Goods are delivered via Hubs, GTAT will deliver them DDP (delivery location designated in the applicable SOW or applicable Purchase Order) with title and risk of loss remaining with GTAT until the Authorized Purchaser or its designated carrier withdraws the Goods from the Hub.
16. Manufacturing Commitment. Regardless of initial manufacturing yields or any other circumstance, GTAT will always timely start the manufacture of the Goods in order to fully and timely meet Apple's Forecasts. For example, if GTAT is experiencing undesirable manufacturing yields during the initial ramp of a Good, GTAT will nevertheless continue to manufacture the Goods to meet Apple's Forecast.
17. Right to Manufacture.
17.1. If GTAT materially breaches its supply obligations under this Agreement or any SOW and fails to cure such breach within 10 Business Days of Apple's written notice of such breach as set forth below, then GTAT, on behalf of itself and its Related Entities, hereby grants and conveys to Apple a fully paid-up, royalty-free, worldwide, nonexclusive, irrevocable, perpetual license under any Intellectual Property Rights owned, controlled or licensable by GTAT or its Related Entities to make, have made, use, have used, purchase, have purchased, sell, have sold, offer for sale, license, lease, import, have imported, export, or otherwise distribute or dispose of Sapphire Technology in Consumer Electronic Products (including components thereof), and to practice and have practiced any method in connection with the same by or for Apple or Apple's Related Entities; provided, however, that Apple may exercise such license rights only: (i) for a period of up to 10 Business Days beginning upon Apple's written notice to GTAT of any breach of GTAT's supply obligations under this Agreement or any SOW, so long as such breach remains uncured, solely to prepare for but not to engage in commercial production of Goods; and (ii) indefinitely, and without the foregoing limitation on engaging in commercial production, if such breach remains uncured at the end of such 10 Business Day period.
17.2. As reasonably necessary to assist Apple in exercising its rights under this license, GTAT will promptly and fully provide any technical information, training, and other assistance Apple requests.
17.3. "Sapphire Technology" is defined in Section 10.3.4 of SOW #1.
18. Development. GTAT may be asked to develop new products and technology, including Goods. GTAT agrees that all such development activities and any resulting technology or Intellectual Property Rights (as defined in Attachment 2), will be governed by the terms set forth in Attachment 2.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
4
19. Indemnification. GTAT shall indemnify and hold harmless, and at Apple's request, defend or pay for the defense of Apple, Authorized Purchasers, or Apple Personnel (or any combination of Apple, Authorized Purchasers and Apple Personnel) against any claims or allegations that: (i) the Goods themselves, or any portion thereof, or any processes, Equipment or methods used to manufacture the Goods, infringe any patent, copyright, trademark, trade secret, or other proprietary right of a third party; (ii) the Goods caused injury or damage; or (iii) arise or are alleged to have arisen as a result of negligent and/or intentional acts or omissions of GTAT or GTAT Personnel or breach by GTAT of any term of the Agreement. GTAT's indemnification obligation includes the obligation to hold Apple, Authorized Purchasers and Apple Personnel harmless from and against any costs, damages and fees (including attorney and other professional fees) attributable to any such claims or allegations. Apple agrees that it will notify GTAT in writing of any claims or allegations that are covered by this Section 19. If Apple requests that GTAT defend such claim or allegation and GTAT irrevocably confirms full indemnification for the claim in writing and without exception, thereafter (1) Apple will permit GTAT to control the defense of the claim or allegation using counsel of GTAT's choice who is approved by Apple, provided that such approval is not unreasonably withheld or delayed; and (2) Apple will not settle any such claim or allegation without GTAT's permission if it requires any payment by GTAT, provided that such permission is not unreasonably withheld or delayed. Notwithstanding the foregoing, Apple may control the defense and settlement of a claim, at its own expense, if there is a reasonable risk that GTAT will not be able to cover its full obligation for the claim or if there is a significant risk of harm to Apple from a request for an injunction. GTAT agrees to provide information and assistance reasonably necessary to enable Apple to defend the claim (at GTAT's expense), and if GTAT defends at Apple's request, then Apple will do the same (at GTAT's expense). GTAT may not enter into any settlement that imposes any obligation on Apple without Apple's prior written consent. GTAT will not publicize or permit any third party to publicize any settlement of such claim or allegation without Apple's written permission. If GTAT does not agree that the claim or allegation is fully covered by this indemnity provision, then the parties agree the indemnity claim shall be tolled while the Parties negotiate in good faith an equitable arrangement regarding the defense of the claim or suit and any settlement thereof consistent with GTAT's obligations hereunder. "Personnel" means officers, directors, agents, consultants, contractors, and employees.
20. Duty to Correct. If a third party claims that any Goods infringe an Intellectual Property Right, GTAT will, in addition to its obligations under Section 19, promptly notify Apple in writing and, at its own expense, keep Apple informed of GTAT's defenses and exercise the first of the following remedies that is practicable: (i) obtain from such third party the right for Authorized Purchasers to use, import and sell such Goods in Apple products; (ii) modify the Goods so they are non-infringing and in compliance with this Agreement; (iii) replace the Goods with non-infringing versions that comply with the requirements of any Specifications and this Agreement; or (iv) at Authorized Purchaser's request, accept the return of infringing Goods and refund any amounts Authorized Purchasers paid. In any event, GTAT must exercise one of the foregoing remedies at a time and in a manner that will protect Apple from harm that could result from an injunction.
21. Resource Requirements; Access to Apple Supply Chain.
21.1. Unless agreed otherwise in an SOW, GTAT will, at its expense, purchase, install, test, maintain and operate all Equipment necessary to manufacture and deliver the development deliverables and the Goods. GTAT will also secure all materials in accordance with applicable Specifications necessary to timely manufacture and supply the development deliverables (pursuant to Attachment 2) and the Goods. Upon Apple's request, GTAT will purchase materials directly from Apple, and, at Apple's request, will provide Apple with (i) weekly reports by part number specifying demand for such materials for the immediately following 12-week period; and (ii) weekly receipt logs of any such materials. Before placing orders for or purchasing any materials for use in Goods that are comprised of multiple components, GTAT will provide Apple, for Apple's review and approval, a complete engineering bill of materials for such Goods, listing the GTAT part number(s), lead-time(s), and cost(s) of each material therein. Except for amounts due pursuant to a Letter of Authorization, the applicable SOW or Purchase Order, Apple will not be responsible for any costs associated with the materials. "Equipment" means fixtures, tooling, test equipment and any other equipment used in connection with the development, manufacturing, testing, packaging, delivery or servicing of the development deliverables or Goods. "BOM" means the engineering bill of materials that Apple creates and approves for the development deliverables or Goods.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
5
21.2. Apple may from time to time direct GTAT (or, at GTAT's request, grant written authorization to GTAT) to procure certain materials or supplies from certain third-party vendors with whom Apple has established supply agreements (each, a "Specified Vendor"). In each such instance: (i) GTAT will negotiate and execute its own purchasing agreement with the Specified Vendor; (ii) for the quantities that are to be used to produce Goods under this Agreement, Apple will request the Specified Vendor to offer those materials or supplies to GTAT on terms no less favorable than the terms on which the Specified Vendor sells, or has agreed to sell, the same materials or supplies to Apple; (iii) for all other quantities of such materials or supplies (that is, quantities that are not used to produce Goods under this Agreement), Apple will request, but need not require, that the Specified Vendor offer such materials or supplies to GTAT on the same terms; and (iv) Apple will not require the Specified Vendor to impose less favorable terms on GTAT for quantities of the materials or supplies that are not used to produce Goods under this Agreement.
22. Term and Termination. The term of this Agreement is defined in Section 13.1 of SOW #1 (the "Term"). Except as agreed in an SOW, GTAT may terminate this Agreement if Apple materially breaches this Agreement and fails to cure the breach within 30 days after receipt of written notice from GTAT of the breach. The provisions of Sections 9 through and including 23, and Attachments 1-6 will survive the termination of this Agreement.
23. Miscellaneous. The terms and conditions in Attachments 1-6, to this Agreement are incorporated herein by this reference.
IN WITNESS WHEREOF, the parties have executed this Agreement as of the effective date shown above. Each of the persons signing this Agreement affirms that he or she is duly authorized to do so and thereby to bind the indicated entity. This Agreement may be executed simultaneously in two or more counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
6
ATTACHMENT 1
General Terms and Conditions
1. Confidentiality. All disclosures of Confidential Information arising out of or related to this Agreement will be governed by the terms of the parties' existing Confidentiality Agreement, dated August 24, 2012.
2. Press Releases and Publicity. Neither Apple nor GTAT will issue press releases or other publicity regarding the Agreement or its subject matter without the prior written approval of the other.
3. Compliance with Laws. GTAT agrees that it will fully comply with all applicable laws and regulations in performing its obligations under the Agreement. GTAT agrees that it will not export, re-export, sell, resell or transfer any customer data or any export-controlled commodity, technical data or software (i) in violation of any law, regulation, order, policy or other limitation imposed by the United States (including the United States Export Administration regulations) or any other government authority with jurisdiction; or (ii) to any country for which an export license or other governmental approval is required at the time of export, without first obtaining all necessary licenses or equivalent.
4. Anti-Corruption. GTAT has reviewed and understands Apple's policies with respect to ethical business conduct and agrees to fully comply with all such policies. GTAT will comply with all applicable laws and regulations enacted to combat bribery and corruption, including the United States Foreign Corrupt Practices Act, the UK Bribery Act, the principles of the OECD Convention on Combating Bribery of Foreign Public Officials and any corresponding laws of all countries where business or services will be conducted or performed pursuant to the Agreement (collectively, the "Anti-Corruption Laws"). GTAT and, to the best of GTAT's knowledge, its subsidiaries and affiliates, have conducted their businesses in compliance with the Anti-Corruption Laws. GTAT will not Knowingly, directly or indirectly pay, offer, promise, or give anything of value (including any amounts paid or credited by Apple to GTAT) to any person or party, to influence any act or decision by such person or party for the purpose of obtaining, retaining, or directing business to Apple. "Knowingly" means (i) the actual knowledge of GTAT's executive officers or employees, or (ii) the knowledge that GTAT's executive officers and employees should reasonably be expected to have or (iii) the existence of a reasonable belief of GTAT's executive officers or employees. Any amounts paid by Apple to GTAT under the Agreement will be for services actually rendered, or Goods sold, by GTAT (as applicable). Additionally, to the extent permitted by law, GTAT will notify Apple if an owner, partner, officer, director or employee of GTAT who is assigned to a current or prospective Apple account as an account representative or account manager (or any similar such position) has been, or will become, an official or employee of a governmental entity or political party or a candidate for political office. GTAT represents and warrants that all information provided to Apple in connection with Apple's selection and approval of GTAT as an Apple vendor, or at any other time during the term of the Agreement, is complete and true.
5. Right to Offset. Apple may, from time to time, set-off or recoup any amounts due from GTAT or any GTAT Related Entity to Apple or any Apple Related Entity, against any amounts due from Apple or any Apple Related Entity to GTAT or any GTAT Related Entity. If required by applicable law, Apple will give GTAT notice that Apple has effected a set-off or recoupment, within a reasonable time thereafter via email or any other reasonable means that Apple selects, and GTAT agrees that any such notice will be effective when given, even if a receiver, custodian, trustee, examiner, liquidator or similar official has been appointed for GTAT, the applicable GTAT Related Entity, or any substantial portion of the assets thereof. The rights described in this paragraph are in addition to any other rights and remedies available under this Agreement or applicable law, including, for example, the right to deduct damages from any amount payable to GTAT or any GTAT Related Entity. "Related Entity," as applied to both Apple and GTAT, includes any subsidiary or affiliate and further includes any corporation, partnership, limited liability company, joint venture, association, trust, unincorporated organization or other business entity that controls, is controlled by, or is under common control with an entity, where "control" means that the entity possesses, directly or indirectly, the power to direct or cause the direction of the management policies of the other entity, whether through ownership of voting securities, an interest in registered capital, by contract, or otherwise.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
7
6. Insurance and Loss Prevention. GTAT will comply with the requirements specified in Attachment 4 hereto.
7. GTAT Code of Conduct. GTAT will comply with the requirements specified in Attachment 5 hereto
8. Relationship of Parties. Nothing in the Agreement creates a joint venture, partnership, franchise, employment or agency relationship or fiduciary duty of any kind. Neither party will have the power, and will not hold itself out as having the power, to act for or in the name of or to bind the other party. Except as expressly provided, the Agreement is not for the benefit of any third parties.
9. Assignment. This Agreement is personal to GTAT, and GTAT may not assign, delegate or otherwise transfer this Agreement, any SOW, any Purchase Order, and/or any right or obligation thereunder without the prior written consent of Apple. Unless otherwise defined in a SOW, a Change of Control, as defined below, will be considered an assignment of this Agreement. Any purported or attempted assignment, delegation, subcontracting or other transfer, in whole or in part, without such consent will be null and void and will constitute a breach of this Agreement. Subject to the foregoing, this Agreement will be binding upon, and inure to the benefit of, the successors, assigns, representatives, and administrators of the parties. "Change of Control" means (i) any sale or exchange of the capital stock by the shareholders of GTAT, or any GTAT Related Entity that makes, uses or sells, or offers services in connection with, sapphire production or processing equipment or sapphire goods or material, in one transaction or a series of related transactions where more than 50% of the outstanding voting power of GTAT, or of GTAT's interest in any such GTAT Related Entity, is acquired by a person or entity or group of related persons or entities; (ii) any reorganization, consolidation or merger of GTAT or any GTAT Related Entity where the outstanding voting securities of GTAT or such GTAT Related Entity immediately before the transaction represent or are converted into less than fifty percent 50% of the outstanding voting power of the surviving entity (or its parent corporation) immediately after the transaction; or (iii) the consummation of any transaction or series of related transactions that results in the sale of all or substantially all of the assets of GTAT or any GTAT Related Entity, other than where the entity acquiring shares or assets, or the surviving entity with respect to clause (ii) above, is GTAT or a wholly owned subsidiary of GTAT.
10. No Waiver. No delay or failure to act in the event of a breach of the Agreement will be deemed a waiver of that or any subsequent breach of any provision of the Agreement. Any remedies at law or equity not specifically disclaimed or modified by the Agreement remain available to both parties.
11. Audits/Inspections. During the Term and for two (2) years thereafter, Apple or its representatives may inspect GTAT facilities and audit GTAT's records to verify that GTAT has complied with its obligations under this Agreement. GTAT will provide Apple or its representatives any information and documentation that is reasonably requested in connection with such audit or inspection. GTAT will maintain all records related to the Goods during the Term and for two (2) years thereafter. GTAT will reimburse Apple within 45 days after the audit is completed for any overpayments made by Authorized Purchasers plus the maximum interest rate allowed by law. GTAT will bear the cost of the audit and inspection if the audit or inspection reveals any breach of GTAT's obligations under the Agreement. GTAT must track the date Goods are produced and make such information available to Apple upon Apple's request during the term of this Agreement and for two (2) years after the Goods are delivered.
12. Governing Law. The Agreement and the rights and obligations of the parties will be governed by and construed and enforced in accordance with the laws of the State of California as applied to agreements entered into and to be performed entirely within California between California residents, without regard to conflicts of law principles. The parties expressly agree that the provisions of the United Nations Convention on Contracts for the International Sale of Goods will not apply to the Agreement or to their relationship.
13. GTAT Affiliates. GTAT's affiliates may provide Goods or related services under this Agreement, provided that such affiliate is preapproved by Apple in writing and has executed a Contract of Adherence, joining such GTAT affiliate as a party to this Agreement, in the form attached hereto as Attachment 6. GTAT is not relieved of any of its obligations under this Agreement by virtue of joining an affiliate to this Agreement. Any breach of the Agreement by an affiliate is deemed to be a breach of this Agreement by GTAT. If GTAT knows or becomes
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
8
aware that a GTAT affiliate is providing Goods or related services under this Agreement, then GTAT must immediately (i) notify Apple in writing of the affiliate's identity and clearly explain its legal and corporate relationship with GTAT; (ii) obtain Apple's written consent to allow such affiliate to engage in such activities; and (iii) promptly cause such affiliate to sign the Contract of Adherence (unless Apple requests otherwise).
14. Remedies. If GTAT breaches any term of this Agreement in connection with the provision of Goods to an Authorized Purchaser, then GTAT agrees it owes to Apple any and all remedies under this Agreement for such breach as if Apple had been the direct purchaser of the Goods from GTAT. For example, if GTAT fails to timely deliver Goods to an Authorized Purchaser other than Apple, then GTAT will owe Apple (and Apple can seek from GTAT under this Agreement) any available remedies for failing to timely deliver such Goods. If Apple seeks remedies in such event, then the affected Authorized Purchaser cannot seek remedies for the same breach.
15. Binding Arbitration. Disputes arising under, or in connection with, this Agreement will be finally settled under the Rules of Arbitration of the International Chamber of Commerce by one arbitrator appointed in accordance with the Rules. The language of the arbitration will be English. The place of the arbitration will be San Francisco, CA. Judgment upon any award(s) rendered by the arbitrator may be entered in any court having jurisdiction thereof.
16. Equitable Relief. Notwithstanding the requirements of Section 15, above, either party may seek equitable relief in order to protect its rights, and to cause the other party to perform its obligations, hereunder at any time and in any court having jurisdiction over the parties hereto and/or the subject matter hereof. The parties hereby waive any bond requirements for obtaining equitable relief. Without limitation of the foregoing, the confidentiality provisions of the Agreement will be enforceable under the provisions of the California Uniform Trade Secrets Act, California Civil Code Section 3426, as amended.
17. Apple Requirements Documents. GTAT will comply with all Apple Requirements Documents (as may be updated by Apple from time to time), including the following: #069-0135: Specification, Regulated Substances; #069-1111: Apple RoHS Compliance Specification; #069-1857-D: Apple Specification on the Restriction of Chlorine and Bromine; #080-2503: Apple Supplier Code of Conduct (as further described in Section 7 of this Attachment 1); and #n/a: Loss Control and Loss Prevention Standards.
18. Reports. GTAT will, at GTAT's expense, provide reports requested by Apple, including reports regarding the development deliverables, Goods, Purchase Orders, Hubs, and Defective Goods.
19. Notices. Any notice required or permitted hereunder will be in writing, and will be given to the appropriate party at the address first set forth above, or at such other address as the party may hereafter specify in writing. Any notices to Apple will be sent to the attention of Apple's Corporate Procurement Department. Such notice will. "Business day" shall mean any day on which banks are open for business in San Francisco, California.
20. Force Majeure. Neither party will be liable for any failure to perform caused by circumstances beyond its reasonable control including, but not limited to, acts of God, earthquakes, hurricanes, floods, tornados, fires, acts of war, hostilities, invasions, terrorism, civil disorder, riots, labor actions (other than actions by GTAT's personnel and contractors), major upheavals, government action, government restrictions, blockade, embargo, utility disruptions, including power and water, or accident, provided: (a) it promptly notifies the other party and uses reasonable efforts to correct its failure to perform; and (b) it has taken such commercially reasonable efforts to protect against and mitigate the impact of the force majeure event if such event was reasonably foreseeable or was of a kind for which such precautionary measures are customarily taken in the applicable industry. For the avoidance of doubt, any circumstance caused primarily by one or more furnaces or any other Equipment provided by GTAT, will not constitute a Force Majeure event, and the provisions of this Section 20 will not apply.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
9
21. Construction. The section headings in the Agreement are for convenience only and are not to be considered in construing or interpreting the Agreement. References to sections, schedules, SOWs, and Purchase Orders are references to sections of, and SOWs, schedules and Purchase Orders to, the Agreement, and the word "herein" and words of similar meaning refer to the Agreement in its entirety and not to any particular section or provision. The word "party" means a party to the Agreement and the phrase "third party" means any person, partnership, corporation or other entity not a party to the Agreement. The words "will" and "shall" are used in a mandatory, not a permissive, sense, and the word "including" is intended to be exemplary, not exhaustive, and will be deemed followed by "without limitation." Any requirement to obtain a party's consent is a requirement to obtain such consent in each instance.
22. Severability. If a court of competent jurisdiction finds any provision of the Agreement unlawful or unenforceable, that provision will be enforced to the maximum extent permissible so as to effect the intent of the parties, and the remainder of the Agreement will continue in full force and effect.
23. Related Documents; Precedence. The terms and conditions of any SOW, Purchase Order, and the terms and conditions of any schedules, exhibits, attachments and other documents referenced herein or therein are incorporated into the terms and conditions of this Agreement. In the event of any conflict in the documents which constitute this Agreement, the order of precedence will be (i) the applicable SOW; (ii) this Agreement; (iii) any other schedules, exhibits, attachments and other documents referenced and incorporated herein and therein; and (iv) any Purchase Order.
24. Complete Agreement. The parties agree that the Agreement constitutes the complete and exclusive agreement between them superseding all contemporaneous and prior agreements (written and oral) and all other communications between them relating to its subject matter, excluding the confidentiality agreement referenced herein. Except as expressly provided herein, the Agreement may not be amended or modified except by a written amendment specifically referencing the Agreement, signed by authorized signatories of both parties. The parties expressly acknowledge that they have received and are in possession of a copy of any referenced item not physically attached to the Agreement and any such item will be treated as if attached.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
10
ATTACHMENT 2
Development Terms
1. Scope and Standards of Work.
1.1. During the Term of the Agreement, any services GTAT conducts in connection with the development of new products or other technology, including Goods, will be "Development Services" for the purpose of this Attachment 2. The parties may describe the Development Services in a Statement of Work. Apple has no obligation to purchase or pay for any Development Services or related deliverables except as set forth in an SOW.
1.2. GTAT warrants that its employees, agents, consultants and subcontractors, if any, involved in performance of the Development Services will have the experience and expertise necessary to perform such Development Services and will at all times be bound by appropriate agreements to vest in GTAT all of their right, title and interest in any Project Work Product (as defined below), and all Intellectual Property Rights therein or thereto, that are to be property of Apple or otherwise protected pursuant to Sections 2, 3 or 4 of this Attachment 2.
1.3. GTAT agrees to notify Apple promptly if GTAT knows or has reason to believe that the Statement of Work or any instructions from Apple would, if followed by GTAT, violate any applicable law or infringe or misappropriate any Intellectual Property Rights of any third party or be inconsistent with the Applicable Standards." [***]; and any accessory that is the same or similar (in Apple's sole discretion) to an accessory made or sold by or on behalf of Apple (regardless of when Apple sold or started to sell such accessory, including after the date of the Agreement) that is suitable for use with any Consumer Electronic Product. "Consumer Electronic Products Field" means the[***] for use in Consumer Electronics Products.
1.5. The provision of deliverables and Services in their tangible form have no intrinsic value. As such, no value added, sales, or use taxes have been assessed or are anticipated to be required as a result of the Services provided under this Agreement.
2. Apple Project Materials. Apple may provide items and materials, as specified in an SOW (the "Project Materials"). GTAT agrees that all such Project Materials will be and remain the sole and exclusive property of Apple. If Apple provides Project Materials, GTAT will only use them for the purpose described in the Statement of Work and will not be transferred to any third party without first obtaining written authorization from Apple in each instance. Upon completion of the Development Services, any unused Project Materials will be returned to Apple or destroyed at the sole discretion of Apple.
3. Communication, Visits, Results, and Reports.
3.1. All results, reports, findings, conclusions, work papers, notebooks, electronic records, samples, prototypes, deliverables, and any other information or materials in any form or format arising out of performance of the Development Services by or for GTAT (the "Project Work Product") except GTAT Background
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
11
Technology (defined below) will be the sole property of Apple and will become part of the Confidential Information to be protected under the Agreement. .
3.2. Upon receipt of any deliverable hereunder, Apple will either accept the deliverable, or if in Apple's sole discretion, the deliverable does not comply with the Specifications, including the project schedule, reject the deliverable. If Apple requests, GTAT will assist Apple with testing all deliverables without charge. Upon rejection of a deliverable, GTAT will promptly correct any failure to comply with the Specifications and re-deliver the deliverable to Apple as soon as is practicable, or such other time period agreed upon by Apple in writing.
3.3. GTAT will not destroy or dispose of any Project Work Product without Apple's prior written authorization in each instance. GTAT will, upon Apple's request from time to time, promptly deliver any and all Project Work Product and any work-in-process to Apple.
3.4. GTAT will provide Apple with a written monthly report summarizing the progress of the Development Services and any new Project Work Product developed since the last written report. In addition, GTAT will prepare and provide one or more draft and final report(s) at the intervals, and upon completion of the Development Services, as more fully described in the Statement of Work. All reports will be formatted and delivered to Apple in accordance with Apple's instructions.
3.5. Apple will be solely responsible, at its discretion in accordance with applicable law, for any reporting to appropriate government agencies any Project Work Product generated during performance of the Development Services.
3.6. GTAT will permit Apple's representatives to access all relevant GTAT facilities with reasonable frequency to perform quality assurance audits, observe progress of the Development Services, discuss the Development Services with relevant GTAT personnel, and inspect records and data relevant to the Development Services.
4. Intellectual Property..
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
12
4.2. All right, title and interest in all Project Work Product, and all Intellectual Property Rights therein or thereto, is solely owned by Apple, and GTAT hereby transfers and assigns all of GTAT's right, title and interest in all Project Work Product and all Intellectual Property Rights therein or thereto, to Apple. GTAT will communicate to Apple any of the same promptly and fully upon its creation or development. GTAT will execute all papers and take all actions that Apple reasonably deems necessary or advisable for the filing and prosecution of patent applications or copyright or other registrations and, if appropriate, maintenance of patents or other rights or properties that may issue therefrom, including without limitation execution of any assignments or other agreements further evidencing, perfecting, or recording Apple's ownership of Project Work Product and all Intellectual Property Rights therein or thereto. Inventorship will be determined under principles of U.S. patent law and practice.
4.3. Except as set forth in an SOW, or as otherwise documented in writing and provided to Apple prior to performing any Development Services, GTAT represents and warrants that all Intellectual Property Rights not owned by GTAT and that are necessary for Apple's use or exploitation of the Project Work Product are the subject of valid license or other agreements that grant to GTAT all necessary rights to sublicense or otherwise permit Apple's use or exploitation of the Project Work Product, including in Apple products and services. If any Intellectual Property Rights that are owned, controlled or licensable by GTAT or its Related Entities apply to any of Apple's use or exploitation of the Goods and/or the Project Work Product, GTAT hereby grants to Apple a nonexclusive, royalty-free, irrevocable, perpetual, worldwide license under such Intellectual Property Rights to make, have made, use, have used, sell, have sold, offer for sale, import, have imported or otherwise dispose of Apple products and services, and to practice any methods in connection therewith.
4.4. GTAT will not engage in, nor will it authorize others to engage in, reverse engineering, disassembly or decompilation of any Apple technology provided by Apple to GTAT under this Agreement (including Project Materials) except as required to perform its obligations under the Agreement. Neither Apple nor GTAT will use the other party's Confidential Information provided or developed under this Agreement for the purpose of: (i) identifying or providing evidence to support any potential patent infringement claim against the other Party or its Related Entities, or any of the other Party's direct or indirect suppliers or direct or indirect customers, (ii) filing patent applications except as otherwise provided under this Agreement; (iii) modifying its pending patent applications or the claims of patents in any post-grant proceedings; or (iv) mapping or reviewing software, hardware, and/or confidential information against patents, patent applications, claim charts or other like material.
4.5. GTAT will not use any Apple trademarks for any purpose except to comply with its obligations under this Agreement. The goodwill derived from GTAT's use of any Apple trademarks inures exclusively to the benefit of and belongs to Apple. GTAT acknowledges Apple's ownership of the Apple trademarks and agrees not do anything inconsistent with Apple's ownership of the Apple trademarks, such as filing any trademark application for an identical or similar mark anywhere in the world. Apple will not use any GTAT's trademarks for any purpose except to comply with its obligations under this Agreement. The goodwill derived from Apple's use of any GTAT trademarks inures exclusively to the benefit of and belongs to GTAT. Apple acknowledges GTAT's ownership of the GTAT trademarks and agrees not do anything inconsistent with GTAT's ownership of the GTAT trademarks, such as filing any trademark application for an identical or similar mark anywhere in the world.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
13
ATTACHMENT 3
Service and Support
1. GTAT will accept and fulfill Purchase Orders for replacement Goods ("Service Units") for seven years after the date Apple designates as end-of-life for the Apple product featuring (or manufactured using) such Good ("End-of-Life Designation Date"). To ensure that it is able to do so, GTAT agrees to (i) maintain an adequate stock of Service Units and/or (ii) maintain the equipment and materials (or the ongoing ability to timely acquire as needed) needed to produce and timely deliver Service Units throughout this seven-year period. Under no circumstances will the price of a Service Unit (including the cost of single or multi-pack packaging and handling fees) exceed the price of the corresponding Good as of the day immediately preceding the End-of-Life Designation Date. In no event will there be minimum order quantities for Service Units.
2. Furthermore, GTAT will, at GTAT's expense, provide an inventory of Service Units to Apple in accordance with the Service Unit inventory requirements set forth in document(s), if any, referenced in the applicable Apple Requirements Document(s) or applicable SOW. In absence of such requirements and upon Apple's request, GTAT will deliver an Initial Service Unit Inventory to entities designated by Apple, at no cost, at least one week before Apple first ships the applicable Apple product which incorporates the relevant Good.
"Initial Service Unit Inventory" means the number of Service Units calculated using the following formula:
Initial Service Unit Inventory = A x B, where:
A = the projected rate of return (as determined by Apple) of the Goods.
B = the cumulative number of Goods in the then current Forecast for the first three months of production
3. Authorized Purchasers will return all Goods Ex Works (place to be named by the Authorized Purchaser) and title will transfer to GTAT when placed in the carrier's possession at the named place; provided, however, that whenever Apple Sales International or Apple Operations Europe returns Goods from the Asia-Pacific region, Goods will be returned DAF (named place, freight unpaid) and title will transfer to GTAT at the named place at the frontier, but before the customs border of the destination country. GTAT will deliver all Service Units DDP (place to be named by the Authorized Purchaser) and title will transfer upon actual receipt of the Service Units at the named place of destination; provided, however, that whenever Service Units are delivered to Apple Sales International or Apple Operations Europe in the Asia-Pacific region, Goods will be delivered DAF (named place, freight paid to final destination) and title will transfer at the named place at the frontier, but before the customs border of the country of destination.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
14
ATTACHMENT 4
Insurance and Loss Prevention
1. GTAT will, at no cost to Apple or any other Authorized Purchaser, maintain the following minimum insurance in full force and effect throughout the term of the Agreement: (i) public liability or commercial general liability insurance, including coverage for products liability and products/completed operations hazard, claims by one insured against another insured, and GTAT's defense and indemnity obligations under the Agreement, with coverage of not less than $5,000,000 USD combined single limit per occurrence and $5,000,000 USD annual aggregate; (ii) automobile liability insurance in compliance with all statutory requirements and providing coverage for third party bodily injury and property damage, with limits of not less than $500,000 USD each accident, for all owned, non-owned and hired motor vehicles used in the performance of GTAT's obligations under the Agreement; (iii) workers' compensation insurance in compliance with all statutory regulations in any country, state, territory or province where any of the development deliverables or Goods are provided, manufactured or delivered; and (iv) property insurance on an all-risk of physical loss basis, subject to standard exclusions, with sufficient limits to cover GTAT's liability for risk of loss or damage to Apple property while in GTAT's care, custody or control.
2. The insurance coverage that GTAT is obligated to carry pursuant to this Attachment 4 will include either (i) an indemnity to principals clause and either a blanket interest provision, or separately note the interests of Apple, its subsidiaries and affiliates, and any other party which Apple may reasonably designate as principals for liabilities and damages for which GTAT is obligated to provide indemnity to such parties pursuant to the Agreement, or (ii) Apple, its subsidiaries and affiliates, and any other party which Apple may reasonably designate as additional insureds for liabilities arising out of the acts or omissions of the GTAT, its employees, and agents in the performance of the Agreement. The property insurance that GTAT is obligated to carry will include Apple, its subsidiaries and affiliates as loss payees, as their interests may appear. The insurance that GTAT maintains will be primary to and without a right of contribution from any insurance maintained by or otherwise afforded to Apple, its subsidiaries and affiliates.
3. GTAT will deliver to Apple's Procurement Department (1 Infinite Loop, M/S 81-2BIZ, Cupertino, California 95014) one or more certificates of insurance showing evidence of the maintenance of the coverage required above. In the event of cancellation of any required coverage, GTAT will promptly replace such coverage so that no lapse in insurance occurs. GTAT agrees to comply with the insurance and loss prevention requirements set forth in the document(s), if any, referenced in the Apple Requirements Document. Apple reserves the right to perform risk evaluations of GTAT's facilities and GTAT agrees to work with Apple to upgrade any facility that does not comply with such requirements.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
15
ATTACHMENT 5
Supplier Code of Conduct
1. GTAT will comply with the Apple Supplier Code of Conduct ("Code of Conduct") available on Apple's public website, and will implement its requirements as amended by Apple from time-to-time.
2. Notwithstanding anything to the contrary in any prior agreement between Apple and GTAT, GTAT will: (i) allow Apple and a third party auditor designated by Apple (collectively, the "Auditors") to audit and assess GTAT's practices, policies, records, and facilities without notice and to interview GTAT's personnel without monitoring solely to verify GTAT's compliance with the Code of Conduct (collectively, an "Assessment"); (ii) provide the Auditors with access to GTAT's facilities, relevant records, and knowledgeable personnel without disruption as part of any Assessment; (iii) allow the Auditors to audit and assess working hours and conditions, remuneration, personnel practices, dormitory and dining facilities, and health, safety, and environmental practices, as applicable, as part of the Assessment; (iv) not request or encourage, directly or indirectly, any GTAT personnel to furnish false or incomplete information in connection with any Assessment; (v) not take retaliatory action against any GTAT personnel interviewed during an Assessment; and (vi) promptly implement corrective action to remedy any material non-conformance with the Code of Conduct identified by an Assessment.
3. Prior to engaging any subcontractor to perform any material portion of its obligations under the Agreement, GTAT will provide Apple with the name and address of such subcontractor, and upon Apple's written request, GTAT will (a) require such subcontractor's compliance with the Code of Conduct; (b) require such subcontractor to provide the Auditors with access to its facilities, records, and personnel sufficient to enable the Auditors to assess such subcontractor's compliance with the Code of Conduct; and (c) require such subcontractor to promptly implement corrective action to remedy any material non-conformance with the Code of Conduct.
4. Notwithstanding any provision in the Agreement or any other agreement between Apple and GTAT, GTAT agrees to hold the results of any Assessment in the strictest confidence and all such information is deemed to be Apple Confidential Information and GTAT relinquishes any and all rights in and to such results and findings. GTAT will obtain all permits, consents, and authorizations necessary to enable the Auditors to audit and assess the policies, practices, records, and facilities of each subcontractor or GTAT Related Entity performing under the terms of the Agreement.
5. For purposes of Attachment 5, the term "GTAT" will include any GTAT Related Entity performing any material portion of GTAT's obligations under the Agreement, and GTAT's obligations hereunder will apply to any such GTAT Related Entity.
6. GTAT's failure to perform its obligations described in this section or to remedy any material non-conformance with the Code of Conduct after a reasonable amount of time will constitute a breach of the Agreement.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
16
ATTACHMENT 6
Contract of Adherence
This Contract of Adherence ("CoA") is between the following parties and is effective as of [DATE]:
Apple Inc., a California corporation located at 1 Infinite Loop, Cupertino, California 95014, United States ("Apple");
[GTAT Name and Address] (collectively, "Company"); and
[GTAT AFFILIATE TO BE ADDED], a [*], located at [*] (the "Covered Party").
Purpose. Reference is made to that certain Apple Inc. Supply Agreement by and between Apple and Company, effective as of [date] (together with its attachments, and any documents referenced therein, and all SOWs issued thereunder, the "Agreement"). All capitalized terms not defined herein are defined as set forth in the Agreement. Pursuant to Section [ ] of the Agreement, additional GTAT Affiliates may become a party to the Agreement by the execution of this CoA. Company and Apple would like to add Covered Party as a GTAT Affiliate.
GTAT Affiliate Obligations. Covered Party acknowledges that it has read and understands the Agreement. Covered Party hereby agrees to be a GTAT Affiliate under the Agreement and to fully comply with all terms and conditions applicable to GTAT Affiliates.
Agreement Amendments. Covered Party acknowledges and agrees that Apple and Company may amend the Agreement in accordance with its terms, without the consent of Covered Party, and that any such amendment shall apply to Covered Party unless otherwise stated in such amendment.
Representations. Covered Party represents that: (a) it has the full right and authority to enter into and carry out its obligations under this CoA and the Agreement; (b) it has obtained all private and governmental consents required to perform its obligations under this CoA; and (c) the execution and performance of this CoA does not and will not conflict with or violate any other obligation Covered Party may have, contractual or otherwise.
Entire Agreement. This CoA and the Agreement constitute the entire understanding and agreement of Covered Party, Apple and Company, whether written or oral, with respect to the subject matter of this CoA, and supersede any prior or contemporaneous agreements or understandings between Covered Party, Apple and Company with respect to its subject matter.
Joint and Several Liability. Company shall not be relieved of any of its obligations under the Agreement by virtue of this CoA and Company guarantees the performance of the terms and conditions of the Agreement by GTAT Affiliates. Any breach of this CoA is deemed to be a breach of this Agreement by Company.
Company and Covered Party agree that they will be jointly and severally liable for any claims by Apple or damages incurred by Apple under the Agreement or this CoA. Company and Covered Party's joint and several liability under this CoA includes obligations arising under successive transactions continuing, compromising, extending, increasing, modifying, releasing, or renewing obligations under the Agreement, any SOW, changing payment terms, or other terms and conditions thereof, or creating new or additional obligations after prior obligations under the Agreement have been satisfied in whole or in part under the Agreement, to the maximum extent permitted by law, Company and Covered Party hereby irrevocably waive any right to revoke their joint and several liability under this Agreement as to future obligations.
Company assumes full responsibility for keeping informed of Covered Party's financial condition and all other circumstances bearing upon the risk of nonpayment or nonperformance of the Agreement and any SOW and Apple will have no duty to report any such information known to Apple. A separate action may be brought against any of Company, Covered Party or any other guarantor.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
17
Company and Covered Party acknowledge and agree with Apple that they are jointly and severally liable for their obligations under the Agreement and this CoA. Neither Company's nor Covered Party ‘s obligations to Apple will be affected by (a) the amendment, modification, renewal, increase in the amount, waiver, surrender, compromise, settlement, release or termination of, or the acceptance of partial payment on, any or all of the obligations, covenants or agreements of the other under the Agreement or this Amendment; (b) the failure by Apple to give notice to either of the occurrence of a default by the other under the Agreement; (c) the extension of the time for performance of or the giving of any other indulgence in relation to any obligation under the Agreement; (d) proceeding against Company or Covered Party or any other person or entity in any particular order; (e) the taking of any of the actions referred to in the Agreement, including any acceleration of sums owing thereunder;(f) any failure, omission, delay or lack on the part of Apple to enforce, assert or exercise any right, power or remedy conferred on it in the Agreement or otherwise available to it in law or equity to proceed against or exhaust any such security held from Company or Covered Party or any other person;(g) the voluntary or involuntary liquidation, dissolution, sale or other disposition of all or substantially all the assets, marshaling of assets and liabilities, receivership, insolvency, bankruptcy, assignment for the benefit of creditors, reorganization, arrangement, composition with creditors or readjustment of, or other similar proceedings affecting Company or Covered Party or any of the respective assets of either of them; (h) any defense based upon any legal disability of Company or Covered Party or, any release, discharge, reduction or limitation of or with respect to any sums owing by Company or Covered Party or any other liability of Company or Covered Party to Apple; (i) the release or discharge by operation of law of Company or Covered Party from the performance or observance of any obligation, covenant or agreement contained in the Agreement or this CoA; (j) the taking and, holding, substitution, release, impairing the value, applying and directing the order or manner of sale of, or the addition to, in whole or in part, at any time or times, collateral or, guarantees or other security or support for payment under the Agreement and any change of such guaranties or collateral, guarantees or other security or support; or (k) application of payments received by Apple from Company or Covered Party to any amount owed by either to Apple, in such order as Apple shall determine in its sole discretion, whether or not such amounts are owed under this Agreement. Without limiting the generality of the foregoing, Company and Covered Party irrevocably waive (i) all notices all notices of acceptance of joint and several liability, the occurrence of any breach, default, nonperformance, protest, notice of protest or notice of dishonor or of any presentment, demand for any payment, action at any time taken or omitted, and to which each might otherwise be entitled to which it might otherwise be entitled; (ii) any claims and other rights that it now has or may hereafter acquire against Covered Party that arise from the payment or enforcement of Covered Party's obligations under CoA, including any right of subrogation, reimbursement, exoneration, contribution or indemnification and any right to participate in any claim or remedy of Apple against Covered Party; (iii) any lack of authority of any officer, director, partner, agent or any other person acting or purporting to act on behalf of Company or Covered Party which is a corporation, partnership or other type of entity, or any defect in the formation of it; and (iv) any rights and benefits that might otherwise be available to Company under any rights and benefits that might otherwise be available to Covered Party under California Civil Code Section 2799, 2808, 2809, 2810, 2815, 2819, 2820, 2821, 2822, 2838, 2839, 2845, 2847, 2848, 2849, 2850, 2855, 2899 or 3433 or California Code of Civil Procedure Sections 337.
This Contract of Adherence shall be governed by, and construed in accordance with, the laws of the State of California, without reference to principles of conflicts of law.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
18
Acknowledged and agreed by their duly authorized representatives.
[***] Portions of this exhibit have been redacted pursuant to a confidential treatment request. An unredacted version of this exhibit has been filed separately with the Commission.
19 | https://contracts.onecle.com/gt-advanced-technologies/apple-dev-2013-10-31.shtml | CC-MAIN-2021-17 | en | refinedweb |
This tutorial covers the
implement titlecase pipe example in Angular.
Angular titlecase
what is a titlecase for a string?. titlecase for a statements is to capitalize the first letter of all major words and remaining letters of the words are lowercase.
Title case for a strings can be applied in the following ways
- heading
- blog title
- essay heading
- news headings
Simple example for a string is
angular framework and output is Angular Framework
titlecase pipe Syntax
Angular pipe is an custom code which accepts input, transform it and output data.
Angular provides inbuilt pipe
titlecase. This transform each word with first letter capitalize and remaining letter lower case
{{string_expression | titlecase}}
string_expression is either string or expression value titlecase is an inbuilt angular
titleCasePipe
titlecase pipe component example
In the component,
titlecase pipe used in expression.
<div> <h2>{{ heading | titlecase }}</h2> </div>
In typescript controller, heading variable is declared, initialized with mixed case
import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.scss'], }) export class AppComponent { heading="angular titlecase example tutorials"; heading1="angular-titlecase example tutorials"; }
And the output seen in the browser is
Angular titlecase pipe example
Important points
- Transformed first letter of each word into uppercase, remaining letters into lowercase in the statements
- statements with delimited by whitespaces,tabs are considered as words
- comma,hyphen and other special characters are not treated as delimiter.
titlecase pipe can also implemented in typescript controller with custom code
In this post, we have learned
- Angular pipe titlecase syntax
- titlecase example | https://www.cloudhadoop.com/angular-titlecase-pipe-example/ | CC-MAIN-2021-17 | en | refinedweb |
Python issubclass() function is used to check if a class is a subclass of another class or not.
Table of Contents
Python issubclass()
Python issubclass() function syntax is:
issubclass(class, classinfo)
This function returns
True if class is a subclass of classinfo.
A class is considered a subclass of itself. We can also pass a tuple of classes as the classinfo argument, in that case, the function will return True if class is a subclass of any of the classes in the tuple.
Since the
object is the base class in Python, the function will return True if classinfo is passed as
object class.
Python issubclass() example
Let’s define some classes and subclasses for our example.
class Super: pass class Child(Super): pass class GrandChild(Child): pass
Now let’s see the output of issubclass() function with different arguments.
print(issubclass(Child, Super)) # 1st level inheritance print(issubclass(GrandChild, Super)) # multilevel inheritance print(issubclass(Child, Child)) # same class print(issubclass(Super, tuple)) # no inheritance print(issubclass(Super, object)) # object is the base class
Output:
True True True False True
Python issubclass() with tuple of classes
print(issubclass(GrandChild, (str, list, Super)))
Output:
True
Let’s have a look at another example where we will check if OrderedDict is a subclass of dict or not.
from collections import OrderedDict print(issubclass(OrderedDict, dict))
Output:
True
Python issubclass() vs isinstance()
Python issubclass() and isinstance() functions are very similar, except that former works with classes whereas later works with instance of classes.
Reference: Official Documentation
very simple to understand | https://www.journaldev.com/22938/python-issubclass | CC-MAIN-2021-17 | en | refinedweb |
calculate globalmatrix for GeRayCollider
On 17/03/2016 at 03:23, xxxxxxxx wrote:
how do i calculate the correct matrixes for a geraycollide to get the option to move the target and the source? all i see is that geraycollide works with local matrixes and i presume that i need to calculate a global matrix for it. but i don't know how
import c4d from c4d.modules import mograph as mo from c4d.utils import GeRayCollider #Welcome to the world of Python def main() : md = mo.GeGetMoData(op) if md is None: return False cnt = md.GetCount() marr = md.GetArray(c4d.MODATA_MATRIX) fall = md.GetFalloffs() ray = GeRayCollider() UsrDtColl = op[c4d.ID_USERDATA,1] ray.Init(UsrDtColl, True) ml = gen.GetMl() mg = gen.GetMg() img = ~mg for i in reversed(xrange(0, cnt)) : if ray.Intersect(marr[i].off+img.off,marr[i].off+img.off, 300) == True: marr[i].off = ray.GetNearestIntersection()["hitpos"] md.SetArray(c4d.MODATA_MATRIX, marr, True) return True
On 17/03/2016 at 16:12, xxxxxxxx wrote:
status update:
now i can move the source point of the GeRayCollide, rotate won't work atm.
next step is to make the collider moveable too.
current file:
On 18/03/2016 at 03:16, xxxxxxxx wrote:
Hi,
The explanation and script source code in the following post should help you: | https://plugincafe.maxon.net/topic/9395/12579_calculate-globalmatrix-for-geraycollider | CC-MAIN-2019-13 | en | refinedweb |
Sometimes we have to remove character from String in java program. But java String class doesn’t have
remove() method. So how would you achieve this?
Table of Contents
Java Remove Character from String
If you notice String class, we have
replace() methods with different variations. Let’s see what all overloaded replace() methods String class has;
replace(char oldChar, char newChar): Returns a string resulting from replacing all occurrences of oldChar in this string with newChar.
replace(CharSequence target, CharSequence replacement): Replaces each substring of this string that matches the literal target sequence with the specified literal replacement sequence.
replaceFirst(String regex, String replacement): Replaces the first substring of this string that matches the given regular expression with the given replacement.
replaceAll(String regex, String replacement): Replaces each substring of this string that matches the given regular expression with the given replacement.
So can we use
replace('x','');? If you will try this, you will get compiler error as
Invalid character constant. So we will have to use other replace methods that take string, because we can specify “” as empty string to be replaced.
Java String Remove Character Example
Below code snippet shows how to remove all occurrences of a character from the given string.
CopyString str = "abcdDCBA123"; String strNew = str.replace("a", ""); // strNew is 'bcdDCBA123'
Java Remove substring from String
Let’s see how to remove first occurrence of “ab” from the String.
CopyString str = "abcdDCBA123"; String strNew = str.replaceFirst("ab", ""); // strNew is 'cdDCBA123'
Notice that
replaceAll and
replaceFirst methods first argument is a regular expression, we can use it to remove a pattern from string. Below code snippet will remove all small case letters from the string.
CopyString str = "abcdDCBA123"; String strNew = str.replaceAll("([a-z])", ""); // strNew is 'DCBA123'
Java Remove Spaces from String
CopyString str = "Hello World Java Users"; String strNew = str.replace(" ", ""); //strNew is 'HelloWorldJavaUsers'
Java Remove Last Character from String
There is no method to replace or remove last character from string, but we can do it using string substring method.
CopyString str = "Hello World!"; String strNew = str.substring(0, str.length()-1); //strNew is 'Hello World'
Java String Remove Character and String Example
Here is the complete java class for the examples shown above.
Copypackage com.journaldev.examples; public class JavaStringRemove { public static void main(String[] args) { String str = "abcdDCBA123"; System.out.println("String after Removing 'a' = "+str.replace("a", "")); System.out.println("String after Removing First 'a' = "+str.replaceFirst("ab", "")); System.out.println("String after replacing all small letters = "+str.replaceAll("([a-z])", "")); } }
Output produced by above program is:
CopyString after Removing 'a' = bcdDCBA123 String after Removing First 'a' = cdDCBA123 String after replacing all small letters = DCBA123
That’s all for removing character or substring from string in java program.
Indrajit Das says
I want to replace a few words from a String , Whenever a match will be founded it will remove that match . Example : “Learning java is not so easy but also” /* is not so much hard */ “. All that I need to replace the whole comment section ( /* ———-*/). In this case what I should do ?
Pankaj says
You need to use regex for that.
Nisha says
This article will provide good knowledge, who are welling to learn java. . It was great experience. Good platform to enhance our knowledge. I found a clear description in each and every topic.
abcd says
how to remove the string of characters from another string
eg: “lhe” from “hello world”. | https://www.journaldev.com/18361/java-remove-character-string | CC-MAIN-2019-13 | en | refinedweb |
[OmniFaces utilities] The
getViewMap()method returns the view scope map.
[OmniFaces utilities] The
getViewAttribute()method returns the view scope attribute value associated with the given name.
[OmniFaces utilities] The
setViewAttribute()method sets the view scope attribute value associated with the given name.
[OmniFaces utilities] The
removeViewAttribute()method removes the view scope attribute value associated with the given name. This method return the view scope attribute value previously associated with the given name, or
nullif there is no such attribute.
Method Faces#getViewMap()- returns the view scope map
See also: Faces#getContext()
Method Faces#getViewAttribute()- returns the view scope attribute value associated with the given name
Method Faces#setViewAttribute() - sets the view scope attribute value associated with the given name
Method Faces#removeViewAttribute() - removes the view scope attribute value associated with the given name
See also: Faces#getContext()Usage:
Below you can see an example of listing the content of the view map (view scope):
import org.omnifaces.util.Faces;
...
Map<String, Object> viewmap = Faces.getViewMap();
for (Map.Entry<String, Object> entry : viewmap.entrySet()) {
System.out.println(entry.getKey() + "/" + entry.getValue());
}
You can add an entry (view scope attribute) in the view map (scope) via Faces#setViewAttribute() method. For example, you may need to store something under a key representing a variable name provided by the JSF page author using a var like attribute.
...
private enum PropertyKeys {
var;
}
...
public String getVar() {
// return "var" value from state
}
Faces.setViewAttribute(getVar(), something);
If you know the name of the view scope attribute then you can collect it easily via Faces#getViewAttribute() method. Suppose that the variable name (returned by getVar()), is t. Then, we can obtain the something stored under t like this:
// e.g. something
Object something = Faces.getViewAttribute("t");
Finally, you can remove a view scope attribute via Faces#removeViewAttribute() as below (this method returns the view scope attribute value previously associated with the given name, or null if there is no such attribute):
// e.g. something
Object something = Faces.removeViewAttribute("t");
Among others, the view map will contain instances of managed beans that are declared under the view scope (@ViewScoped (JSF/JSF 2.2 CDI compatible)). For example, let's suppose that we have this managed bean:
@Named / @ManagedBean
@ViewScoped (from javax.faces.view.ViewScoped) / @ViewScoped (from javax.faces.bean.ViewScoped)
public class LoginBean implements Serializable {
private String email;
private String password;
// getters and setters
}
You can easily identify such beans by their names which becomes keys in the request map. Therefore you will be able to locate an instance of this JSF managed bean in the view map under the key, loginBean. If you specify the bean name via @Named(value="some_name") or @ManagedBean(name="some_name"), then some_name will be the key in the view map. So, via the view map, you can access a view scoped bean property, like this:
String email = ((LoginBean)(Faces.getViewAttribute("loginBean/some_name"))).getEmail();
Is perfectly legal to do this also (this refers to the current bean):
@Named(value="some_name")
...
String bean_name = getClass().getAnnotation(Named.class).value();
String email = ((LoginBean)(Faces.getViewAttribute(bean_name))).getEmail();
Or like (this refers to the current bean):
@ManagedBean(name="some_name")
...
String bean_name = getClass().getAnnotation(ManagedBean.class).name();
String email = ((LoginBean)(Faces.getViewAttribute(bean_name))).getEmail();
Now, you can easily intuit how to work with managed beans stored in the view map.
Niciun comentariu :
Trimiteți un comentariu | http://omnifaces-fans.blogspot.com/2015/04/omnifaces-utilities-20-get-view-scope.html | CC-MAIN-2019-13 | en | refinedweb |
Opened 13 years ago
Closed 12 years ago
Last modified 12 years ago
#1808 closed defect (worksforme)
ForeignKey fields produce errors when subobjects do not have Admin interfaces themselves
Description
If a model contains other models using ForeignKey fields, and the submodels don't have have their own explicit Admin interface (ie, have "class Admin" in their class definitions), then creating those subobjects will produce an the following kind of error:
Traceback (most recent call last): File "/usr/local/src/django/svn-trunk/django/core/handlers/base.py" in get_response 74. response = callback(request, *callback_args, **callback_kwargs) File "/usr/local/src/django/svn-trunk/django/contrib/admin/views/decorators.py" in _checklogin 54. return view_func(request, *args, **kwargs) File "/usr/local/src/django/svn-trunk/django/views/decorators/cache.py" in _wrapped_view_func 40. response = view_func(request, *args, **kwargs) File "/usr/local/src/django/svn-trunk/django/contrib/admin/views/main.py" in add_stage 299. return render_change_form(model, manipulator, c, add=True) File "/usr/local/src/django/svn-trunk/django/contrib/admin/views/main.py" in render_change_form 196. field_sets = opts.admin.get_field_sets(opts) AttributeError at /admin/polls/license/add/ 'NoneType' object has no attribute 'get_field_sets'
This error was generated for the following model. In this case, adding an "Institution" will work, but adding a "License" will produce the above error. Also, there's no way to add an "Operating System" for the same reason. All work correctly if the subobjects are explicitly declared to have Admin interfaces.
from django.db import models class Institution(models.Model): name = models.CharField(maxlength=200, core=True) reference_url = models.CharField(maxlength=200, core=True) class Admin: # because of this field, no error here pass def __repr__(self): return self.name class License(models.Model): # no Admin class will produce an error when added title = models.CharField(maxlength=200) reference_url = models.URLField() description = models.TextField() def __repr__(self): return self.title class OperatingSystem(models.Model): # no Admin class: no way to add this object as a subpart of SoftwarePackage name = models.CharField(maxlength=200, core=True) def __repr__(self): return self.name class SoftwarePackage(models.Model): title = models.CharField(maxlength=200) institution = models.ForeignKey(Institution, null=True) license = models.ForeignKey(License, null=True) operating_system = models.ManyToManyField(OperatingSystem, null=True) open_source = models.BooleanField() def __repr__(self): return self.title class Admin: pass
I have no problem adding objects in the admin interface or in the shell. If no class Admin is specified, then then those type of objects cannot be added through the admin. | https://code.djangoproject.com/ticket/1808 | CC-MAIN-2019-13 | en | refinedweb |
A Snippet of Dotty
In a recent PR, Martin highlighted the following code as “a great showcase how opaque types, implied instances, extension methods and inline can work together to give something beautiful.”: _*)
...
}
This is showing off a lot of new concepts at once, so let’s take it apart, bit by bit. (I’ve talked about much of this before, so this post is less new-and-different and more about showing how it works together.)
Usual caveats: this reflects the current state of a pretty rapidly-evolving design. None of these features are guaranteed to make it into Scala 3, although all of them are now described in the Dotty documentation. Details are still likely to change. And keep in mind that I am not a member of the Dotty team, just a outside observer who is drinking the firehose from the repo, trying to keep up, and passing the interesting-looking bits on. (So don’t take any of this as gospel.)
Opaque Types
Let’s begin at the top:
opaque type IArray[T] = Array[T]
Opaque types are the incoming replacement for AnyVal. By now, we’re all used to this idiom:
case class Foo(b: Bar) extends AnyVal
In theory, this creates a low-overhead wrapper that hides the concept of
Bar behind the concept of
Foo. In practice, it has never worked very well. The wrapper is very permeable unless you take steps to avoid that — you can just say
myFoo.b to get the
Bar. And while in theory this doesn’t box, in practice it does so pretty often, so the “low-overhead” fails on a pretty regular basis.
opaque replaces that with a first-class notion of type hiding. In this case, the
IArray type is a much stronger barrier — outside code really can’t get at the underlying
Array except through methods you explicitly put into the companion object. For example, here’s one of the constructors:
def apply[T: ClassTag](xs: T*): IArray[T] = Array(xs: _*)
This is inside the companion object, so it can see the underlying relationship between
IArray and
Array. It is constructing an
Array, and returning that as an
IArray, with no need for
asInstanceOf — in here, we know they’re the same type. But anywhere outside the companion object, they’re unrelated.
Also, the low-overhead guarantees are much stronger: this doesn’t box, it is simply a really strong alias for the underlying type.
Of all the features proposed for Scala 3, this is perhaps the one I am surest is going to go in, in roughly its current form. It was proposed quite a while ago by now, and just about everyone has agreed that it’s a big help, providing Scala with a powerful approach to type aliasing. It will help Scala 3 provide stronger and more appropriate types than ever before.
Implied Instances
Inside the companion object, we find:
object IArray {
implied arrayOps {
inline def (arr: IArray[T]) apply[T] (n: Int): T =
Implied instances are a new mechanism for specifying the functionality of a type. One of the goals being examined in Scala 3 is replacing the many uses of the word
implicit with more-precise terminology and syntax.
In particular, this is replacing the heavily-used
implicit class, which is currently used for extension methods. In Scala 2, the above would be stated as something like:
object IArray {
implicit class arrayOps(arr: IArray[T]) {
def apply[T](n: Int): T = ...
}
}
In the new model, this gets broken into two concepts, Implied Instances and Extension Methods.
Implied Instances are actually much more powerful than what you see here — this is basically the degenerate case of a new tool designed to make it easier to define typeclass instances. See the documentation for more details.
The syntax around implicits is probably one of the most under-discussion aspects of Dotty, and has changed a lot in the past few months. So watch this space — further changes wouldn’t surprise me.
Extension Methods
Now, let’s look at the innards of
arrayOps:
inline def (arr: IArray[T]) apply[T] (n: Int): T =
(arr: Array[T]).apply(n)
inline def (arr: IArray[T]) length[T] : Int =
(arr: Array[T]).length
Notice the weird syntax of
apply? Let’s strip away the stuff around it:
def (arr: IArray[T]) apply[T] (n: Int): T
This is the new extension method syntax. I can now put a parameter before the name of the function, which means, “this function should be used like a method on that first parameter”. So the above is effectively a new method on
IArray[T], defined from the outside.
Taking the second of those, I can now say
myIArray.length, just as if
length was a method defined in the conventional way. The syntax reflects the way that you call it, with the type being operated on to the left of the name of the method you call, the same way that the value being operated on is to the left of the dot.
Inline
Finally, let’s look at those functions again:
inline def (arr: IArray[T]) apply[T] (n: Int): T =
(arr: Array[T]).apply(n)
inline def (arr: IArray[T]) length[T] : Int =
(arr: Array[T]).length
Note that they are declared as
inline. This tells the compiler that it should aggressively inline the function body, instead of building and calling the function normally. The contents of the function will be placed at the call site.
That helps us achieve the low-overhead goal of our opaque type. We don’t even have a real function call here: the calls to
myIArray.apply and
myIArray.length are rewritten as calls to the underlying functions on
Array. Since the only thing that happens inside of our functions are casts (which are free) and calls to the underlying functions, these functions wind up with no overhead at all — they are literally just calls to our underlying type.
Wrapping it Up
In this little code snippet, we find:
opaquelets you easily create type aliases that really hide their underlying types, so you can write more strongly-typed code.
- Those aliases are inherently low-overhead, with no boxing.
- You can use
inlineto reduce the overhead of functions on those opaque types, sometimes to zero.
- You can use
impliedto make functions available for a type.
- The extension-method syntax lets you add new “methods” to existing types from the outside, without the syntactic complexity of the old
implicit classmechanism.
So yeah — while the details may change going forward, we are gradually moving towards tools that will make Scala 3 code more robust, easier to write, and more readable. “Beautiful” seems warranted… | https://medium.com/@jducoeur/a-snippet-of-dotty-27eadcee72e3 | CC-MAIN-2019-13 | en | refinedweb |
by Tim Park.
Luckily, these days we have Google to help out. “How to build a website using python,” you search. You stumble across this nifty little web framework and decide to give it a shot—not that you actually know what a framework is. You try reading the documentation from the beginning, don’t understand any of it (“database abstraction layer?”), and skip ahead to the installation step.
Finally, some actionable steps to follow! You copy the commands into your terminal. There’s something about something called pip, and virtualenv, whatever those are. You take a minute to look those up, just so you have an idea of what you’re doing to your machine..
Okay, looks like pip is a utility to install packages so you can
import them, just like we
import the built-in
math package. Virtualenv seems like a way to isolate package installations to a specific project. You still don’t fully understand any of the commands besides
mkdir and
cd, but you‘re not totally clueless either. Time to move on to the quick start guide.
You follow the initial instructions, and there’s a burst of elation when you run your program and see “Hello, world” in your browser for the first time. You think to yourself, now this is what I’m talking about! No more boring old command line programs for me! It’s a small step, to be sure, but that’s your code in the browser, and that’s significant.
You continue, but start running into a lot of concepts that you don’t fully understand. Routes? HTTP? GET? POST? And what are rendering templates? Some of these terms seem vaguely familiar, but you don’t actually know what they mean, so you stop along the way to look them up. Just like you learned about pip and virtualenv, you start to assemble a fuzzy picture of how this application is working..
You can use templates to display content like you would with any other HTML/CSS page, but the content is dynamic. You can control what content is displayed at different URLs by defining routes. You can even fetch content from other websites to use in your own application!
As you get more and more comfortable, you start to feel a degree of control and pride in your independent learning that you’ve never felt while working on any of your class projects. You feel empowered. Who even needs school, anyway?
As you gain confidence, the guide starts to feel a little…basic. It is, after all, a quick start guide. I could do something cooler than this, you think.
So you strike out on your own. You set up your project the same way you did before, with pip and virtualenv, because that’s the only way you know how to set up a project. You know how to set up the templates. You know how to set up the routes, the HTTP requests, the GETs and the POSTs. You might even dare to use a third party API and practice reading some more documentation. You still don’t understand everything, but you’re learning, bit by bit.
And finally, you build your very first independent side project. Not the result of a tutorial that hundreds of budding developers before you have followed. Not a class assignment that thousands of students before you have done. A real, tangible, honest-to-goodness application that would not exist had you not built it. And that’s an amazing feeling.
So what’s next? Now that you’ve built an application, how do you put it online so everyone can use it? What really is a “back end?” A front end?
Here’s the great thing about fostering a desire to learn: there’s always more. It’s like an amazing television series that you can never get enough of—and it never ends. There’s no such thing as learning too much. Every discipline in software development is an entire new world to dive into, and the more you learn, the more worlds are opened up to you.
At some point, you realize that there is no single canonical path to being a “real” software developer. There’s not enough time to master every single thing. Everyone’s path to success is totally unique. It’s entirely up to you to forge a path to the destination you want to reach.
This was the story of my own first step into the world of software development. If you want to see the very first side project that I ever built, you can check it out here.
And if you feel like sharing your own, feel free to post a link in a response! I’m always curious to see what kind of code people start out writing.
If you like what you read, there’s more where that came from!
As a guy in the early stages of his tech career, I write about — you guessed it — the early stages of a tech career. | https://www.freecodecamp.org/news/my-first-steps-into-the-world-of-software-development-and-what-they-taught-me-6ee748cffb8f/ | CC-MAIN-2021-04 | en | refinedweb |
Overview
This is part two of a five-part series of tutorials about making games with Python 3 and Pygame. In part one, I introduced the series, covered the basics of game programming, introduced Pygame, and examined the game architecture.
In this part, we'll look at the
TextObject class used to render text on the screen. We'll create the main window, including a background image, and then we'll learn how to draw objects like bricks, the ball, and the paddle.
The TextObject Class
The
TextObject class is designed to display text on the screen. The case could be made from a design point of view that it should be a sub-class of
GameObject as it is also a visual object and you may want to move it. But I didn't want to introduce deep class hierarchies, when all the text that Breakout displays on the screen stays put.
The
TextObject creates a font object. It renders the text into a separate text surface that is then blitted (rendered) onto the main surface. An interesting aspect of the
TextObject is that it doesn't have any fixed text. Instead, it gets a function called
text_func() that is called every time it renders.
This allows us to update the display of lives and score in Breakout just by providing a function that returns the current lives and current score, instead of keeping track of which text objects display lives and score and updating their text every time they change. This is a neat trick from functional programming, and for larger games it can help you keep everything nice and tidy.
import pygame class TextObject: def __init__(self, x, y, text_func, color, font_name, font_size): self.pos = (x, y) self.text_func = text_func self.color = color self.font = pygame.font.SysFont(font_name, font_size) self.bounds = self.get_surface(text_func()) def draw(self, surface, centralized=False): text_surface, self.bounds = \ self.get_surface(self.text_func()) if centralized: pos = (self.pos[0] - self.bounds.width // 2, self.pos[1]) else: pos = self.pos surface.blit(text_surface, pos) def get_surface(self, text): text_surface = self.font.render(text, False, self.color) return text_surface, text_surface.get_rect() def update(self): pass
Creating the Main Window
Pygame games run in windows. You can make them run fullscreen too. Here is how you display an empty Pygame window. You can already see many of the elements I discussed earlier. First, Pygame
init() is called, and then the main drawing surface and the clock are created.
Next is the main loop, which consistently fills the screen with uniform gray and calls the clock
tick() method with the frame rate.
import pygame pygame.init() screen = pygame.display.set_mode((800, 600)) clock = pygame.time.Clock() while True: screen.fill((192, 192, 192)) pygame.display.update() clock.tick(60)
Using a Background Image
Usually, a uniform color background is not very exciting. Pygame does images very well. For Breakout, I splurged and went for a fancy real space image from NASA. The code is very similar. First, just before the main loop, it loads the background image using the
pygame.image.load() function. Then, instead of filling the screen with color, it "blits" (copy the bits) the image to the screen at position (0,0). The effect is that the image is displayed on the screen.
import pygame pygame.init() screen = pygame.display.set_mode((800, 600)) clock = pygame.time.Clock() background_image = pygame.image.load('images/background.jpg') while True: screen.blit(background_image, (0, 0)) pygame.display.update() clock.tick(60)
Drawing Shapes
Pygame can draw anything. The
pygame.draw module has functions for drawing the following shapes:
- rect
- polygon
- circle
- ellipse
- arc
- line
- lines
- anti-aliased line
- anti-aliased lines
In Breakout, all the objects (except the text) are just shapes. Let's look at the draw() method of the various Breakout objects.
Drawing Bricks
Bricks are bricks. They are just rectangles. Pygame provides the
pygame.draw.rect() function, which takes a surface, a color, and a Rect object (left, top, width and height) and renders a rectangle. If the optional width parameter is greater than zero, it draws the outline. If the width is zero (which is the default), it draws a solid rectangle.
Note that the
Brick class is a subclass of
GameObject and gets all its properties, but it also has a color it manages itself (because there may be game objects that don't have a single color). Ignore the
special_effect field for now.
import pygame from game_object import GameObject class Brick(GameObject): def __init__(self, x, y, w, h, color, special_effect=None): GameObject.__init__(self, x, y, w, h) self.color = color self.special_effect = special_effect def draw(self, surface): pygame.draw.rect(surface, self.color, self.bounds)
Drawing the Ball
The ball in Breakout is just a circle. Pygame provides the
pygame.draw.circle() function that takes the color, center, radius and the options width parameter that defaults to zero. As with the
pygame.draw.rect() function, if the width is zero then a solid circle is drawn. The Ball is also a derived class of GameObject.
Since the ball is always moving (unlike the bricks), it also has a speed that is passed on the
GameObject base class to be managed. The Ball class has a little twist because its x and y parameters denote its center, while the x and y parameters passed to the
GameObject base class are the top-left corner of the bounding box. To convert from center to top-left corner, all it takes is subtracting the radius.
import pygame from game_object import GameObject class Ball(GameObject): def __init__(self, x, y, r, color, speed): GameObject.__init__(self, x - r, y - r, r * 2, r * 2, speed) self.radius = r self.diameter = r * 2 self.color = color def draw(self, surface): pygame.draw.circle(surface, self.color, self.center, self.radius)
Drawing the Paddle
The paddle is yet another rectangle that is indeed moving left and right in response to the player's pressing the arrow keys. That means that the position of the paddle may change from one frame to the next, but as far as drawing goes, it is just a rectangle that has to be rendered at the current position, whatever that is. Here is the relevant code:
import pygame import config as c from game_object import GameObject class Paddle(GameObject): def __init__(self, x, y, w, h, color, offset): GameObject.__init__(self, x, y, w, h) self.color = color self.offset = offset self.moving_left = False self.moving_right = False def draw(self, surface): pygame.draw.rect(surface, self.color, self.bounds)
Conclusion
In this part, you've learned about the TextObject class and how to render text on the screen. You've also got familiar with drawing objects like bricks, the ball, and the paddle.
In the meantime, remember we have plenty of Python content available for sale and for study in the Envato Market.
In part three, you'll see how event handling works and how Pygame lets you intercept and react to events like key presses, mouse movement, and mouse clicks. Then, we'll cover gameplay topics like moving the ball, setting the ball's speed, and moving the paddle.
><< | https://code.tutsplus.com/tutorials/building-games-with-python-3-and-pygame-part-2--cms-30082 | CC-MAIN-2021-04 | en | refinedweb |
This last section of the chapter deals with a feature that many EDI translators have: giving the user an option to call their own routines. These are known by various names such as user exits, escape routines, and user functions. In XSLT this feature is referred to as an extension function . The exact method for supporting extension functions isn't standard among all XSLT processors, so you can expect that there are variations. In this section I'll present a simple example of one way it can be done with Xalan, and I'll briefly discuss some approaches for MSXML.
Even when just considering Xalan there are several approaches and variations for supporting extension functions. I present here a fairly simple, basic approach that involves calling a static method in a class that isn't part of a package.
As a sample scenario, let's imagine that we have placed an order and that we want to determine the status of the order. XSLT doesn't know anything about orders or their status, so we'll implement this functionality with an extension function. Our simplified source document looks like this.
<?xml version="1.0" encoding="UTF-8"?> <ExtensionSource> <Cust>Mike</Cust> <Bevg>Espresso</Bevg> </ExtensionSource>
And here's the result we want to produce.
<?xml version="1.0" encoding="UTF-8"?> <ExtensionResult> <Customer>Mike</Customer> <Beverage>Espresso</Beverage> <OrderStatus>Your espresso is ready, sir.</OrderStatus> </ExtensionResult>
(May I please have a croissant to go with that?)
The Java method we want to invoke from our stylesheet is getOrderStatus. It resides in a MyExtensions class, as shown here.
public class MyExtensions { public static String getOrderStatus() { return "Your espresso is ready, sir."; } }
The function returns a Java String object that gets mapped to an XSLT string.
The approach involves three basic steps. We first have to tell Xalan about the extension function. This involves making it part of a specific namespace that Xalan associates with Java. In the stylesheet's root Element we declare the namespace. We assign it a namespace prefix of java, although any syntactically valid prefix would work. Since we've declared this namespace in the stylesheet we'll have to take special action to prevent its declaration from appearing in the result tree. We do this by using the exclude-result-prefixes Attribute discussed earlier in the chapter.
Secondly, we call the function in the appropriate place in the stylesheet. We use it in an XPath expression in the same way that we call a built-in XPath function. In our example we prefix the function name getOrderStatus with the java namespace prefix and the MyExtensions class name . The XPath expression is the value of the select Attribute of an xsl:value-of Element. Through these mechanisms the string returned by the function becomes the content of the Order Status Element.
Finally, when we actually run Xalan, we must be sure that the compiled Java class file (or jar archive if we have it in one) is in our class path .
Here's the stylesheet.
<?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet <xsl:output <xsl:template <ExtensionResult> <Customer> <xsl:value-of </Customer> <Beverage> <xsl:value-of </Beverage> <OrderStatus> <xsl:value-of </OrderStatus> </ExtensionResult> </xsl:template> </xsl:stylesheet>
The XSLT processor in MSXML 4.0 doesn't support extension functions or elements per se. Support for extensions is provided through embedded script implementations using msxsl:script and external objects using addObject. Consult the MSXML reference for details. If you're using a processor other than Xalan or MSXML's XSLT processor (which msxsl calls), consult its documentation if you want to add your own extensions. | https://flylib.com/books/en/4.381.1.129/1/ | CC-MAIN-2021-04 | en | refinedweb |
<stdio.h>
#include <readline/readline.h>
#include <readline/history.h>
char *
readline (const char *prompt);.
If that file does not exist or cannot be read, the ultimate default is
/etc.
The name and key sequence are separated by a colon. There can be no
whitespace between the name and the colon.
Chet Ramey, Case Western Reserve University
chet.ramey@case.
this manual page should be directed to
chet.ramey@case.edu.
It's too big and too slow. | http://www.linuxhowtos.org/manpages/3/readline.htm | CC-MAIN-2021-04 | en | refinedweb |
Update: I've shipped https-forward (you can install it on most Linuxes with snap) which transparently provides HTTPS certificates for internal 'dumb' services.
Like many folks, I'm incredibly pleased with the adoption of HTTPS/SSL everywhere on the web. But it's not an accident—free tools like Let's Encrypt have driven forward the adoption of certificates, and PAAS (Platform-As-A-Service) like App Engine now just give out certificates automagically.
Let's say you're writing your own server though, in Go. There's a package and idiom which will give you that same experience in your own code.
If you're running a webserver behind a frontend which handles HTTPS for you—like App Engine Flex does, as it just asks you to listen on :8080—this blog post isn't for you, your provider is handling your cert.
Stop reading now!
The package you need is
golang.org/x/crypto/acme/autocert, and it's so amazingly simple to use. Let's see how:
// add your listeners via http.Handle("/path", handlerObject) listener := autocert.NewListener("yourdomain.com") log.Fatal(http.Serve(listener, nil))
The Longer Version
But there's a few reasons you might want to specify the configuration yourself. The slightly longer setup looks something like:
certManager := autocert.Manager{ Prompt: autocert.AcceptTOS, HostPolicy: autocert.HostWhitelist("yourdomainname.com"), Cache: autocert.DirCache("cache-path"), } server := &http.Server{ Addr: ":https", TLSConfig: &tls.Config{ GetCertificate: certManager.GetCertificate, NextProtos: []string{acme.ALPNProto}, }, } // add your listeners via http.Handle("/path", handlerObject) log.Fatal(server.ListenAndServeTLS("", ""))
(For complete source you can download and run, see here ⤵️💻)
Regardless of the approach, your server will run on port 443 (this has to happen: the process calls you back on this port), and automagically talk to Let's Encrypt to provide certificates.
If you're having trouble:
- make sure the domain is correctly configured to point to your server, and remember you can't just run
wget localhost—you need to specify the full domain
- for additional domains (e.g., a "www." prefix), just add them to
NewListeneror
HostWhitelist
For bonus points, you should also listen on plain old HTTP. The
autocert package provides a built-in helper which redirects users to HTTPS:
go func() { h := certManager.HTTPHandler(nil) log.Fatal(http.ListenAndServe(":http", h)) }()
These two handlers are how I serve my test domain, affoga.to. ☕🍨
⚠️ Caveats
If you're directly hosting your own software on your own machines (virtualized or not), it's worth listing some caveats and thoughts about web servers generally.
Building with an old Go version
Ubuntu 16.04 ships with Go version 1.6. As of April 2018,
autocert needs a later version (you'll get errors about missing
context).
The instructions to install a later Go are here.
Listening on system ports
On *nix, if you want to listen on ports 80 and 443, your Go binary naïvely needs to run as a privileged user (e.g.
root). This is typically a Bad Idea™.
You can use
setcap to privilege your binary. Every time you build
server, you'll need to grant the
CAP_NET_BIND_SERVICE capability, which allows the binary to listen on system ports (0-1024):
sudo setcap CAP_NET_BIND_SERVICE+ep server
Any user who runs this binary will now be permitted to listen on the correct ports, and e.g. you can run your binary as
nobody.
Cache needs to be writable
The cache folder used by
autocert.Manager can't be shared between users (which is a challenge for testing), and its internal error messages about this aren't great.
My preference is to just use a consistent cache path per-user. So generate a path based on the current username, rather than hard-coding it:
import ( "path/filepath" "os" "os/user" ) func cacheDir() (dir string) { if u, _ := user.Current(); u != nil { dir = filepath.Join(os.TempDir(), "cache-golang-autocert-" + u.Username) if err := os.MkdirAll(dir, 0700); err == nil { return dir } } return "" }
You don't need to provide a cache, but removing it will slow down startup and your server won't be resilient to Let's Encrypt being down.
Sending a HSTS header
While the example at the top of this post includes a pure HTTP handler to redirect users to your HTTPS listener, ideally, you'd like to instruct a user's browser to do this for you and avoid the delay (and/or security implications).
By returning a HSTS header on every request, you instruct the client's browser to only talk to you over HTTPS. To ensure this for the next six months, add:
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Strict-Transport-Security", "max-age=15768000 ; includeSubDomains") // ... rest of handler here })
⚙️ Using SystemD to run at startup
If you're not using a helper service like Snap, then you'll need it to start up at boot. You can use SystemD for this.
Let's create a service file which you can add to
/etc/systemd/system. Here's my
httpd.service:
[Unit] Description=Go webserver After=network.target [Service] ExecStart=/home/sam/http/server # path to binary WorkingDirectory=/home/sam/http # folder for binary User=nobody Group=nogroup ProtectSystem=yes AmbientCapabilities=CAP_NET_BIND_SERVICE # lets `nobody` user bind ports 80, 443 [Install] WantedBy=multi-user.target
Once you've installed the service file, you can run:
sudo systemctl start httpd # and see its output: sudo journalctl -f -u httpd # and enable on boot: sudo systemctl enable httpd
And that's it.
I hope this has been useful, at least as a reference guide for folks learning how to get started with certs! If this post has been useful, click one of those heart 👉❤️ buttons below, or let me know on Twitter.
Posted on by:
Sam Thorogood
Developer Relations for Web at Google.
Discussion
Great Post, but I'm using Centos 7 & I'm unsure if such as setcap & AmbientCapabilities are available in Centos 7.
If anybody could show me a centos 7 alternative to the service file I'd very much appreciate it.
Thanks.
Jonathan | https://dev.to/samthor/-magic-http-certs-in-go-14n8 | CC-MAIN-2020-40 | en | refinedweb |
TIMERFD_CREATE(2) Linux Programmer's Manual TIMERFD_CREATE(2)
timerfd_create, timerfd_settime, timerfd_gettime - timers that notify via file descriptors
. See clock_getres(2) for some further details on the above clocks. open file description (see open(2)) referred to by the new file descriptor‐ time() follow‐ ing additional inte‐ g.) If the associated clock is either CLOCK_REALTIME or CLOCK_REALTIME_ALARM, the timer is absolute (TFD_TIMER_ABSTIME), and the flag TFD_TIMER_CANCEL_ON_SET was not specified when calling timerfd_settime(), then a discon‐ tinuous negative change to the clock (e.g., clock_settime(2)) may cause read(2) to unblock, but return a value of 0 (i.e., no bytes read), if the clock change occurs after the time expired, but before the read(2) on the file descriptor. is not valid.. EPERM clockid was CLOCK_REALTIME_ALARM or CLOCK_BOOTTIME_ALARM but the caller did not have the CAP_WAKE_ALARM capability.: ECANCELED See NOTES. EINVAL new_value is not properly initialized (one of the tv_nsec falls outside the range zero to 999,999,999). EINVAL flags is invalid.
These system calls are available on Linux since kernel 2.6.25. Library support is provided by glibc since version 2.8.
These system calls are Linux-specific.
Suppose the following scenario for CLOCK_REALTIME or CLOCK_REALTIME_ALARM timer that was created with timerfd_create(): (a) The timer has been started (timerfd_settime()) with the TFD_TIMER_ABSTIME and TFD_TIMER_CANCEL_ON_SET flags; (b) A discontinuous change (e.g., settimeofday(2)) is subsequently made to the CLOCK_REALTIME clock; and (c) the caller once more calls timerfd_settime() to rearm the timer (without first doing a read(2) on the file descriptor). In this case the following occurs: · The timerfd_settime() returns -1 with errno set to ECANCELED. (This enables the caller to know that the previous timer was affected by a discontinuous change to the clock.) · The timer is successfully rearmed with the settings provided in the second timerfd_settime() call. (This was probably an implementation accident, but won't be fixed now, in case there are applications that depend on this behaviour.) 5.08 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2020-08-13 TIMERFD_CREATE(2)
Pages that refer to this page: syscalls(2), proc(5), procfs(5), time_namespaces(7) | https://man7.org/linux/man-pages/man2/timerfd_settime.2.html | CC-MAIN-2020-40 | en | refinedweb |
In a previous CODE Magazine article (), I described an open-source light scripting language that can be easily customized. I called this language CSCS (Customized Scripting in C#), because it's implemented in C# and its functionality can be tweaked and extended in C#. In another CODE Magazine article (), you can read how this language can be used on top of Xamarin to create cross-platform native mobile apps in a scripting language.
In this article, I'm going to show how you can use CSCS scripting in Unity to change a game or an app functionality on the fly. CSCS can be used to add possibilities for the game designer or for the game users. It's called “modding.” Modding is slang derived from the verb modify. It refers to performing a function not originally intended by the designer of the game. Mods can be quests, items, game elements (houses for the player, towns, shops, factions), or altering technical things (scripts, textures, meshes).
“Modding” refers to adding, altering, or purging the content of a game to perform a function not originally intended by the designer.
The main idea is to enable customizing your app or game as much as possible without recompilation. Not only that, the customization can also take place at runtime, after the game or app has already started.
All of the custom scripting and modding functionality is done using the CSCS scripting language. The CSCS full implementation in C# is on GitHub (the link is in the sidebar). Also, you're going to see how to use Visual Studio Code to debug custom scripts running in Unity.
In this article, I'm going to show how you can add customized scripting to a Unity project, taking the Microsoft Maquette Unity project as an example. Even though this example is an application, adding scripting to a game is similar.
Microsoft Maquette
Microsoft Maquette () is a brand-new Microsoft Windows Mixed Reality tool for creating immersive prototypes using a virtual reality (VR) headset and hand controllers. Maquette is implemented in Unity and it's also very easy to export the content you create with Maquette into your Unity projects. At the time of this writing, the tool is still in beta, with no scheduled release date. This tool makes it especially easy to create a spatial prototype in 3D. See Figure 1 for an example of content created with Microsoft Maquette.
It's a lot of fun to create different objects in 3D using a VR Headset and hand controllers. However, it could also be useful to be able to create some common scenes and objects in a script file and then to add these objects to an existing (or a new) Maquette project on the fly. This is where customized scripting can be used. So, the Microsoft Maquette team has decided to use CSCS to investigate scripting to extend its functionality and give their users access to scripted extensions.
In the next section, you'll see how you can add a CSCS scripting module to a Unity project using a Microsoft Maquette project as an example.
If our lives are already written, it would take a courageous man to change the script.
– Alan Wake
General Structure of a Unity Project
After downloading the CSCS parsing module from the GitHub, you can include it in the Assets area, as shown in Figure 2.
The CSCS folder has identical files as the C# files under the CSCS folder in GitHub. This folder contains all of the necessary files to parse CSCS scripts. Because everything is open source, you're free to do any additions and modifications there.
Next, you need a script controller object. It initializes the CSCS scripting and makes sure the scripts are run when required and on the correct thread.
Unity has a few special methods that are called from the main thread by the Unity framework. This happens for objects derived from the special Unity MonoBehaviour class. MonoBehaviour.Awake() method is called only once when the game is starting. MonoBehaviour.Update() method is called every frame from the Unity Main thread. You're going to use these two methods to add the custom scripting functionality to your game.
Create your class deriving from MonoBehaviour as follows:
public class MaquetteScriptController: MonoBehaviour { void Awake() { // Code here will be executed once. } void Update() { // Code here will be executed each frame. } }
You'll be adding some muscle to this class in the next sections.
You can add the new controller either directly from Unity or using the GameObject.AddComponent() method from any other real game object that already exists in a Unity scene as follows:
MaquetteScriptController myScriptController = gameObject. AddComponent<MaquetteScriptController>();
Note that in either case, MaquetteScriptController class should be initialized only once and used exclusively as a singleton.
Running the Whole Script on the Unity Main Thread
In Unity, all of the GUI related functionality happens on the main thread, including creating and modifying different game objects. If you try calling some GUI related functions other than from the main thread, you get an exception like this: “get_gameObject can only be called from the main thread.”
If scripting triggers execution of a custom code that can modify Unity game objects, that code must be run on the Unity main thread. There are different ways of doing this; here, I'm going to propose one of them that's relatively common, but you're free to choose any other way.
A C# unit containing the script to be run will be the ScriptCommand structure. It will be also responsible to call the CSCS core scripting classes for parsing and execution of a CSCS script. Check out the implementation of the ScriptCommand structure in Listing 1.
You collect all of the incoming requests to run custom scripts in a ConcurrentQueue object consisting of the ScriptCommand objects. You use a concurrent queue because it's thread safe: the scripts can be queued and dequeued from different threads. You define this queue in the MaquetteScriptController class as follows:
static ConcurrentQueue<ScriptCommand> m_scriptQueue = new ConcurrentQueue<ScriptCommand>();
You add each incoming request to this queue as follows:
public static void AddScriptToQueue(string code) { ScriptCommand command = new ScriptCommand(code); m_scriptQueue.Enqueue(command); }
To consume this queue, there are two possibilities. If custom scripts must be executed on the main thread, modify the Update() method that you defined in the MaquetteScripController as follows:
void Update() { while (m_scriptQueue.Count != 0) { ScriptCommand next; if (m_scriptQueue.TryDequeue(out next)) { next.Execute(); } } }
Note that you don't have to use any locks here because they are taken care of by the .NET Framework.
If a custom script modifies the GUI, the code must be evaluated on the Unity main thread.
Another possibility to consume the queue is when you don't have to run the CSCS script on the Unity main thread. Or maybe you don't have to run the whole script on the main thread, but just some parts of it - this can be customized in the C# implementation of a CSCS function - you'll see some examples of this later on.
To consume the queue and execute the scripts not on the Unity main thread, you need to start a separate thread:
public void OnStartup() { Task.Run(() => { RunScriptingEngineThread(); }); }
Where the implementation of the RunScriptingEngineThread() is as follows:
public static void RunScriptingEngineThread() { while (!m_ScriptQuitEvent.WaitOne(0)) { while (m_scriptQueue.Count != 0) { ScriptCommand next; if (m_scriptQueue.TryDequeue(out next)) { next.Execute(); } m_ScriptLoopEvent.WaitOne(1000); } } }
The m_ScriptLoopEvent and m_ScriptQuitEvent are auto reset event handlers:
static AutoResetEvent m_ScriptLoopEvent = new AutoResetEvent (false); static AutoResetEvent m_ScriptQuitEvent = new AutoResetEvent (false);
In this case, the implementation of the Update() method is simpler - it just signals that the processing may take place now (in case there are pending requests). It also makes sure that the script processing doesn't occur more often than every frame:
void Update() { m_ScriptLoopEvent.Set(); }
The m_ScriptQuitEvent makes sure that the scripting thread is finished on Unity shutdown:
public void OnShutdown() { m_ScriptQuitEvent.Set(); m_ScriptLoopEvent.Set(); }
You've seen how to process incoming scripting requests. But how do they get into Unity? This is a static auxiliary method to add a file with a CSCS script to the execution queue:
public static void ExecuteScript( string scriptFile){ if (File.Exists(scriptFile)) { string sCode = "include(\"" + scriptFile + "\");"; AddScriptToQueue( sCode ); } }
You can call this method from anywhere in your Unity code. In particular, you can call it from an initialization routine, so that a custom script for setting up initial scenes and game objects can be called every time a Unity game or app is started.
In the next section, you're going to see another way of triggering custom script execution in Unity using Visual Studio Code CSCS Debugger.
Connecting to Unity from Visual Studio Code
In a previous CODE Magazine article () you read how to create a Visual Studio Code Debugger and a REPL Extension for any language. As an example, I used CSCS. You don't have to re-implement the extensions, but just take the ones in the Visual Studio Marketplace (they're free to use; see the links in the sidebar).
Using a debugger extension, you can connect from Visual Studio Code (let's call it VS Code for brevity) to Unity and execute any CSCS script in Unity, set breakpoints, check variable values, go through the call stack, etc. Using the REPL extension, you can execute any code selected in the VS Code editor.
Using the REPL VS Code extension, you can add new Game Objects to a running Unity instance on the fly.
The code for the CSCS receiving part on the Unity side is mostly in Breakpoints.cs, DebuggerServer.cs, and Debugger.cs files. All of these files are already in the CSCS core directory (see Figure 2). You can start the Debugger server in the Awake() method of the MaquetteScriptController like this:
void Awake() { SplitAndMerge.Interpreter.Instance.Init(); SplitAndMerge.DebuggerServer. StartServer(13337); }
The Awake() method also initializes the CSCS main scripting functions.
The port 13337 is the default port to where the VS Code CSCS Debugger extension connects (note that both Unity and VS Code are supposed to run on the same computer). If you want to change the port number, don't forget to change it in the VS Code CSCS Debugger configuration settings as well (in the launch.json file).
The CSCS Debugger server keeps an internal queue of requests received from the VS Code. To process this queue in Unity, there are the same two possibilities I discussed earlier: Either run the CSCS scripts on the Unity main thread or in a separate thread. In the case of processing on the Unity main thread, add the following code to the MaquetteScriptController.Update() method:
if (SplitAndMerge.DebuggerServer. DebuggerAttached) { SplitAndMerge.DebuggerServer.ProcessQueue(); }
In the case of processing CSCS scripts on a separate thread, add the code above to the RunScriptingEngineThread() (see Listing 2).
That's it! Before you see real examples, you need to see how to execute only a part of the script on the Unity main thread.
Running Parts of the Code on the Unity Main Thread
First, define a static concurrent queue in the MaquetteScriptController class. It will contain the C# code to be executed on the Unity main thread:
static ConcurrentQueue<Action> m_actionQueue = new ConcurrentQueue<Action>();
Then you can add requests for the C# code to be executed on the main thread in the following method in the MaquetteScriptController class:
public static void ExecuteInUpdate( Action action) { m_actionQueue.Enqueue(action); }
Now, in order to execute the code on the Unity main thread, add this to the MaquetteScriptController.Update() method:
while (m_actionQueue.Count != 0) { Action action; if (m_actionQueue.TryDequeue(out action)) { action.Invoke(); } }
An example of executing some code on the Unity main thread is the following:
// Not on the Unity Main Thread ManualResetEvent mre = new ManualResetEvent(false); ScriptController.ExecuteInUpdate(() => ( () => { // C# code here executed on the Main Thread mre.Set(); }); // Not on the Unity Main Thread mre.WaitOne();
Check out the GetProperty() and SetProperty() methods in Listing 3 to see how parts of the code are scheduled on the main thread.
Now let's see an example of running custom scripts in Unity at runtime using the techniques you've developed so far and using Microsoft Maquette as an example Unity project.
Adding Objects to Maquette from Visual Studio Code at Runtime
The VS Code CSCS debugger in action is shown in Figure 3. It shows a script that adds a cube, a sphere, a capsule, a cylinder, and a tube to the current Microsoft Maquette scene. In this figure, the VS Code Debugger is connected to the CSCS Debugger server on the Microsoft Maquette Unity side.
The result of running this script in Microsoft Unity is shown in Figure 4.
As you can see, all five figures were added to the current scene at the place I was looking with my VR Headset.
Now let's see how it all worked.
To add new CSCS functions to the parser, the following statements are used in the initialization phase:
public static void DefineScriptFunctions() { ParserFunction.RegisterFunction( "CreateCube", new CreateCubeFunction()); ParserFunction.RegisterFunction( "CreateSphere", new CreateSphereFunction()); ParserFunction.RegisterFunction( "CreateCapsule", new CreateCapsuleFunction()); ParserFunction.RegisterFunction( "CreateTube", new CreateTubeFunction()); }
Each of the registered functions must be a class deriving from the SplitAndMerge.ParserFunction class.
A fragment of the implementation of the CreateCubeFunction class is shown in Listing 4. I provided a skeleton but omitted a few lengthy details of building a Cube because they are out of the scope of this article.
Implementing Scripting Objects in an Object-Oriented style
You probably noticed in Figure 3 that you can perform a few operations on an object passed to the PutInFrontOfUser() CSCS method:
object.position=user.PositionInFront(0.6); object.rotation=user.RotationToFace( object); object.scale =V3( 0.1, 0.1 , 0.1 ); object.color =Color( r, g, b ); object.translate( V3( x, y, 0.0 ) ); return object;
The CSCS object variable above can be any shape, such as a cube, a sphere, etc. How can you implement such CSCS objects in C#?
All of the CSCS variables and objects correspond to a SplitAndMerge.Variable C# object. Each Variable object has a type (string, number, array, etc.). There's a special Variable type called OBJECT. Using this Variable type, you implement CSCS objects.
You initialize this SplitAndMerge.Variable object with another object that implements the ScriptObject interface:
public interface ScriptObject { // T riggered by "a.name = value;" Variable SetProperty(string name, Variable value); // T riggered by "x = a.name;" // If args are null, tr iggered by Debugger // If args are not empty, triggered by a // function call: "y = a.name(arg1, ...);" Variable GetProperty(string name, List<Variable> args = null, ParsingScript script = null); // Returns all properties that it implements List<string> GetProperties(); }
So, in order to implement an object in CSCS, you create a C# class implementing the ScriptObject interface and then pass it to the SplitAndMerge.Variable constructor. See an example of this in Listing 3 (in the CreateEntityOfType() method):
EntityScriptObject myObject = new EntityScriptObject(); Variable newValue = new Variable (myObject);
The EntityScriptObject class implements the ScriptObject interface and you can check out a fragment of its implementation in Listing 4. (I also omitted the lengthy Unity and Maquette related details that aren't relevant to this article).
Wrapping Up
Using Microsoft Maquette as an example, you saw how you can do modding in Unity - altering game (or app) functionality either at runtime or just before starting the game without the need of a recompilation.
I hope you enjoyed the Microsoft Maquette example and are now ready to use scripting in your own projects, binding Unity functionality to custom scripting functions.
All of the CSCS code is open source. See the accompanying CSCS source code download and the GitHub links in the sidebar for the most up-to date developments. Note that Microsoft Maquette is proprietary software and therefore its source code is not available for download.
For manipulating Unity games by debugging a CSCS script in Visual Studio Code, install the Visual Studio Code CSCS Debugger and CSCS REPL extensions. See the links in the sidebar as well.
I'd be happy to hear back from you about how you're using customized scripting with Unity.
I'd like to give special thanks to Stefan Landvogt from the Microsoft Maquette team for providing me with priceless tips and suggestions.
Listing 1: Implementation of the ScriptCommand C# Class
public struct ScriptCommand { public string command; public SplitAndMerge.Variable result; public string output; public string errorMessage; public ScriptCommand(string sCommand) { command = sCommand; } public void Execute() { output = ""; try { result = SplitAndMerge.Interpreter.Instance.Process(command); output = SplitAndMerge.Interpreter.Instance.Output; errorMessage = ""; } catch (Exception exception) { errorMessage = exception.Message; SplitAndMerge.ParserFunction. InvalidateStacksAfterLevel(0); } } }
Listing 2: A Fragment of the Implementation of the MaquetteScriptController Class
public class MaquetteScriptController : MonoBehaviour { static ConcurrentQueue<ScriptCommand> m_scriptQueue = new ConcurrentQueue <ScriptCommand>(); static AutoResetEvent m_ScriptLoopEvent = new AutoResetEvent (false); static AutoResetEvent m_ScriptQuitEvent = new AutoResetEvent (false); static ConcurrentQueue<Action> m_actionQueue = new ConcurrentQueue<Action>(); void Awake() { SplitAndMerge.Interpreter.Instance.Init(); SplitAndMerge.DebuggerServer.StartServer(13337); } public void OnStartup() { MaquetteFunctions.DefineScriptFunctions(); Task.Run(() => { RunScriptingEngineThread(); }); } public void OnShutdown() { m_ScriptQuitEvent.Set(); m_ScriptLoopEvent.Set(); } public static void ExecuteScript(string scriptFile) { if (File.Exists(scriptFile)) { string sCode = "include(\"" + scriptFile + "\");"; AddScriptToQueue( sCode ); } } public static void AddScriptToQueue( string sCode ) { ScriptCommand command = new ScriptCommand(sCode); m_scriptQueue.Enqueue(command); } public void Update() { m_ScriptLoopEvent.Set(); while (m_actionQueue.Count != 0) { Action action; if (m_actionQueue.TryDequeue(out action)) { action.Invoke(); } } } public static void ExecuteInUpdate(Action action) { m_actionQueue.Enqueue(action); } public static void RunScriptingEngineThread() { while (!m_ScriptQuitEvent.WaitOne(0)) { if (SplitAndMerge.DebuggerServer.DebuggerAttached) { SplitAndMerge.DebuggerServer.ProcessQueue(); } while (m_scriptQueue.Count != 0) { ScriptCommand next; if (m_scriptQueue.TryDequeue(out next)) { next.Execute(); } } m_ScriptLoopEvent.WaitOne(500); } } }
Listing 3: A fragment of the Implementation of the EntityScriptObject Class
public class EntityScriptObject : ScriptObject { static List<string> s_properties = new List<string> { "color", "position", "rotation", "scale", "translate" }; public virtual List<string> GetProperties() { return s_properties; } public Variable GetProperty(string sPropertyName, List<Variable> args = null, ParsingScript script = null) { Variable newValue = Variable.EmptyInstance; ManualResetEvent mre = new ManualResetEvent (false); MaquetteScriptController.ExecuteInUpdate(() => ( () => { // W ork on the Unity Main Thread ... switch (sPropertyName) { case "color": newValue = GetColorProperty(); case "position": newValue = GetPositionProperty(); case "rotation": newValue = GetRotationProperty(); case "scale": newValue = GetScaleProperty(); case "translate": newValue = args != null && args.Count > 0 ? Translate(args[0]) : Variable.EmptyInstance; } mre.Set(); }); mre.WaitOne(); return newValue; } public virtual Variable SetProperty(string sPropertyName, Variable argValue) { Variable newValue = Variable.EmptyInstance; ManualResetEvent mre = new ManualResetEvent (false); MaquetteScriptController.ExecuteInUpdate(() => ( () => { // W ork on the Unity Main Thread ... switch (sPropertyName) { case "color": newValue = SetColorProperty(GetColorFromVariable(argValue)); case "position": newValue = SetPositionProperty(GetVector3FromVariable(argValue)); case "rotation": newValue = SetRotationProperty(GetVector3FromVariable(argValue)); case "scale": newValue = SetScaleProperty(GetVector3FromVariable(argValue)); case "translate": newValue = Translate(argValue); } mre.Set(); }); mre.WaitOne(); return newValue; } public Variable GetPositionProperty() { Vector3 myVector3 = m_EntityObject.transform.position; return CreateVector3Variable(myVector3); } public Variable SetPositionProperty(Vector3 aVector3) { m_EntityObject.transform.position = aVector3; m_EntityObject.SerializeState(); return Variable.EmptyInstance; } public Variable Translate(Variable vectorVariable) { Vector3 aVector3 = GetVector3FromVariable(vectorVariable); m_EntityObject.transform.Translate(aVector3); m_EntityObject.SerializeState(); return Variable.EmptyInstance; } public static Variable CreateVector3Variable(Vector3 aVector) { Variable newValue = new Variable(Variable.VarType.ARRAY); newValue.AddVariable(new Variable(aVector.x)); newValue.AddVariable(new Variable(aVector.y)); newValue.AddVariable(new Variable(aVector.z)); return newValue; } MqEntity m_EntityObject = null; }
Listing 4: A Fragment of the Implementation of the CreateCubeFunction Class
class CreateCubeFunction: ParserFunction { static Variable CreateEntityOfType(string sPrimitiveType, List<Variable> args = null) { EntityScriptObject myObject = new EntityScriptObject(); Variable newValue = new Variable (myObject); ManualResetEvent mre = new ManualResetEvent (false); MaquetteScriptController.ExecuteInUpdate(() => ( () => { // Some work on the Unity Main Thread ... mre.Set(); }); mre.WaitOne(); return newValue; } protected override Variable Evaluate(ParsingScript script) { List <Variable> args = script.GetFunctionArgs(); string sPrimitiveType = Utils.GetSafeString(args, 0, "Cube"); Variable newValue = CreateEntityOfType(sPrimitiveType); return newValue; } } | https://www.codemag.com/article/1903081 | CC-MAIN-2020-40 | en | refinedweb |
BleSerialPeripheralRK (community library)
Summary
Library to simplify using the BLE UART peripheral
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
BleSerialPeripheralRK
Library to simplify using BLE UART peripheral mode on Gen 3 devices
Introduction
Particle Gen 3 devices (Argon, Boron, Xenon) running Device OS 1.3.0-rc.1 and later have support for BLE (Bluetooth Low Entergy) in central and peripheral roles.
Nordic Semiconductor created a UART peripheral protocol to allow central devices (like mobile phones) to connect to a BLE device and read UART-like data streams. This is supported not only by the nRF Toolbox app, but also some other apps like the Adafruit Bluefruit app.
There is a code example in the docs, however this class encapsulates the BLE stuff and makes a Serial-like interface to it. Among the benefits:
- Reading is easy using standard functions like read(), readUntil(), readString(), etc. like you can from Serial, Serial1, etc..
- Writing is easy and buffered, allowing not only write() to write a byte, but also all of the variations of print(), println(), printf(), printlnf(), etc. that are available when using Serial, etc..
- All of the BLE stuff is encapsulated so you don't have to worry about it.
Documentation can be found at:
Github repository:
License: MIT
Example
There is one example in 1-simple-BleSerialPeripheralRK:
#include "BleSerialPeripheralRK.h" SerialLogHandler logHandler; SYSTEM_THREAD(ENABLED); // First parameter is the transmit buffer size, second parameter is the receive buffer size BleSerialPeripheralStatic<32, 256> bleSerial; const unsigned long TRANSMIT_PERIOD_MS = 2000; unsigned long lastTransmit = 0; int counter = 0; void setup() { Serial.begin(); // This must be called from setup()! bleSerial.setup(); // If you don't have any other services to advertise, just call advertise(). // Otherwise, call getServiceUuid() to get the serial service UUID and add that to your // custom advertising data payload and call BLE.advertise() yourself with all of your necessary // services added. bleSerial.advertise(); } void loop() { // This must be called from loop() on every call to loop. bleSerial.loop(); // Print out anything we receive if(bleSerial.available()) { String s = bleSerial.readString(); Log.info("received: %s", s.c_str()); } if (millis() - lastTransmit >= TRANSMIT_PERIOD_MS) { lastTransmit = millis(); // Every two seconds, send something to the other side bleSerial.printlnf("testing %d", ++counter); Log.info("counter=%d", counter); } }
Among the important things:
You normally instantiate a BleSerialPeripheralStatic object as a global variable. The first number in the <> is the transmit buffer size and the second is the receive buffer size.
// First parameter is the transmit buffer size, second parameter is the receive buffer size BleSerialPeripheralStatic<256, 256> bleSerial;
Because the data is buffered and only sent from loop() the transmit buffer must be larger than the amount of data you intend to sent at once, or the maximum data that will accumulate between calls to loop.
Likewise, since data is read from loop but received asynchronously by BLE, you must have a receive buffer that is large enough to hold any data between times you will be processing it from your loop function.
If there is a data overflow situation, the data is discarded.
Be sure to call this from your setup() function.
bleSerial.setup();
If you are only using BLE UART you can call:
bleSerial.advertise();
If you are advertising multiple services, instead call
bleSerial.getServiceUuid() to get the UART serial UUID and add it with your own services:
BleAdvertisingData data; data.appendServiceUUID(bleSerial.getServiceUuid()); // append your own service UUIDs here BLE.advertise(&data);
Be sure to call this from loop, as often as possible.
bleSerial.loop();
In this example we used
bleSerial.readString() but there are many method of the Stream class to read. Beware of blocking, however. If you are waiting to read a string, you won't be calling
bleSerial.loop() and data won't be transmitted during that time.
Finally, you can print to BLE serial using all of the standard Stream methods. The example uses
bleSerial.printlnf() to print a line using
sprintf formatting.
Browse Library Files | https://docs.particle.io/reference/device-os/libraries/b/BleSerialPeripheralRK/ | CC-MAIN-2022-27 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.