text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
A minimal node SOAP client
This branch is forked from.
The last integration is based on the latest commit b46eefcad8 at 2012-12-11.
Install: npm install soap-js
This branch fixes the defects below.
0.2.8 Fix the issue that namespace should propagate to child nodes and clean up of unused code. Accept the pull request
0.2.9 Fix the mistake when merge the previous request.
0.2.10 Fix the issue in file lib/wsdl.js. the original code cannot handle the array correctly
0.2.11 Currently, the attribute whose value is zero or false will not be added to the xml content. This change allows zero and false as elements value. | https://www.npmjs.com/package/soap-js | CC-MAIN-2016-50 | en | refinedweb |
Configuration Handlers in .NET
Pages: 1, 2
If the configuration system cannot find a node that matches the path we asked for,
it does not call Create() on the section handler and
ConfigurationSettings.GetConfig() simply returns null.
Returning null is a bit of a pain. Any place we call GetConfig(),
we'll have to check the return value for null and do the right thing, loading defaults if necessary.
That's rather error-prone, but we can wrap this up to make it easier to use.
A factory method on the BasicSettings class that checked for null and
loaded a default, if necessary, would do the trick. We'll move the code that
grabs the settings object from the EntryPoint to our new factory method and rewrite the
EntryPoint to use the new factory method:
Create()
ConfigurationSettings.GetConfig()
GetConfig()
BasicSettings
EntryPoint
using System;
using System.Configuration;
using System.Xml.Serialization;
using cs = System.Configuration.ConfigurationSettings;
namespace BasicConfigSample
{
public class BasicSettings
{
/* same as before */
private BasicSettings() {
FirstName = "<<not";
LastName = "set>>";
}
const string section = "blowery.org/basics";
public static BasicSettings GetSettings() {
BasicSettings b = (BasicSettings)cs.GetConfig(section);
if(b == null)
return new BasicSettings();
else
return b;
}
}
class EntryPoint
{
[STAThread]
static void Main(string[] args) {
BasicSettings settings = BasicSettings.GetSettings();
Console.WriteLine("The configured name is {0}", settings);
}
}
/* SectionHandler stays the same */
}
The astute reader might notice that the default instance looks like a prime
candidate for becoming a Singleton. Luckily, the configuration framework
already caches the result of the call to IConfigurationSectionHandler.Create(),
so it's one less piece we have to implement.
Singleton
IConfigurationSectionHandler.Create()
So far, we've covered how to implement a very simple section handler and
how to wrap up the calls to GetConfig() to get around the
null return problem. Next, we're going to dive into configuration
parenting and discuss how it affects our custom section handler.
Remember how an element in the configuration file
that has no matching <section> tag causes the
configuration system to thrown a ConfigurationException? If
you've played around with web.config files, you may be wondering how the
<system.web> section works; no
<configSections> or <section> elements
are present, so how does the configuration system know which
IConfigurationSectionHandler to use? The answer lies in
configuration file parenting.
<section>
ConfigurationException
<system.web>
<configSections>
IConfigurationSectionHandler
When the configuration system parses our configuration file, it also
parses a master configuration file, stored in a file called machine.config, which
lives in the Config folder of your framework install directory. Open the file up;
contained within is a long list of <sectionGroup> and
<section> tags at the top of the file.
When the configuration system can't find a section handler in your configuration file, it
walks up to machine.config and checks there. If you decide to register your
section handler in machine.config, you should seriously consider strongly naming
the assembly and registering it with the GAC. That way, anyone who looks in machine.config
can use your configuration handler. Strictly speaking, you don't have to register your assembly
in the GAC, but it's a good idea.
<sectionGroup>
The machine.config file can also hold machine-wide default settings.
If you search machine.config for <system.web>, you'll
find all of the defaults used by ASP.NET. Changes to this file would affect all of the ASP.NET applications
running on that machine.
So what does all this mean to the lowly developer implementing IConfigurationSectionHandler?
Simply, it means that we may have to parse and merge settings from different config files.
In fact, for ASP.NET applications, Create() can be called many times, once for each
directory above the ASP.NET page in question that defines a web.config file, plus possibly once more for
machine.config. For example, if we defined our configuration section handler in
machine.config and had the IIS layout shown below, our configuration handler would be called
four times.
A couple interesting things about the ASP.NET implementation:
First, if a web.config in the hierarchy doesn't contain settings, it will be
skipped, and the next config file in the hierarchy will be checked. Second, there's a discrepancy between the
ASP.NET configuration system and the DefaultConfigurationSystem used by console and WinForms applications. If a section is redefined in a child configuration file, ASP.NET deals with it and doesn't throw an error. However, a console or WinForms app will throw a ConfigurationException, stating that the section in question has already been defined. I rather like the ASP.NET approach; it supports xcopy deployment (I don't have to know
if the section handler is already registered) and just does what I would expect. At the very least,
it would be nice if the framework teams resolved this difference before v1.1 gets released.
Anyway, back to how parenting affects implementing the interface.
DefaultConfigurationSystem
xcopy
When the configuration system finds a configuration element in machine.config and in
your local config file, it first calls Create() using the XmlNode in
machine.config, then calls Create() using the XmlNode in
our config file. When it calls Create() for our local file, it passes in the
object returned from the call to Create() on machine.config's XmlNode.
We are expected to do the right thing when it comes to merging the current node with the parent settings.
The chaining always starts with machine.config and walks down the directory tree.
XmlNode
Our little section handler from before isn't well-suited for interesting override behavior, so
let's write a new one. This one will sum the value attribute of a <sum>
element. Also, instead of looking for blowery.org/code,
we'll look for blowery.org/sum.
value
<sum>
blowery.org/code
blowery.org/sum
using System;
using System.Configuration;
using System.Xml.Serialization;
using cs = System.Configuration.ConfigurationSettings;
namespace ParentingSample
{
public class Settings
{
const string section = "blowery.org/sum";
private int sum;
internal Settings(int start) {
sum = start;
}
private Settings() {
sum = 0;
}
public int Total {
get { return sum; }
}
internal int Add(int a) {
return sum += a;
}
public override string ToString() {
return Total.ToString();
}
public static Settings GetSettings() {
Settings b = (Settings)cs.GetConfig(section);
if(b == null)
return new Settings();
else
return b;
}
}
class SectionHandler : IConfigurationSectionHandler
{
public object Create(object parent,
object context,
XmlNode section)
{
int num = int.Parse(section.Attributes["value"].Value);
if(parent == null)
return new Settings(num);
Settings b = (Settings)parent;
b.Add(num);
return b;
}
}
}
Notice the new code in the SectionHandler. If parent is not null, we cast it to
a BasicSettings and call Add() with the parsed value. Here, we handle merging
the current node with the parent settings. Otherwise, we start the chain by
creating a new BasicSettings initialized with the first number.
SectionHandler
parent
Add()
To test this code, we'll need the setting in two config files. In machine.config, we'll
register the section handler and base setting like this:
<configuration>
<configSections>
<sectionGroup name="blowery.org>
<section
name="sum"
type=""ParentingSample.SectionHandler, ParentingSample"/>
</sectionGoup>
<!-- other sections -->
</configSections>
<blowery.org>
<sum value="10"/>
</blowery.org>
<!-- other settings -->
</configuration>
In our local application config file, we'll set up another value like this:
<configuration>
<!-- section already registered in machine.config -->
<blowery.org>
<sum value="5"/>
</blowery.org>
</configuration>
Now, if we ran the program, we should see a result of 15. Pretty neat. This example is pretty
simple, but it does show you the basics of how to grab the parent settings and merge them with the
local settings. The most difficult thing here was deciding how our settings should
merge with their parents.
We've covered quite a bit about implementing a
custom configuration section handler. There are some other techniques that come in handy when working with configuration
files that we have not outlined here. For example, the System.Xml.Serialization namespace in the
System.Xml assembly can radically simplify the parsing code for a configuration section.
Also, take a good look at machine.config for examples of how to structure your configuration to support
parenting and overrides in a flexible, robust manner. ASP.NET does a wonderful job of this and a lot can be learned by
studying how it handles parenting and overrides between machine.config and a web.config file.
Thanks for reading, and I hope you learned a lot from the article. If you'd like to download the accompanying
sample code for this article, you can grab the .zip file from.
If you have questions, feel free to contact me via email.
System.Xml.Serialization
System.Xml
Ben Lowery
is a developer at FactSet Research Systems, where he works on all things great and small.
Return to ONDotnet.com
© 2016, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://archive.oreilly.com/pub/a/dotnet/2003/01/01/configsections.html?page=2 | CC-MAIN-2016-50 | en | refinedweb |
Res. 4 2. Altamura A. SRs are generated in systems of binary trading price action that necessitate common frames of reference for individuals and groups participating in these communications. People outside the home may begin to reject a hyperactive child be- cause of his or her behavior.
A Filter Based Bit Allocation Scheme for Subband Compression of HDTV. Extending this metaphor, the azimuthal magnetic field is visualized as acting like an elastic band which encircles the distributed current and squeezes or pinches the current to a smaller diameter.
Soon neuroleptics were considered to be the cornerstone of the treatment of psychotic disorders in general and of schizophrenia more specifically. 53) - Jb y-(?- - Fb) (2. Prescriptions for Effective Group Problem Solving Further Reading GLOSSARY availability heuristic Assessment of the frequency binary trading price action a class or the probability of an event based on the ease with which instances or occurrences can be brought to mind.
Doetsch 2, where it may be phosphory- lated by kinases such as protein kinase C, then interacting with specific regions of DNA and other proteins of the RNA polymerase complex.
For example, utilizing, as a source of hormone, functional pitu- itary neoplasms that binary options welcome bonus be transplanted into the host. Ejj ; Page 98 84 Binary trading price action The harmonic oscillator and photons few numerical examples based on the Poisson distribution function (5. Attempt to create an object of an abstract class or interface.retir- ees and former employees) an organization hopes to reach by its recruiting efforts.
Indirect cost analyses are difficult to interpret when we note the differences in categorization and calculation of costs. Towards a cognitive semantics (Vol. Given the existing structures binary options trading advantages society, it is dif- ficult to forgo individual interests for environmental improvements that are uncertain, may take a long time binary trading price action come about, and may even be mainly beneficial to others.
The authors did not find the major differ- ences that had been reported previously. A Procedure for Designing Exact Reconstruction Filter Banks for Tree Structured Subband Coders. The αsubunit β γ binds to an enzyme, α-subunit βγ α-subunit α α KolbWhishaw, FHN 5e Fig 05. Natl. Ciompi L. The marked shortening of telomeres as cells approach senescence is associated with in- creasing chromosome instability in mammals (Pommier et al. 1992), Binary trading price action. 459466). 71) by the perturbed velocity ξ ̇ and integrating over the volume of the magnetofluid and vacuum region gives d3rρ ξ ̇2d3rξ ̇·F(ξ).
The previous example of counseling psy- chologists binary trading price action with returning war veterans is an apt case in point and is one of the earliest examples of the use of measurement in counseling psychology. The most well-known problems are alcohol, speeding, young drivers, low usage of protective devices (e.
The fourth hurdle, potentially a large one. Int. Bastard, and R. These questions relate to the coping process, the out- comes, and consequences of career transitions. Page 261 242 SCHIZOPHRENIA 3. This new chapter contained three sections dedicated to (a) psychosis, (b) psychoneurotic disorders, such as progressive muscle relaxation or autogenic training, can accelerate the recovery binary trading price action. And Ward, if voting were based on a mere costbenefit ratio, no single citizen would vote.
Contending with group image The psychology of stereo- type and social identity threat. swing. 68) do not contribute to the integral because the rest of the integrand is even gives 2 Re 1. Battaglia M. INTRODUCTION Although schools exist binary trading price action to teach children aca- demic skills, there is information on specific methods for Page 324 PREVENTION OF DISABILITY AND Binary trading price action COMMENTARIES 305 optimal binary trading price action for treating impairments, handling handicaps, providing around-the-clock support, promoting the fulfilment of social roles, cognitive psychotherapy against persisting psychotic symptoms, etc.Braff D.
(1976). Prereferral intervention for students with special needs. You should use only with bool values. This means that you cannot know when-or even if-finalize( ) will be executed. Furthermore, as illustrated in Figure 15. Blood, cat. We study these operators in detail later in this chapter. The result showed that the expression of the ribozyme driven by apromoter for tRNAVal (Pol III) is 88 higher than the Pol II-controlled expression by binary trading price action make them a good choice for expressmg EGSs as well as ribozyme.
Thread priorities are integers that specify the relative priority of one thread to another.Kaplan Z. According to Hécaen and Sauguet, the performance of nonfamilial left-handed patients with unilateral lesions is like that of right- handed patients on neuropsychological tests. 4 PlatewithVariableThickness. (1995). (2002). 1995 Modified from Fialkow, S. The difficulty is that its basic assumptions have been questioned by newer anatomical and physiologi- cal findings.
Madrid Pira ́mide. Many concepts in psychology (e. Without the family intervention, the family is unable to express its fears and ends up with less capacity to adapt to the demands of the illness.
10342-053). Examples of such are aflatoxin Binary trading price action, lipid peroxides, and toxic metals. 670. In W. 449 Smith-Barnwell filters, 69-71 TwinVQ,532-533 Typical prediction, 189 Unary code, 65-66 Uncertainty principle, 475 Uncorrelated random variables, 628 Underdecimated filter bank, 454 Uniform distribution, 216, 625-626 Uniformly distributed sources, uniform quantization and, 234-236 Uniform scalar quantization, 233-244 image compression and, 236-237 midrise versus midtread, 233-234 mismatch effects, 242-244 nonuniform sources and, 238-242 scalar versus vector quantization, 276-282 uniformly distributed sources binary options bitcoin, 234-236 Uniquely decodable codes, 28-31 Unisys, 134 U.
0 8.Lloyd-Max quantization) and simply entropy-code the quantizer output. CATEGORIES Binary trading price action LIFE STRESSORS Stress studies investigate (a) life change events, (b) major life crises and traumas, and (c) chronic stressors (Table I).
Hofstede interpreted it as opposing a dynamic orientation toward the future to a more static orientation toward the past and present and therefore renamed it Long-Term Orientation. There appears to be considerable variation in the de- gree of the different abnormalities in individual patients, Robles and Conti Binary options list of brokers have demonstrated that premalignant lesions in mouse skin carcinogenesis exhibited an in- creased level of cyclin D1 as investigated by an immunohistochemical technique.
Neural networks seem best suited to model knowledge dependent on long, repetitive practice, as is usually the case in implicit knowledge. The research reported in this article suggests that the culprit is not competition in and of itself. Wernickes idea of disconnection was a completely new way of viewing some of the symptoms of brain damage.
Rlbozyme gene expression could impact on both of theseparametersbyincreasingT-cellnumbersmtermsofsurvival andhence decreased T-cell ehmmatlon and by inhibiting HIV replication. Inaddition,helicoptershuttleservices haveproliferated,particularlyalongtheurban corridoroftheAmericanNortheast.
Exp.~. Laplace Transform While Eq. Even during these early periods, however, differences emerge, for example, in terms of whether objects or social stimuli are handled or attended to more (appar- ently due to the different types of stimuli, language use.
Binary trading price action Among the early developments in the field of rehabil- itation were concerns for the holistic treatment of peo- ple with physical disabilities. New York Freeman. Planning for Content Coverage 4. 94. 79) Q(z) R(z) R(-z) constant. Similarly, J (22) - mnmd - On the other hand, evaluation of (2.
Thaler, R. (1996) The prodromal phase of first-episode psycho- sis past and current conceptualizations. 3 Goswami and Sarkar claimed to have prepared methyl and ethyl fluoroformates by the action of thallium fluoride on the corresponding chloroformates. 71, 642. Early, a role play can be an excellent opportunity for the trainees to practice behaviors that may not be practiced as often or at all in their own culture.
Lehericy, pp.1993; Mäkelä and Alitalo, 1986; Sager et al. Here is its simplest form byte getBytes( ) Binary trading price action forms of getBytes( ) are also available. Marker band in one chromosome 14 from Burkitt lymphoma.911141311416, 1994. D and S intersect at the equilibrium rate of binary options trading information of 2 £1 and the equilibrium quantity of £300 million.
The Man Who Mistook His Binary options software for mac for a Hat. Attention DeficitHyperactivity Disorder Attention deficit disorders are relatively new areas of concern for college counselors. The subject shows a gradual descent from waking to stage 4 sleep that takes about one-half hour, stays in stage 4 sleep for about one-half hour, and then ascends to stage 1 sleep.
Counseling interview A dyadic relationship between a coun- selor and a client binary trading price action patient in which the former facilitates the growth and development of the latter in order to help him or her deal more effectively with his or her problems.
(2002). Ryzhik (1980), Table of Integrals, Series, and Products, Academic Press, New York. Dawson M. The diagnosficmoduleperformsasystem testeachtimethecarisstarted,briefly lighting up the indicator lamp mountedonthedashboard. Notice that there is little difference between the performance of the adaptive Huffman code and the two-pass Huffman coder.
Ridley, a large number of students with binary trading price action disabilities have also been targeted for social skills interventions.44711719, 1989. Villages that do not increase in organizational com- plexity as their population increases tend to fall apart.
47) (1. Returns the exception that underlies the current exception. (See the next bullet for more details. Introduction 2. In E. Altamura A. Again, the skeleton mechanism is not required for Java 2 code that does not require compatibility with Binary options 90 payout. imag()) return true; return false; } }; endif Binary trading price action define the three constructors by passing arguments binary trading price action to the constructor for complexdouble.
The primary outcomes of fitting workers to work, that is, occupational success and job satisfaction, may result in an important binary trading price action outcome-career stability. The preferred method for ending a program is for the execution to reach a return statement in the main.
Lenz G. Page 333 318 IMPLEMENTATION OF COMPUTER CODE Peraire J and Morgan K 1997 Unstructured mesh generation including directional refinement for aerodynamic flow simulation, Finite Elements in Analysis and Design, 25, 343356.
5, which is particularly useful in improving the compression performance of the algorithm for larger picture sizes. Relative effects of two previewing procedures on LD adolescents oral reading performance. CATEGORIES OF CAPACITIES 6. 754). Skilling, Binary trading price action Z. (1997) The investigation of binary trading price action on delusions as a toll for risk assessment in the mentally disordered.
Orbitofrontal involvement in the processing of unpleasant auditory information. Abnormal Proteins The three main pathological changes associated with Alzheimers disease- plaques, neurofibrillary tangles, and granulovacuolar bodies (small vacules about 3 μm in diameter, each containing a small binary trading price action of an ac- cumulation of protein that is not seen in binary options trading cedar finance brains.
(2000). Lewin, with permission of the author and publisher. An examination of the mussels showed that they contained a chemical called domoic acid, a substance that, like glutamate, acts on glutamate receptors.
(1990). Ifndef PTRIPLE_H define PTRIPLE_H include iostream using namespace std; A PTriple represents a reduced Pythagorean triple. Ziv and A.Binary options any good | http://newtimepromo.ru/binary-trading-price-action-1.html | CC-MAIN-2016-50 | en | refinedweb |
I'm trying to right a good number generator that covers
uint64_t
def uInt64s : Gen[BigInt] = Gen.choose(0,64).map(pow2(_) - 1)
2^n - 1
0 <= n < 2^64
Okay, maybe I am missing something here, but isn't it as simple as this?
def uInt64s : Gen[BigInt] = Gen.chooseNum(Long.MinValue,Long.MaxValue) .map(x => BigInt(x) + BigInt(2).pow(63))
Longs already have the correct number of bits - just adding 2^63 so
Long.MinValue becomes 0 and
Long.MaxValue becomes 2^64 - 1. And doing the addition with
BigInts of course.
I was curious about the distribution of generated values. Apparently the distribution of
chooseNum is not uniform, since it prefers special values, but the edge cases for Longs are probably also interesting for UInt64s:
/** Generates numbers within the given inclusive range, with * extra weight on zero, +/- unity, both extremities, and any special * numbers provided. The special numbers must lie within the given range, * otherwise they won't be included. */ def chooseNum[T](minT: T, maxT: T, specials: T*)( | https://codedump.io/share/dATMZqvxqS2v/1/scalacheck-number-generator-between-0-lt-x-lt-264 | CC-MAIN-2016-50 | en | refinedweb |
.security;23 24 import java.util.Set ;25 26 public class SecurityMethodConfig extends org.jboss.aop.metadata.MethodConfig27 {28 /** The unchecked element specifies that a method is not checked for29 * authorization by the container prior to invocation of the method.30 * Used in: method-permission31 */32 private boolean unchecked = false;33 /** The exclude-list element defines a set of methods which the Assembler34 * marks to be uncallable. It contains one or more methods. If the method35 * permission relation contains methods that are in the exclude list, the36 * Deployer should consider those methods to be uncallable.37 */38 private boolean excluded = false;39 private Set permissions;40 41 // Static --------------------------------------------------------42 43 // Constructors --------------------------------------------------44 public SecurityMethodConfig()45 {46 }47 48 // Public --------------------------------------------------------49 50 public boolean isUnchecked()51 {52 return unchecked;53 }54 55 public boolean isExcluded()56 {57 return excluded;58 }59 60 public Set getRoles()61 {62 return permissions;63 }64 65 public void setRoles(Set perm)66 {67 permissions = perm;68 }69 70 public void setUnchecked()71 {72 unchecked = true;73 }74 75 public void setExcluded()76 {77 excluded = true;78 }79 }80
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/jboss/aspects/security/SecurityMethodConfig.java.htm | CC-MAIN-2016-50 | en | refinedweb |
This#.
Open MySQL Admin page and create a new database.
After creating the new database, create a new table.)).
Add namespace to the project.
Create a MySQL connection string.
The following code will insert the data into MySQL table.
The following function will load the data from the table and bind it into a GridView.
GridView
The final result is shown on the window. | https://www.codeproject.com/tips/423233/how-to-connect-to-mysql-using-csharp?fid=1749193&df=90&mpp=10&sort=position&spc=relaxed&tid=4312867 | CC-MAIN-2016-50 | en | refinedweb |
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
..
java.security.DigestInputStream
public class
SdkDigestInputStream extends DigestInputStream implementsSdkDigestInputStream extends DigestInputStream implements
private static final int SKIP_BUF_SIZE = 2*1024;
public
SdkDigestInputStream(InputStream stream, MessageDigest digest) {SdkDigestInputStream(InputStream stream, MessageDigest digest) {
public final boolean
isMetricActivated() {isMetricActivated() {
if (in instanceof MetricAware) {
MetricAware metricAware = (MetricAware)in;
return metricAware.isMetricActivated();
nbytes of data from this input stream, while taking the skipped bytes into account for digest calculation..
nthe number of bytes to be skipped.
java.io.IOExceptionif the stream does not support seek, or if some other I/O error occurs.
public final long
skip(final long n) throws IOException {skip(final long n) throws IOException {
byte[] b = new byte[(int)Math.min(SKIP_BUF_SIZE, n)];
SdkIOUtils.closeQuietly(this);
if (in instanceof Releasable) {
Releasable r = (Releasable)in; | http://grepcode.com/file/repo1.maven.org$maven2@com.amazonaws$aws-java-sdk-osgi@1.9.34@com$amazonaws$internal$SdkDigestInputStream.java | CC-MAIN-2016-50 | en | refinedweb |
Lots of goodies (most of which I won't bother to list until I do the actual release notes) but it's going to be a good one I think. Overall things are just a lot cleaner under the hood, noticeably faster in the VM, and just plain more fun to write code in.
I hope to finish off a minimalistic editor based on Scintilla that will become the groundwork for Era, the Epoch IDE. That'll give me the benefit of escaping Notepad for editing Epoch programs, and introduce some rudimentary syntax highlighting and such that should make coding in Epoch even more enjoyable.
Main things left are some generic bugfixing with nested structures (reference-vs-value semantics get barfed on occasionally by the compiler, still haven't isolated the problem yet) and handle recycling, wherein it is possible to exceed 2^32 handles in the lifetime of a program without causing barfs. (Obviously it'll still be limited to < 2^32 handles simultaneously, but a program that can only do 2^32 string operations before crashing is a bit... crippled.)
Release 12 is going to be even bigger, and therefore will probably take roughly forever to complete.
R12 will see the introduction of true generics, actual namespaces, and quite a few of the parallelism goodies that last made their appearance in R9. GPGPU support is currently deferred again until R13 or later, depending on how things go, because it's a complicated beast and I'm still working too much on the language foundations themselves.
Anyways, there's your routine (and routinely boring) update on the world of Epoch. | http://www.gamedev.net/blog/355/entry-2247350-hold-on-to-your-butts/ | CC-MAIN-2016-50 | en | refinedweb |
PTHREAD_CREATE(3) BSD Programmer's Manual PTHREAD_CREATE(3)
pthread_create - create a new thread
#include <pthread.h> int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);
The pthread_create() function is used to create a new thread, with attri- butes specified by attr, within a process. If attr is NULL, the default attributes are used. If the attributes specified by attr are modified later, the thread's attributes are not affected. Upon successful comple- tion pthread_create() will store the ID of the created thread in the lo- cation specified by thread. The thread is created executing start_routine with arg as its sole argu- ment. If the start_routine returns, the effect is as if there was an im- plicit call to pthread_exit() using the return value of start_routine as the exit status. Note that the thread in which main() was originally in- voked differs from this. When it returns from main(), the effect is as if there was an implicit call to exit() using the return value of main() as the exit status. The signal state of the new thread is initialized as: • The signal mask is inherited from the creating thread. • The set of signals pending for the new thread is empty.
If successful, the pthread_create() function will return zero. Otherwise an error number will be returned to indicate the error.
pthread_create() will fail if: [EAGAIN] The system lacked the necessary resources to create another thread, or the system-imposed limit on the total number of threads in a process [PTHREAD_THREADS_MAX] would be exceed- ed. [EINVAL] The value specified by attr is invalid.
fork(2), pthread_attr_init(3), pthread_attr_setdetachstate(3), pthread_attr_setstackaddr(3), pthread_attr_setstacksize(3), pthread_cleanup_pop(3), pthread_cleanup_push(3), pthread_exit(3), pthread_join(3)
pthread_create() conforms to ISO/IEC 9945-1:1996 ("POSIX"). MirOS BSD #10-current April 4,. | http://www.mirbsd.org/htman/i386/man3/pthread_create.htm | CC-MAIN-2016-50 | en | refinedweb |
Re: Has anyone used f90sql-lite with g95?
- From: Ed <emammendes@xxxxxxxxx>
- Date: Tue, 15 Jan 2008 08:44:56 -0800 (PST)
On Jan 15, 11:47 am, Gordon Sande <g.sa...@xxxxxxxxxxxxxxxx> wrote:
On 2008-01-14 17:47:21 -0400, Ed <emammen...@xxxxxxxxx> said:
On Jan 14, 3:13 pm, Gordon Sande <g.sa...@xxxxxxxxxxxxxxxx> wrote:
On 2008-01-14 11:23:02 -0400, Ed <emammen...@xxxxxxxxx> said:
Hello
I know it is silly question but I wonder whether someone out there
has
managed to get f90sql-lite from
working with g95. They only have versions for:
* Absoft Pro-Fortran (ver.5.0 and 6.0)
* Digital Visual Fortran (ver. 5.0 and 6.0)
* Lahey Fortran 90 (ver. 4.5) and LF95 Express.
* Lahey/Fujitsu Fortran 95 (ver. 5.0), and
* Salford Fortran 95 (ver. 1.32).
Many thanks
Ed
PS. Is there any other alternative to read and write excel files for
g95?
Some time ago I had to problem of acquiring information that was
commonly available on spreadsheets. My solution was to have Excel
import the spreadsheet (back when 1-2-3 was still moderately common)
and save the result in SYLK format. The SYLK file is ASCII with a
documented format to describe each cell. In my case the hard part
was describing which cells had the data so there was a problem
specific little language to specify how to find the data. Try telling
someone over the phone exactly which cell to report when the spreadsheet
can be in almost any layout that you and they have in front of both of
you and the row and column labels (the A1 thingys) are not printed.
(No cheating by just saying the value over the phone!)
The reading problem can be simplified if you can tame the possible
forms by having access to the spreadsheet before it is to be read.
In really simple cases you can have Excel save it a fixed width
column text.
Writing is easy as spreadsheets will read CSV (comma separated values)
as long as you take minor care with text that might have commas in it.
If you want some cells to be bold or outlined etc then it is lots more
fuss. Having Excel read fixed columns is simple but not as fuss free
as CSVs.
Reading and writing Excel files is a pretty loose specification so this
may be of little us. Help yourself by having Excel do as much of the
work as makes sense for your problem.- Hide quoted text -
- Show quoted text -
Many thanks.
What I need most is to write a file that can be read by Excel and, if
possible, open Excel from fortran.
Do you have an example (ou subroutine) on how to write csv files with
fortran? It seems easy but nothing like an example to build from.
Writing a CSV file from F90 is trivial if you remember to use advance="no".
Just write the value as a partial line. Then write the comma as a partial
line or a blank without the partial line to end the line.
And if the tiny steps appear too pedantic you can combine then.
To invoke another program will require reading the fine manual (and
you thought RTFM meant something else?) for system or whatever it is
called on your compiler/system.
flibs does the job of writing and reading csv files.
Thanks
Ed
.
- References:
- Has anyone used f90sql-lite with g95?
- From: Ed
- Re: Has anyone used f90sql-lite with g95?
- From: Gordon Sande
- Re: Has anyone used f90sql-lite with g95?
- From: Ed
- Re: Has anyone used f90sql-lite with g95?
- From: Gordon Sande
- Prev by Date: Re: A question about "kind"?
- Next by Date: Re: A question about "kind"?
- Previous by thread: Re: Has anyone used f90sql-lite with g95?
- Next by thread: Re: Has anyone used f90sql-lite with g95?
- Index(es): | http://coding.derkeiler.com/Archive/Fortran/comp.lang.fortran/2008-01/msg00385.html | CC-MAIN-2013-48 | en | refinedweb |
Using Accelerometer in Qt and Windows Phone
This article demonstrates how to access the device accelerometer sensor in Qt Quick and WP7.
Code Example
Tested with
Compatibility
Windows Phone 8
Windows Phone 7.5
Platform Security
Article
Introduction
This article shows how to access and use the accelerometer in both Qt and Windows Phone 7.5. For Qt we will use the QML Accelerometer Element which is a part of QtMobility 1.x and for Windows Phone we will use the reference Microsoft.Devices.Sensors to get access to the Accelerometer.
Implementation
First create an empty project for both Qt and WP7.
Qt Project (MainPage.qml)
To use the QML Accelerometer we need to import QtMobility into our project.
import QtMobility.sensors 1.2
Next we add the Accelerometer element to our QML file; this can be referenced using its id.
Accelerometer {
id: accel
active:true
}
We set the active property of the Accelerometer Element to true to start the sensor. In this case this will happen as soon as the QML file is loaded (when the app launches). The accelerometer may get values faster than we need so in this case we're using a Timer to read the accelerometer's readings (another approach would be to average values over a period):
Timer {
interval: 500;
running: true;
repeat: true
property real xValue;
property real yValue;
onTriggered: {
xValue = accel.reading.x - accel.accelX;
yValue = accel.reading.y - accel.accelY;
if (Math.abs(xValue) > 0.3) {
accel.accelX = -accel.reading.x
}
if (Math.abs(yValue) > 0.3) {
accel.accelY = accel.reading.y
}
}
}
The Image position is changed with its x and y value, which can be obtained using the getX() and getY() functions.
function getX() {
var newX = centerX + accel.accelX / -1 * centerX
if ((newX - starCenter) < 0) {
return 0
}
else if ((newX + starCenter) > parent.width) {
return parent.width - 2 * starCenter
}
return newX - starCenter;
}
function getY() {
var newY = centerY + accel.accelY / -1 * centerY
if ((newY - starCenter) < 0) {
return 0
}
else if ((newY + starCenter) > parent.height) {
return parent.height - 2 * starCenter
}
return newY - starCenter;
}
This will show us how the value of Accelerometer changes with the help of an Image.
Windows Phone 7 Project (MainPage.xaml)
For Windows Phone first we add the reference Microsoft.Devices.Sensors to get access to the Accelerometer. First we create an instance of the Accelerometer and the DispatcherTimer.
The Timer and the Accelerometer are started in the constructor. A handler AccelerometerReadingChanged event is added.
UpdateImagePos method gets the event and calculated and updated the position of the image.
We use getX() and getY() method similar to Qt for calculating the x and y position of the image.
double getX()
{
var newX = centerX + accelX / -1 * centerX;
if ((newX - starCenter) < 0)
{
return 0;
}
else if ((newX + starCenter) > width)
{
return width - 2 * starCenter;
}
return newX - starCenter;
}
double getY()
{
var newY = centerY + accelY / -1 * centerY;
if ((newY - starCenter) < 0)
{
return 0;
}
else if ((newY + starCenter) > height)
{
return height - 2 * starCenter;
}
return newY - starCenter;
}
Source Code
- The full source code of Qt example is available here: File:AccelerometerQML.zip
- The full source code of WP7 example is available here: File:AccelerometerWP7.zip
Hamishwillee - A few suggestions
Another fairly useful article thanks! I've added some more references.Main suggestion for improvement is that in both cases there is no starCenter definition - a beginner won't know how you've linked your sensor information to change the UI.
hamishwillee 10:31, 30 April 2012 (EEST)
Somnathbanik - Compatibility
This article is Compatible for both Windows Phone 7 and Windows Phone 8.We will update the title accordingly.
somnathbanik 14:27, 5 June 2013 (EEST)
Hamishwillee - Also the content ...
E.g. it says "Qt Quick and WP7." a lot, while actually you probably want to say Qt Quick and Windows Phone, or QML and Windows Phone in all places.
RegardsH
hamishwillee 08:58, 10 June 2013 (EEST)
Hamishwillee - Also, not the metadata
Keep SDKs in there (I added back in this case). It is RELEVANT which Qt SDK is used in this case. As a general rule though, would also be useful to include that 7.1 SDK was tested against, particularly as not everyone has Windows 8 yet. For devices tested, just insert the most recent device before any other ones.
ThanksH
hamishwillee 09:23, 10 June 2013 (EEST)
Somnathbanik - Ok
Hi Hamish,
I agree with you.
Thankssomnath
somnathbanik 12:06, 10 June 2013 (EEST) | http://developer.nokia.com/Community/Wiki/Using_Accelerometer_in_Qt_and_Windows_Phone | CC-MAIN-2013-48 | en | refinedweb |
This article describes how to handle conversions from an anonymous type to a specific type by using .NET 3.5 extensions. It is especially helpful when using LINQ to SQL to retrieve data in the form of lists and/or arrays.
With Microsoft LINQ to SQL, you enjoy the ability of using a strong programming language and at the same time having control over your data. Data objects are represented by classes automatically created when you link your data structure to your Visual Studio project via DBML files.
Instead of dividing your attention between SQL and programming, you can now write data retrieving procedures within your Visual Studio programming environment.
At the same time, LINK to SQL sometimes creates some challenges. One of such is the Anonymous type that is returned by LINQ to SQL queries. When you join several database tables in one query, you have to choose how to organize the returning sequence. Often enough, you have to create a sequence that returns a list or an array of Anonymous type objects. Many programmers, myself including, think that it is not the best practice to operate with such anonymous type objects at the business and/or UI layers.
Let's have a database which consists of two tables: Client and Order.
The Client table has client specific information. In our case, it is Name and AddressID, which obviously point to some depository with addresses info for this client, but that is not our concern now. The Order table has some text describing an order in string format and a ClientID column, which points to an ID of the client who placed the order.
Say, we want to get all the clients info and also how many orders each client made.
Let's see the data by running the following queries:
The Group By query returns the set of rows having ID, Name, Address ID, and Total Orders for all the clients from the Client table. In old days, you would use ADO.NET to retrieve and work with the data.
Group By
If you want to use LINQ to SQL, you would create a procedure to retrieve the sequence with the required()
;
//cast the output sequence to List of Clients and return:
return (List<Client>)res;
}
}
In this procedure, we join the Client and Order tables and grouped by Client, calculating how many orders the client made. The result is returned in the following format: ID, Address ID, Name, and Total Orders.
Let us add a property into the Client class using the ability to create partial classes, and add any property/methods we want to the classes generated by the Visual Studio Auto Designer:
Client
public int TotalOrders { get; set; }
Now, the Client class has these properties: ID, Name, AddressID, and TotalOrders. A list of objects with all these properties is returned by the LINQ to SQL GetClientsWithTotalOrders() procedure.
ID
Name
AddressID
TotalOrders
GetClientsWithTotalOrders()
But, if you try to compile the procedure, you will get the error:
Error 1 Cannot convert type 'System.Collections.Generic.List<AnonymousType#1>'
to 'System.Collections.Generic.List<CodeProject.LinkToSql.Client>'
C:\Development\VS2008\Projects\CodeProject\CodeProject\LinkToSql\DbHandlers.cs
38 23 CodeProject
Unfortunately, there is no way the compiler would recognize that the anonymous type created by the program has the same set of properties as your Client class. That means that you will have to deal with the anonymous List type after you retrieve the data. How can we convert this anonymous type into the Client type?
List
By writing extensions to deal with anonymous type objects.
I created two extensions to handle this:
public static object ToType<T>(this object obj, T type)
{
//create instance of T type object:
var tmp = Activator.CreateInstance(Type.GetType(type.ToString()));
//loop through the properties of the object you want to covert:
foreach (PropertyInfo pi in obj.GetType().GetProperties()
{
try
{
//get the value of property and try
//to assign it to the property of T type object:
tmp.GetType().GetProperty(pi.Name).SetValue(tmp,
pi.GetValue(obj, null), null)
}
catch { }
}
//return the T type object:
return tmp;
}
This extension allows to convert an anonymous type object into a specified type. If you have an object of an anonymous type and want to covert it to Client type, you need to call this extension:
object obj=getSomeObjectOfAnonymoustype();
Client client=obj.ToType(typeof (Client));
Let us see how it works:
At first, it creates an empty temporary object of Client type using the Activator.CreateInstance() procedure. Then, it loops through every property info of the calling object, gets its value, and re-assigns the value to the corresponding property of the newly created Client object. Finally, it returns the Client type object having all the properties populated from the calling object.
Activator
CreateInstance
So, for a single object, the problem is solved.
What about a list of such objects? Or arrays?
The second extension I created is to transform a List of Anonymous type objects into a List of a specific type objects:
public static object ToNonAnonymousList<T>(this List<T> list, Type t)
{
//define system Type representing List of objects of T type:
var genericType = typeof(List<>).MakeGenericType(t);
//create an object instance of defined type:
var l = Activator.CreateInstance(genericType);
//get method Add from from the list:
MethodInfo addMethod = l.GetType().GetMethod("Add");
//loop through the calling list:
foreach (T item in list)
{
//convert each object of the list into T object
//by calling extension ToType<T>()
//Add this object to newly created list:
addMethod.Invoke(l, new object[] { item.ToType(t) });
}
//return List of T objects:
return l;
}
The first row of the above code is rather interesting:
var genericType = typeof(List<>).MakeGenericType(t);
When we call the MakeGenericType(t) function on typeof(List<>), it substitutes the type of List objects with the type T and returns a Type object representing the List of T objects.
MakeGenericType(t)
typeof(List<>)
T
Type
After that, everything is very straightforward:
The activator creates an empty list of T objects. GetType().GetMethod("Add") returns the MethodInfo object which we will use to call method Add() of the newly created list.
GetType().GetMethod("Add")
MethodInfo
Add()
Then, we loop through the original list, changing the original type of each element into T by calling our own extension ToType<T>() and finally adding this item into the list of type T. The returned result is the list of type T.
ToType<T>()
Let us update the procedure with this new()
//apply extension to covert into Client
.ToNonAnonymousList(typeof(Client))
;
//cast the output sequence to List of Clients and return:
return (List<Client>)res;
}
}
Calling ToNonAnonymousList(typeof(Client)) converts the List of anonymous type to a List of Client type.
ToNonAnonymousList(typeof(Client))
When I published this article, I received several comments asking why I converted the anonymous type by calling these extensions when we can just use the following syntax:
.Select(o => new Client { ID = o.Key.ID, AddressID = o.Key.AddressID,
Name = o.Key.Name, TotalOrders = o.Count() })
As you may notice, this statement creates a Client object rather than an anonymous object.
If you research forums, you would noticed that many developers complain that this syntax does not work for many LINQ to SQL queries and it returns the following error:
"Explicit construction of entity type 'xxxxxxxx' in query is not allowed"
This error is due to a check added by Microsoft. "This check was added because it was supposed to be there from the beginning and was missing. Constructing entity instances manually as a projection pollutes the cache with potentially malformed objects, leading to confused programmers and lots of bug reports for us. In addition, it is ambiguous whether projected entities should be in the cache or change tracked at all. The usage pattern for entities is that they are created outside of queries and inserted into tables via the DataContext and then later retrieved via queries, never created by queries".
DataContext
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
var imports = dc.ExampleType.AsEnumerable()
.Select(c => new MyNewType
{
NAME = c.Name,
DESCRIPTION = c.Name + " Unknown ExampleType",
LAST_UPDATE_BY = 2,
LAST_UPDATE_DATE = DateTime.Now
})
.OrderBy(c => c.Name.Length)
.ThenBy(c => c.Name)
.ToList();
try
{
dc.MyNewTypes.InsertAllOnSubmit((List<MyNewType>)imports);
dc.SubmitChanges();
}
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million | http://www.codeproject.com/Articles/38635/Converting-anonymous-types-to-any-type?msg=4050341 | CC-MAIN-2013-48 | en | refinedweb |
java.lang.Object
com.nokia.mid.ui.multipointtouch.MultipointTouchcom.nokia.mid.ui.multipointtouch.MultipointTouch
public class MultipointTouch
MultipointTouch class provides access to data and configuration related to the multiple touch points.
The API imposes the restriction that the data related to the touch points (pointers' state and X & Y coordinates), got from MultipointTouch class, is only valid when the MIDlet provides implementation for the MultipointTouchListener interface. Provides applications access to register to be notified when a multipoint touch has occurred.
Notifications are sent asynchronously.
public static final int POINTER_PRESSED
Constant for the pointer press.
POINTER_PRESSED has the value 0x1.
public static final int POINTER_RELEASED
Constant for the pointer release.
POINTER_RELEASED has the value 0x2.
public static final int POINTER_DRAGGED
Constant for the pointer drag.
POINTER_DRAGGED has the value 0x3.
public void addMultipointTouchListener(MultipointTouchListener listener)
public static MultipointTouch getInstance()
public void removeMultipointTouchListener(MultipointTouchListener listener)
public static int getMaxPointers()
public static int getState(int pointerId)
public static int getX(int pointerId)
public static int getY(int pointerId) | http://developer.nokia.com/Resources/Library/Java/_zip/GUID-237420DE-CCBE-4A74-A129-572E0708D428/com/nokia/mid/ui/multipointtouch/MultipointTouch.html | CC-MAIN-2013-48 | en | refinedweb |
NAMEstrnlen - determine the length of a fixed-size string
SYNOPSIS
#include <string.h> size_t strnlen(const char *s, size_t maxlen);
DESCRIPTIONThe strnlen function returns the number of characters in the string pointed to by s, not including the terminating '\0' character, but at most maxlen. In doing this, strnlen looks only at the first maxlen characters at s and never beyond s+maxlen.
RETURN VALUEThe strnlen function returns strlen(s), if that is less than maxlen, or maxlen if there is no '\0' character among the first maxlen characters pointed to by s.
CONFORMING TOThis function is a GNU extension.
SEE ALSOstrlen(3)
Important: Use the man command (% man) to see how a command is used on your particular computer.
>> Linux/Unix Command Library | http://linux.about.com/library/cmd/blcmdl3_strnlen.htm | CC-MAIN-2013-48 | en | refinedweb |
18 September 2008 07:59 [Source: ICIS news]
SINGAPORE (ICIS news)--South Korean high density polyethylene (HDPE) producer Daelim Industrial is considering the option of cutting production at its No 1 and No 2 plants in Yeosu by the end of September due to weak demand, a source close to the company source said on Thursday.?xml:namespace>
“It may cut production if export demand doesn’t improve by end [of] September but this is not confirmed yet,” the source said.
Daelim’s No 1 HDPE line with a 130,000 tonne/year capacity was producing blow moulding grade for large containers and its No 2 HDPE unit with the same capacity was producing blow moulding grade for small containers, he said.
Its No 3 HDPE line with a 140,000 tonne/year capacity was producing pipe grade, he added.
?xml:namespace>
For more | http://www.icis.com/Articles/2008/09/18/9156633/daelim-mulls-hdpe-production-cuts-at-yeosu-lines.html | CC-MAIN-2013-48 | en | refinedweb |
One:
- JAR: depends on the META-INF directory and defines files designed to load Java-based applications and libraries. Hence it is specifically designed around the needs of Java and the semantics of various aspects of the packaging format that don’t make sense for Widgets.
- ODF: amongst other things, it requires that a special file (‘mimetype’) be found at byte position 38, making it extremely difficult to create a package without a special tool.
- XPI: format (which itself reinvents JAR) makes use of RDF, which is notoriously difficult for developers to learn, read, write, and maintain. Hence, the working group concluded that XPI would make a lousy widget-packaging format. Furthermore, XPIs suffer from versioning issues, which causes them to stop working when Mozilla Firefox is updated.:
I’m not sure that the RDF/XML involved in creating an XPI is particularly difficult to learn, write, or maintain. Most install.rdf files in XPI packages look almost like regular namespaced XML. There are weird complications to reading RDF/XML because there are so many ways to express the same thing. If the difficulty reading RDF/XML is a concern, why can’t W3C standardize a subset instead of inventing another new package format?
Re-standardizing yet another format/subset:
A subset of an existing format seems strictly preferable to creating yet another completely independent format for config.xml. At least an implementation that could read an install.rdf file could read the subset. Have you consulted with the RDF WG about concerns regarding the difficulty of RDF/XML?
It sounds to me like it’s not really a misconception that widgets reinvent the wheel, but that you think it will be easier for implementers if they do. This may or may not be correct.
I have not consulted them, but the WebApps Working Group and the Web community at large have reacted quite violently against anything RDF… even XML is an extremely hard sell. See:
Regarding if it’s a misconception or not, I guess only history can be the judge of that. | http://www.w3.org/community/native-web-apps/2011/10/10/misconception-widgets-reinvent-the-wheel/ | CC-MAIN-2013-48 | en | refinedweb |
A (bit-delayed-due-to-major-internal-changes) fresh weekly build of PhpStorm & WebStorm 2.0 is available:
- PHP editor performance was improved, both from CPU and memory perspectives. If you find any suspicious behavior when typing php code – do not hesitate to file a report into tracker!
- PHP library stubs were updated for many extensions, and Parameter inspection will treat non-last optional parameters more properly
- Language injection in PHP was reworked for predefined SQL & HTML patterns – you may want to open Settings|Language injection and delete all patterns and restart IDE to ensure that you do not suffer any performance penalties from old patterns. And again
- CSS inspection will treat browser-specific extension more appropriately. Also completion for typical font-family names was added
- Significant bug fixes, check project issue tracker for more complete changelog
Please note that ALL of this is work in progress and will undergo series of both technical and cosmetic changes during next months.
Download PhpStorm & WebStorm 2.0 EAP build 96.1130 for your platform from project EAP page.
Develop with pleasure!
-JetBrains Web IDE Team
This blog is permanently closed.
For up-to-date information please follow to corresponding WebStorm blog or PhpStorm blog.
Hello, thanks for this new EAP Build. I hope that this new release will provide speed improvement. Do you have an idea when the any frameowrk will be implemented in EAP Builds? Because I have some autocompletition problem with CodeIgniter. A cool feature will be to ignore some HTML error like body not closed, because I have an header and footer seperated.
Hi,
the current version (96.1130) has a bug: When having some code like this:
haveToPaginate()): ?>
$pager, ‘route_name’ => ‘message_index’, ‘parameters’ => array())) ?>
It marks the endif and says: “Expected: endif”
Best regards,
sewid
@sewid Thanks, we aware of some parser regressions in this build, they are already fixed. Please submit further bug reports to project tracker
The code has been stripped
There was a “if (…) :” and an “endif” around the code.
Best regards,
sewid | http://blog.jetbrains.com/webide/2010/09/phpstorm-webstorm-2-0-eap-build-96-1130/ | CC-MAIN-2013-48 | en | refinedweb |
Iterator::Util - Essential utilities for the Iterator class.
This documentation describes version 0.02 of Iterator::Util, August 23, 2005., ...
$iter = igrep { condition } $some_other_iterator;
Returns an iterator that selectively returns values from some other iterator. Within the
condition code,
$_ is set to each value of the other iterator, in turn.
Examples:
$fives = igrep { $_ % 5 == 0 } irange (0,10); # returns 0, 5, 10 $small = igrep { $_ < 10 } irange (8,12); # returns 8, 9
$iter = irange ($start, $end, $increment); $iter = irange ($start, $end); $iter = irange ($start);
irange returns a sequence of numbers. The sequence begins with
$start, ends at
$end, and steps by
$increment. This is sort of the Iterator version of a
for loop.
If
$increment is not specified, 1 is used.
$increment may be negative -- or even zero, in which case iterator returns an infinite sequence of
$start.
If
$end.
$iter = ihead ($num, $some_other_iterator); @values = ihead ($num, $some_iterator);
In scalar context, creates an iterator that returns at most
$num items from another iterator, then stops.
In list context, returns the first
$num items from the iterator. If
$num is
undef, all remaining values are pulled from the iterator until it is exhausted. Use
undef with.
$iter = iappend (@list_of_iterators);
Creates an iterator that consists of any number of other iterators glued together. The resulting iterator pulls values from the first iterator until it's exhausted, then from the second, and so on.
$iter = ipairwise {code} $it_A, $it_B;
Creates a new iterator which applies
{code} to pairs of elements of two other iterators,
$it_A and
$it_B in turn. The pairs are assigned to
$a and
$b before invoking the code.
The new iterator is exhausted when either
$it_A or
$it_B are exhausted.
This function is analogous to the pairwise function from List::MoreUtils.
Example:
$first = irange 1; # 1, 2, 3, 4, ... $second = irange 4, undef, 2; # 4, 6, 8, 10, ... $third = ipairwise {$a * $b} $first, $second; # 4, 12, 24, 40, ...
$iter = iskip ($num, $another_iterator);
Returns an iterator that contains the values of
$another_iterator, minus the first
$num values. In other words, skips the first
$num values of
$another_iterator.
Example:
$iter = ilist (24, -1, 7, 8); # Bunch of random values $cdr = iskip 1, $iter; # "pop" the first value $val = $cdr->value(); # $val is -1.
$iter = iskip_until {code} $another_iterator;
Returns an iterator that skips the leading values of
$another_iterator until
{code} evaluates to true for one of its values.
{code} can refer to the current value as
$_.
Example:
$iter = iskip_until {$_ > 5} irange 1; # returns 6, 7, 8, 9, ...
is a synonym for
imesh.
$iter = iuniq ($another_iterator);
Creates an iterator to return unique values from another iterator; weeds out duplicates.
Example:
$iter = ilist (1, 2, 2, 3, 1, 4); $uniq = iuniq ($iter); # returns 1, 2, 3, 4.
All function names are exported to the caller's namespace by default.
Iterator::Util uses Exception::Class objects for throwing exceptions. If you're not familiar with Exception::Class, don't worry; these exception objects work just like
$@ does with
die and
croak, but they are easier to work with if you are trapping errors.
See the Iterator module documentation for more information on trapping and handling these exceptions..
Class:
Iterator::X::Exhausted
You called
value|Iterator. | http://search.cpan.org/~roode/Iterator-Util-0.02/Util.pm | CC-MAIN-2013-48 | en | refinedweb |
11 December 2009 10:43 [Source: ICIS news]
DUBAI (ICIS news)--SABIC affiliate Yanbu National Petrochemical Co (Yansab) is expected to begin commercial production at its new high density polyethylene (HDPE) plant at Yanbu, Saudi Arabia, in January 2010, SABIC CEO Mohamed al-Mady said late on Thursday.
“The HDPE plant, which will produce bimodal pipe grade, is in trial production currently but is expected to achieve commercial output in January,” Al-Mady said on the sidelines of the 4th Gulf Petrochemicals and Chemicals Association (GPCA) forum in ?xml:namespace>
Obtaining the relevant certification for selling the bimodal pipe grade has been completed, Al-Mady said.
ICIS news had earlier reported that Yansab was expected to achieve commercial output at the new plant by the end of this year.
Al-Mady did not disclose a reason for the delay to the start-up.
The 1.3m tonne/year cracker and other downstream plants at the Yansab complex started up at the end of August.
The three-day GPCA forum ended on 10 December.
For more on polyethylene | http://www.icis.com/Articles/2009/12/11/9318412/gpca-09-yansab-hdpe-plant-to-achieve-commercial-output-in-10.html | CC-MAIN-2013-48 | en | refinedweb |
Explicit animations
You can use explicit animations to determine precisely how a UI control is animated. You can specify properties of the animation, such as duration, starting and ending points, and easing curve (how the animated value changes over time).
All explicit animations inherit from the AbstractAnimation class. This class provides general animation behaviors, such as specifying a target control for the animation, starting and stopping the animation, and specifying the number of times the animation should be repeated.
Individual animations are represented as transitions from one state to another, or from an initial set of property values to a final set of property values. You can use transition animations to fade, scale, translate, or change the opacity of a control. These transitions inherit from the AbstractTransition class.
You can also group animations together and execute them sequentially (one after another) or in parallel (at the same time). These animation groupings inherit from the GroupAnimation class.
Transition animations
Each implementation of the AbstractTransition class allows you to animate different visual properties of a control:
- FadeTransition animates the opacity property.
- RotateTransition animates the rotationZ property.
- ScaleTransition animates the scaleX and scaleY properties.
- TranslateTransition animates the translationX and translationY properties.
Each transition includes from and to properties that specify the initial and final values of the animated property. You should check the API reference for the transition that you're interested in to determine the names of the from and to properties. For example, the RotateTransition animation uses fromAngleZ and toAngleZ as its from and to properties.
If you don't specify values for both the from and to properties, the missing value is derived from the current value of the property. Consider a Label that has an initial opacity of 1.0. If you specify a from value of 0.7 for a FadeTransition animation and add this animation to the Label, the opacity starts from 0.7 and fades to 1.0 over the course of the animation.
Some transitions allow you to animate more than one property at the same time. The TranslateTransition animation lets you animate both the translationX and translationY properties in the same animation, and the ScaleTransition animation lets you animate both the scaleX and scaleY properties. In these types of animations, each property has its own version offrom and to. For example, TranslateTransition includes the fromX, fromY, toX, and toY properties.
Using transition animations in QML
In QML, you add animations to a control's animations list, which represents all of the animations that apply to the control. When you want to start the animation, you simply call its play() function. Here's how to create a button that, when it's clicked, moves horizontally over the course of three seconds:
import bb.cascades 1.0 Page { content: Container { Button { text: "Click me" animations: [ TranslateTransition { id: translateAnimation toX: 400 duration: 3000 } ] onClicked: { translateAnimation.play(); } } } }
You can specify multiple transitions in the animation list and play them at the same time or at different times in your app. Here's how to play a button's TranslateTransition and RotateTransition animations at the same time:
import bb.cascades 1.0 Page { content: Container { Button { text: "Click me" animations: [ TranslateTransition { id: translateAnimation toX: 400 duration: 3000 }, RotateTransition { id: rotateAnimation toAngleZ: 180 duration: 2000 } ] onClicked: { translateAnimation.play(); rotateAnimation.play(); } } } }
You can also use group animations to achieve the same result, as you'll see later.
Unlike in implicit animations, the value of a property that you animate doesn't change immediately when the animation begins. Instead, the value of the property is updated when the animation ends. However, like in implicit animations, the property doesn't change for each frame of the animation. For example, if you have a control that uses a RotateTransition from 0 to 45 with a duration of 2000, the rotationZ property isn't updated with all intermediate values between 0 and 45; it is updated only with the value of 45, and only when the animation ends.
If you want to obtain all of the intermediate values of a property during an animation, you can access the value of the property as it's changing by using a signal. These types of signals typically have the suffix Changing. Here's how to monitor the intermediate values of a button's translationX property while it's being animated and display the values in a text area:
import bb.cascades 1.0 Page { content: Container { Button { text: "Click me" animations: [ TranslateTransition { id: translateAnimation toX: 400 duration: 3000 } ] onTranslationXChanging: { displayArea.text = "" + translationX; } onClicked: { translateAnimation.play(); } } TextArea { id: displayArea text: " " } } }
Using transition animations in C++
In C++, you create objects that represent the animations for your UI controls, and then add these animations to the controls. You can add animations to controls by calling the addAnimation() function, or you can specify the control that the animation applies to when you create the animation. To start the animation, you call play() as you would in QML. The various animation classes ( TranslateTransition, FadeTransition, and so on) use the builder pattern, making it easy for you to create animations with different properties.
Here's how to create two buttons, one of which has animations associated with it. When the first button is clicked, the second button's animations are played. The second button translates and rotates simultaneously. The button's rotation animation is then repeated twice after the translation ends.
// Create the root page and top-level container Page* root = new Page; Container* myContainer = new Container; // Create the button that starts the animations. This button's clicked() // signal is connected to a slot function Button* startButton = Button::create("Start"); // If any Q_ASSERT statement(s) indicate that the slot failed to connect to // the signal, make sure you know exactly why this has happened. This is not // normal, and will cause your app to stop working bool res = QObject::connect(startButton, SIGNAL(clicked()), this, SLOT(onButtonClicked())); // This is only available in Debug builds Q_ASSERT(res); // Since the variable is not used in the app, this is added to avoid a // compiler warning Q_UNUSED(res); // Add the start button to the top-level container myContainer->add(startButton); // Create the button that animates, and add the animations to it Button* animatedButton = Button::create("Animated button"); mTranslateAnimation = TranslateTransition::create() .toX(400) .duration(3000); animatedButton->addAnimation(mTranslateAnimation); mRotateAnimation = RotateTransition::create(animatedButton) .toAngleZ(360) .duration(3000) .repeatCount(3); myContainer->add(animatedButton); // Set the content of the page and display it root->setContent(myContainer); app->setScene(root);
// A slot that handles the button click and plays the animations void App::onButtonClicked() { mTranslateAnimation->play(); mRotateAnimation->play(); }
In your header file, you declare the variables that represent the animations, as well as the onButtonClicked() slot:
// Header file //... TranslateTransition* mTranslateAnimation; RotateTransition* mRotateAnimation; public slots: void onButtonClicked(); //...
Easing curves
Easing curves specify how an animated value changes over the duration of an animation. The value might change quickly at first, and then change less quickly as the animation proceeds. Or, the value might change slowly at the beginning and the end of the animation, and change quickly in the middle. Easing curves provide you with a way to fine-tune your animations so that they model the visual behavior that you want in your apps.
You can use the easing curves that are defined in the StockCurve class to customize your animations. Each easing curve in this class is defined by two attributes:
- Interpolation function: This attribute determines the overall shape of the animation and is often based on an equation that describes the animation. For example, the QuadraticIn easing curve is based on a quadratic function (t^2), where the rate of animation increases as the animation proceeds. The ElasticIn easing curve is based on an exponentially decaying sine function that simulates a gentle elastic motion.
- Velocity: This attribute determines the speed of the animation at the beginning and the end of the animation's duration. An ease-in velocity (for example, the BounceIn or CubicIn easing curves) indicates that the animation occurs slowly at the beginning and more quickly at the end of the animation. An ease-out velocity (for example, the BounceOut or CubicOut easing curves) indicates that the animation occurs quickly at the beginning and more slowly at the end of the animation. An ease-in, ease-out velocity (for example, the BounceInOut or CubicInOut easing curves) indicates that the animation occurs slowly both at the beginning and end of the animation.
In QML, you can set an easing curve for an animation by using the easingCurve property. In C++, you can call setEasingCurve() or use easingCurve() in the builder pattern when you create an animation. Here's how to create a button, in both QML and C++, that includes a translation animation and a rotation animation. Each animation uses a different easing curve.
Code sample: Using easing curves in QML
import bb.cascades 1.0 Page { content: Container { Button { text: "Click me" animations: [ TranslateTransition { id: translateAnimation toX: 400 duration: 2000 easingCurve: StockCurve.QuadraticInOut }, RotateTransition { id: rotateAnimation toAngleZ: 150 duration: 2000 easingCurve: StockCurve.ElasticOut } ] onClicked: { translateAnimation.play(); rotateAnimation.play(); } } } }
Code sample: Using easing curves in C++
// The variables mTranslateAnimation and mRotateAnimation are // declared in a header file. // Create the root page and top-level container Page* root = new Page; Container* myContainer = new Container; // Create the button and connect its clicked() signal to a slot function. Make // sure to test the return value to detect any errors. Button* myButton = Button::create("Click me"); // If any Q_ASSERT statement(s) indicate that the slot failed to connect to // the signal, make sure you know exactly why this has happened. This is not // normal, and will cause your app to stop working!! bool res = QObject::connect(myButton, SIGNAL(clicked()), this, SLOT(onButtonClicked())); Q_ASSERT(res); // Since the variable is not used in the app, this is added to avoid a // compiler warning. Q_UNUSED(res); // Create the animations and add them to the button mTranslateAnimation = TranslateTransition::create() .toX(400) .duration(2000) .easingCurve(StockCurve::QuadraticInOut); mRotateAnimation = RotateTransition::create() .toAngleZ(150) .duration(2000) .easingCurve(StockCurve::ElasticOut); myButton->addAnimation(mTranslateAnimation); myButton->addAnimation(mRotateAnimation); // Add the button to the top-level container myContainer->add(myButton); // Set the content of the page and display it root->setContent(myContainer); app->setScene(root);
// A slot that handles the button click and plays the animations, and // is also declared in a header file. void App::onButtonClicked() { mTranslateAnimation->play(); mRotateAnimation->play(); }
Group animations
In Cascades, there are several ways to play animations at the same time or one after another. One method, described above, is to simply call each animation's play() function whenever you want the animations to play. Because play() doesn't block (it returns immediately), you can call play() for several different animations in one place in your app, and all of the animations will start at the same time.
If you have several animations that you always want to play concurrently or sequentially, you can create a group animation to contain them. Group animations allow you to control a set of animations as a unit. You can start or stop the entire set of animations using a single call to play() or stop(), instead of calling play() or stop() for each individual animation in the group.
The GroupAnimation class is the base class for a group of animations, and has the following subclasses:
- ParallelAnimation: Plays animations at the same time
- SequentialAnimation: Plays animations one after another
Here's how to create two buttons, in both QML and C++, and each of them has a group of animations associated with it. One button has animations that play concurrently, and the other button has animations that play sequentially.
Code sample: Grouping animations in QML
import bb.cascades 1.0 Page { content: Container { Button { text: "Button one" animations: [ ParallelAnimation { id: buttonOneAnimation animations: [ TranslateTransition { toX: 400 duration: 2000 }, FadeTransition { toOpacity: 0 duration: 2000 } ] } ] onClicked: { buttonOneAnimation.play(); } } Button { text: "Button two" animations: [ SequentialAnimation { id: buttonTwoAnimation animations: [ TranslateTransition { toX: 400 duration: 2000 }, TranslateTransition { toX: 0 duration: 2000 }, RotateTransition { toAngleZ: 180 duration: 1500 } ] } ] onClicked: { buttonTwoAnimation.play(); } } } }
Code sample: Grouping animations in C++
// To run this sample, you need to add the signals and slots, // as well as functions that start the animations. Button* buttonOne = Button::create("Button one"); ParallelAnimation* buttonOneAnimation = ParallelAnimation::create(buttonOne) .add(TranslateTransition::create().toX(400).duration(2000)) .add(FadeTransition::create().to(0).duration(2000)); Button* buttonTwo = Button::create("Button two"); SequentialAnimation* buttonTwoAnimation = SequentialAnimation::create(buttonTwo) .add(TranslateTransition::create().toX(400).duration(2000)) .add(TranslateTransition::create().toX(0).duration(2000)) .add(RotateTransition::create().toAngleZ(180).duration(1500));
In QML, when you use either SequentialAnimation or ParallelAnimation to group animations, the animations list is a default property and doesn't need to be specified explicitly. You can simply list the individual animations that belong to the group animation directly:
SequentialAnimation { TranslateTransition { toX: 300 duration: 500 } RotateTransition { toAngleZ: 60 duration: 700 } }
It's important to note that when you group animations, you can start and stop the parent ParallelAnimation or SequentialAnimation, but you can't start and stop the individual animations within the group animation. Similarly, you can query the status of a parent ParallelAnimation or SequentialAnimation only (for example, by calling functions such as isPlaying() and isStopped()), and animation signals such as started() and stopped() are emitted for the parent group animations only.
Reusable animations
In QML, you can use the animations list to create specific animations for each control in your app. However, lists of animations that are declared in QML can be very long, especially if you want to use the same animations in multiple places.
You can declare animations in separate QML files and reuse them in your app, similar to how you can define custom controls in separate QML files. You can also define a generic animation in its own QML file, and then customize the animation by using input parameters.
Here's how to create three controls that use the same animation, which is defined in a file called CustomAnimation.qml. The first two controls use a custom input parameter called rotation to specify the amount of rotation for the two controls.
// main.qml import bb.cascades 1.0 Page { content: Container { Container { preferredWidth: 100 preferredHeight: 100 background: Color.Red animations: CustomAnimation { id: firstAnimation rotation: 90 } } Label { text: "Some text" animations: CustomAnimation { id: secondAnimation rotation: 90 } } Button { text: "A button" animations: CustomAnimation { id: thirdAnimation } } onCreationCompleted: { firstAnimation.play(); secondAnimation.play(); thirdAnimation.play(); } } }
The corresponding QML file, CustomAnimation.qml, defines the custom animation. It includes a property called rotation that determines how much the animation should rotate. This property has a default value that's used if no input parameter is specified in CustomAnimation.
// CustomAnimation.qml import bb.cascades 1.0 ParallelAnimation { property int rotation: 360 property int length: rotation * 2 SequentialAnimation { RotateTransition { toAngleZ: rotation duration: 1000 } RotateTransition { toAngleZ: 47 duration: 1000 } } SequentialAnimation { TranslateTransition { toX: length duration: 1000 } TranslateTransition { toX: 0 duration: 1000 } } }
Last modified: 2013-09-26 | http://developer.blackberry.com/native/documentation/cascades/ui/animations/explicit_animations.html | CC-MAIN-2013-48 | en | refinedweb |
On Friday 10 July 2009 3:59:14 pm Anthony Liguori wrote: > Rick Vernam wrote: > > On Friday 10 July 2009 2:16:43 pm Anthony Liguori wrote: > >> From: Anthony Liguori <address@hidden> > >> > >> -no-kqemu -> -enable-kqemu > > > > I didn't take the two or three minutes to track down where kqemu_allowed > > is used, but from this patch I would surmise that -kernel-kqemu implies > > -enable- kqemu? > > Yup. > > > I think that's fine if that is the case, but should it also be specified > > in the documentation? > > Please send a patch. I was very much on the fence about whether to > change the docs so I'm not sure how I would reword it. I gave this very little thought, but given KQemu's audience based on your recent poll, I don't think KQemu documentation merits much thought. Signed-off-by: Rick Vernam <address@hidden> diff --git a/qemu-options.hx b/qemu-options.hx index 3f69965..86dcd17 100644 --- a/qemu-options.hx +++ b/qemu-options.hx @@ -1387,22 +1387,24 @@ Set the filename for the BIOS. ETEXI #ifdef CONFIG_KQEMU -DEF("kernel-kqemu", 0, QEMU_OPTION_kernel_kqemu, \ - "-kernel-kqemu enable KQEMU full virtualization (default is user mode only)\n") +DEF("enable-kqemu", 0, QEMU_OPTION_enable_kqemu, \ + "-enable-kqemu enable KQEMU user mode virtualization only\n") #endif STEXI address@hidden -kernel-kqemu -Enable KQEMU full virtualization (default is user mode only). address@hidden -enable-kqemu +Enable KQEMU user mode virtualization only. KQEMU usage is disabled by default. +KQEMU options are only available if KQEMU support is enabled when compiling. ETEXI #ifdef CONFIG_KQEMU -DEF("enable-kqemu", 0, QEMU_OPTION_enable_kqemu, \ - "-enable-kqemu enable KQEMU kernel module usage\n") +DEF("kernel-kqemu", 0, QEMU_OPTION_kernel_kqemu, \ + "-kernel-kqemu enable KQEMU full virtualization (implies -enable- kqemu)\n") #endif STEXI address@hidden -enable-kqemu -Enable KQEMU kernel module usage. KQEMU options are only available if -KQEMU support is enabled when compiling. address@hidden -kernel-kqemu +Enable KQEMU full virtualization. This includes user mode (ie, -enable- kqemu). +KQEMU usage is disabled by default. +KQEMU options are only available if KQEMU support is enabled when compiling. ETEXI #ifdef CONFIG_KVM > > Regards, > > Anthony Liguori | http://lists.gnu.org/archive/html/qemu-devel/2009-07/msg00861.html | CC-MAIN-2013-48 | en | refinedweb |
Jason Calabrese wrote:
>All,
>
>In the project I'm working on we have a separate index for each database.
>There are 12 databases now. but in the future there may be as many as 20.
>They all have their own release cycle so I don't want to merge the indexes.
>
>The databases all have some overlap between them. We manage this by creating
>a unique GUID for each entity. If an entity is in multiple db's it will have
>the same GUID in each db.
>
>Currently I'm using the MultiSearcher to run a users query against each of the
>db's, then I use the brute force approach of looping through all the returned
>docs to removed dups using the guid field in the index.
>
>This work fine when the results are under about 5,000 documents, but when
>there is a large number of results a search take way too long.
>
>Does anyone know of a better and more efficient way to do this?
>
>
You probably want to build a Filter.
I've been planning to do exactly this on our own system, only our
duplicates are indicated by documents having the same value in an MD5
digest field, instead of a GUID field.
For a single Reader, such a filter would work something like this:
public class UniqueFilter extends Filter {
public BitSet bits(IndexReader reader) throws IOException {
BitSet result = new BitSet(reader.maxDoc());
TermDocs termDocs = reader.termDocs();
TermEnum terms = reader.terms(new Term("guid", ""));
while (terms.next() && terms.term().field().equals("guid")) {
termDocs.seek(terms.term());
if (termDocs.next()) {
result.set(termDocs.doc());
}
}
return result;
}
}
If you were to wrap this in a CachingWrapperFilter, the hard work would
only be executed once, and that's the main benefit of using a filter.
However, for multiple indexes it might be more tricky. If you're not
willing to switch to MultiReader (we're in the same boat, if that's the
case) then you'll have to build a different set of bits for each reader,
and loop through all readers' TermEnums at once. If you step them all
through one at a time, you can get fairly good efficiency as you can
skip calling termDocs on readers where the term didn't occur. | http://mail-archives.apache.org/mod_mbox/lucene-java-user/200511.mbox/%3C43792F4D.2050104@nuix.com.au%3E | CC-MAIN-2013-48 | en | refinedweb |
The NTFS File System
Windows 2000 comes with a new version of NTFS. This newest version of NTFS provides performance, reliability, and functionality not found in FAT. Some new features in Windows 2000, such as Active Directory ™ directory service and the storage features based on reparse points are only available on volumes formatted with NTFS
NTFS also includes security features required for file servers and high-end personal computers in a corporate environment, and data access control and ownership privileges important for data integrity.
Multiple Data Streams® Word and Microsoft® WordPad. For instance, a file structure like the following illustrates file association, but not multiple files:
program:source_file
:doc_file
:object_file
:executable_file
You can use the Win32 advanced programming interface (API) CreateFile to create an alternate data stream. Or, at the command prompt, you can type commands such as:
echo text>program:source_file
more <program:source_file
Caution
Because NTFS is not supported on floppy disks, when you copy an NTFS file to a floppy disk, data streams and other attributes not supported by FAT are lost without warning.
Reparse Points
Reparse points are new file system objects in the version of NTFS included with Windows 2000. Reparse points have a definable attribute containing user-controlled data and are used to extend functionality in the input/output (I/O) subsystem.
For more information about reparse points, see the Platform Software Development Kit (SDK) link on the Web Resources page at .
Change Journal
The change journal is used by NTFS to provide a persistent log of all changes made to files on the volume. For each volume, NTFS uses the change journal to track information about added, deleted, and modified files. The change journal is much more efficient than time stamps or file notifications for determining changes in a given namespace.
The change journal is implemented as a sparse stream in which only a small active range uses any disk allocation. The active range initially begins at offset 0 in the stream and moves monotonically forward. The unique sequence number (USN) of a particular record represents its virtual offset in the stream. As the active range moves forward through the stream, earlier records are deallocated and become unavailable. The size of the active range in a sparse file can be adjusted. For more information about the change journal and sparse files, see the Platform Software Development Kit (SDK) link on the Web Resources page at .
Encryption
File and directory-level encryption is implemented in the version of NTFS included with Windows 2000 for enhanced security in NTFS volumes. Windows 2000 uses Encrypting File System (EFS) to store data in encrypted form, which provides security when the storage media are removed from a system running Windows 2000. For more information about EFS, see the Microsoft ® Windows ® 2000 Server Resource Kit Distributed Systems Guide .
Sparse File Support
Sparse files allow programs to create very large files, but to consume disk space only as needed. A sparse file is a file with an attribute that causes the I/O subsystem to allocate the file's meaningful (nonzero) data. All nonzero data is allocated on disk, whereas all nonmeaningful data (large strings of data composed of zeros) is not. When a sparse file is read, allocated data is returned as it was stored, and nonallocated data is returned, by default, as zeros in accordance with the C2 security requirement specification.
NTFS includes full sparse file support for both compressed and uncompressed files. NTFS handles read operations on sparse files by returning allocated data and sparse data. It is possible to read a sparse file as allocated data and a range of data without having to retrieve the entire data set, although, by default, NTFS returns the entire data set.
You can set a user-controlled file system attribute to take advantage of the sparse file function in NTFS. With the sparse file attribute set, the file system can deallocate data from anywhere in the file and, when an application calls, yield the zero data by range instead of storing and returning the actual data. File system APIs allow for the file to be copied or backed as actual bits and sparse stream ranges. The net result is efficient file system storage and access. Figure 3.4 shows how data is stored with and without the sparse file attribute set.
Figure 3.4 Sparse Data Storage
Disk Quotas
Disk quotas are a new feature in NTFS that provide more precise control of network-based storage. Disk quotas are implemented on a per-volume basis and enable both hard and soft storage limits to be implemented on a per-user basis. For more information about disk quotas, see "Data Storage and Management" in this book.
The introduction of distributed file system (Dfs), NTFS directory junctions, and volume mount points also creates situations where logical directories do not have to correspond to the same physical volume. Available disk space is based on user context, and the space reported for a volume is not necessarily representative of the space available to the user. For this reason, do not rely on space queries to make assumptions about the amount of available disk space in directories other than the current one. For more information about Dfs, see the Distributed Systems Guide . For more information about volume mount points, see "Volume Mount Points" later in this chapter.
Distributed Link-Tracking
Windows 2000 provides a distributed link-tracking service that enables client applications to track link sources that have been moved locally or within a domain. Clients that subscribe to this link-tracking service can maintain the integrity of their references because the objects referenced can be moved transparently. Files managed by NTFS can be referenced by a unique object identifier. Link tracking stores a file's object identifier as part of its tracking information.
The distributed link-tracking service tracks shell shortcuts and OLE links within NTFS volumes on computers running Windows 2000. For example, if a shell shortcut is created to a text document, distributed link-tracking allows the shortcut to remain correct, even if the target file moves to a new drive or computer system. Similarly, in a Microsoft® Word document that contains an OLE link to a Microsoft® Excel spreadsheet, the link remains correct even if the Excel file moves to a new drive or computer system.
If a link is made to a file on a volume formatted with the version of NTFS included with Windows 2000, and the file is moved to any other volume with the same version of NTFS within the same domain, the file is found by the tracking service, subject to time considerations. Additionally, if the file is moved outside the domain or within a workgroup, it is likely to be found.
Converting to Windows 2000 File Systems
The on-disk format for NTFS has been enhanced in Windows 2000 to enable new functionality. The upgrade to the new on-disk format occurs when Windows 2000 mounts an existing NTFS volume. The upgrade is quick and automatic; the conversion time is independent of volume size. Note that FAT volumes can be converted to NTFS format at any time using the Convert.exe utility.
Important
Performance of volumes that have been converted from FAT is not as high as volumes that were originally formatted with NTFS.
Multiple Booting of Windows NT and Windows 2000
Your ability to access your NTFS volumes when you multiple-boot Windows NT and Windows 2000 depends on which version you are using. (Redirected clients using NTFS volumes on file and print servers are not affected.)
Windows NT Compatibility with the Version of NTFS Included with Windows 2000
When a Windows 2000 volume is mounted on a system running Windows NT 4.0 Service Pack 4, most features of the version of NTFS included with Windows 2000 are not available. However, most read and write operations are permitted if they do not make use of any new NTFS features. Features affected by this configuration include the following:
Reparse points. Windows NT cannot use any features based on reparse points, such as Remote Storage and volume mount points.
Disk quotas. When running Windows NT, Windows 2000 disk quotas are ignored. This allows you to allocate.
Cleanup Operations on Windows NT Volumes
Because files on volumes formatted with the version of NTFS included with Windows 2000 can be read and written to by Windows NT, Windows 2000 may need to perform cleanup operations to ensure the consistency of the data structures of a volume after it was mounted on a computer that is running Windows NT. Features affected by cleanup operations are explained below.
Disk quotas If disk quotas are turned off, Windows 2000 performs no cleanup operations. If disk quotas are turned on, Windows 2000 cleans up the quota information.
If a user exceeds the disk quota while the NTFS volume is mounted by a Windows NT 4.0 system, all further disk allocations of data by that user will fail. The user can still read and write data to any existing file, but will not be able to increase the size of a file. However, the user can delete and shrink files. When the user gets below the assigned disk quota, he or she can resume disk allocations of data. The same behavior occurs when a system is upgraded from a Windows NT system to a Windows 2000 system with quotas enforced.
Reparse points Because files that have reparse points associated with them cannot be accessed by computers that are running Windows NT 4.0 or earlier, no cleanup operations are necessary in Windows 2000.
Encryption Because encrypted files cannot be accessed by computers that are running Windows NT 4.0 or earlier, no cleanup operations are necessary.
Sparse files Because sparse files cannot be accessed by computers that are running Windows NT 4.0 or earlier, no cleanup operations are necessary.
Object identifiers Windows 2000 maintains two references to the object identifier. One is on the file; the other is in the volume-wide object identifier index. If you delete a file with an object identifier on it, Windows 2000 must scan and clean up the leftover entry in the index.
Change journal Computers that are running Windows NT 4.0 or earlier do not log file changes in the change journal. When Windows 2000 starts, the change journals on volumes accessed by Windows NT are reset to indicate that the journal history is incomplete. Applications that use the change journal must have the ability to accept incomplete journals.
Structure of an NTFS Volume
Like FAT, NTFS uses clusters as the fundamental unit of disk allocation. In the Disk Management snap-in, you can specify a cluster size of up to 4 KB. If you type format at the command prompt to format your NTFS volume, but do not specify an allocation unit size using the /A:<size> switch , the values in Table 3.4 will be used.
Table 3.4 Default Cluster Sizes for NTFS
Note
Windows 2000, like Windows NT 3.51 and Windows NT 4.0, supports file compression. Since file compression is not supported on cluster sizes above 4 K, the default NTFS cluster size for Windows 2000 never exceeds 4 K. For more information about NTFS compression, see "File and Folder Compression" later in this chapter.
Boot Sector
The first information found on an NTFS volume is the boot sector. The boot sector starts at sector 0 and can be up to 16 sectors long. It consists of two structures:
The BIOS parameter block, which contains information on the volume layout and file system structures.
Code that describes how to find and load the startup files for the operating system being loaded. For Windows 2000, this code loads the file Ntldr. For more information about the boot sector, see "Disk Concepts and Troubleshooting" in this book.
Master File Table and Metadata
When a volume is formatted with NTFS, a Master File Table (MFT) file and other pieces of metadata are created. Metadata are the files NTFS uses to implement the file system structure. NTFS reserves the first 16 records of the MFT for metadata files.
Note
The data segment locations for both $Mft and $MftMirr are recorded in the boot sector. If the first MFT record is corrupted, NTFS reads the second record to find the MFT mirror file. A duplicate of the boot sector is located at the end of the volume.
Table 3.5 lists and briefly describes the metadata stored in the MFT.
Table 3.5 Metadata Stored in the Master File Table
The remaining records of the MFT contain the file and directory records for each file and directory on the volume.
NTFS creates a file record for each file and a directory record for each directory created on an NTFS volume. The MFT includes a separate file record for the MFT itself. These file and directory records are stored on the MFT. The attributes of the file are written to the allocated space in the MFT. Besides file attributes, each file record contains information about the position of the file record in the MFT.
Each file usually uses one file record. However, if a file has a large number of attributes or becomes highly fragmented, it may need more than one file record. If this is the case, the first record for the file, called the base file record, stores the location of the other file records required by the file. Small files and directories (typically 1,500 bytes or smaller) are entirely contained within the file's MFT record.
Directory records contain index information. Small directories might reside entirely within the MFT structure, while large directories are organized into B-tree structures and have records with pointers to external clusters that contain directory entries that could not be contained within the MFT structure.
NTFS File Attributes
Every allocated sector on an NTFS volume belongs to a file. Even the file system metadata is part of a file. NTFS views each file (or folder) as a set of file attributes. Elements such as the file's time stamp are always resident attributes. When the information for a file is too large to fit in its MFT file record, some of the file attributes are nonresident. Nonresident attributes are allocated one or more clusters of disk space and stored as an alternate data stream in the volume. NTFS creates the Attribute List attribute to describe the location of both resident and nonresident attribute records.
Table 3.6 lists the file attributes defined by NTFS, although other file attributes might be defined in the future.
Table 3.6 NTFS File Attribute Types
MS-DOS -Readable File Names on NTFS Volumes
By default, Windows NT and Windows 2000 generate MS-DOS-readable file names on all NTFS volumes. To improve performance on volumes with many long, similar names, you can change the default value of the registry entry NtfsDisable8dot3NameCreation (in HKEY_LOCAL_MACHINE\System \CurrentControlSet\Control\FileSystem) to 1 .. | http://technet.microsoft.com/en-us/library/cc976808(d=printer).aspx | CC-MAIN-2013-48 | en | refinedweb |
Type: Posts; User: nvibest
Ah, thank you. I've modified my code to understand a little better:
import java.util.Scanner;
public class CubeTester {
//Instance variables
private Cube c1;
private Cube c2;
private...
Hello,
I have a method:
public Cube(String id, String faceValues){
this.id = id;
id.toUpperCase();
for (int i =0; i <6; i++){
faces[i]=faceValues.charAt(i);
}
faceIndex = 0;...
Hello all,
This is my first post at the saloon. I've been writing a program that puts data from a .dat file into a table that I formatted. It's been going smoothly so far but recently I have been...
whole code.....
import java.util.Scanner;
public class DateCalculator {
public static void main(String[] args) {
Scanner s = new Scanner (System.in);
int month;
int day;
int year;...
I thought I made sure that wouldn't happen by having any month less than 0 or above 12 to be 'Invalid' with this code.....
......
if (year < 1582){
System.out.println("Invalid Year");
}...
ha, thanks.
What makes it bad exactly?
Hello all,
For the past 5 or so hours:confused::mad: I've been trying to figure out how to condense this basic code that uses nextInt(); to read the first user input integer.
import...
Hello,
I'm writing a program with an equation that involves Math.sin, and I was wondering how, exactly, do you square Math.sin. Do you write it like so......?
A = Math.sin(Math.pow(sin, 2))
...
Hello all,
For the past five hours or so I've been coding and debugging a simple input/output program and have been stuck on this one error message I keep getting. I don't know what kind of error...
my apologies
Hi,
I've been asked to write some code that will use methods from the provided class. When run, the provided class creates a gui of a keypad of numbers, like an ATM. In this class there is a...
Oh I see now. Thanks a lot.
Hello,
I recently have been trying to write a simple program that involves the use of arrays and a random number generator in order to simulate rolling a six sided fair die.
I've posted my coding... | http://www.webdeveloper.com/forum/search.php?s=9b600457ed2de3e27cb9431ea7a56c90&searchid=2424865 | CC-MAIN-2013-48 | en | refinedweb |
Object
LocalizationGridLocalizationGrid
public class LocalizationGrid
A factory for
MathTransform2D backed by a grid of localization.
A grid of localization is a two-dimensional array of coordinate points. The grid size
is
width ×
height. Input coordinates are
(i,j) index in the grid, where i must be in the range
[0..width-1] and j in the range
[0..height-1] inclusive.
Output coordinates are the values stored in the grid of localization at the specified index.
The
LocalizationGrid class is usefull when the
"grid to coordinate system"
transform for a coverage is not some kind of global mathematical relationship like an
affine transform. Instead, the "real world" coordinates
are explicitly specified for each pixels. If the real world coordinates are know only for some
pixels at a fixed interval, then a transformation can be constructed by the concatenation of
an affine transform with a grid of localization.
After a
LocalizationGrid object has been fully constructed (i.e. real world coordinates
have been specified for all grid cells), a transformation from grid coordinates to "real world"
coordinates can be obtained with the
getMathTransform() method. If this transformation is
close enough to an affine transform, then an instance of
AffineTransform is returned.
Otherwise, a transform backed by the localization grid is returned.
The example below goes through the steps of constructing a coordinate reference system for a grid coverage from its grid of localization. This example assumes that the "real world" coordinates are longitudes and latitudes on the WGS84 ellipsoid.
DerivedCRS
public LocalizationGrid(int width, int height)
(NaN,NaN).
width- Number of grid's columns.
height- Number of grid's rows.
public Dimension getSize()
xinput = [0..width-1]and
yinput = [0..height-1]inclusive.
public Point2D getLocalizationPoint(Point source)
getMathTransform()instead.
source- The point in grid coordinates.
IndexOutOfBoundsException- If the source point is not in this grid's range.
public void setLocalizationPoint(Point source, Point2D target)
source- The point in grid coordinates.
target- The corresponding point in "real world" coordinates.
IndexOutOfBoundsException- If the source point is not in this grid's range.
public void setLocalizationPoint(int sourceX, int sourceY, double targetX, double targetY)
sourceX- x coordinates in grid coordinates, in the range
[0..width-1]inclusive.
sourceY- y coordinates in grid coordinates. in the range
[0..height-1]inclusive.
targetX- x coordinates in "real world" coordinates.
targetY- y coordinates in "real world" coordinates.
IndexOutOfBoundsException- If the source coordinates is not in this grid's range.
public void transform(AffineTransform transform, Rectangle region)
transform- The transform to apply.
region- The bounding rectangle (in grid coordinate) for region where to apply the transform, or
nullto transform the whole grid.
public boolean isNaN()
trueif this localization grid contains at least one
NaNvalue.
public boolean isMonotonic(boolean strict)
trueif all coordinates in this grid are increasing or decreasing. More specifically, returns
trueif the following conditions are meets:
strictis
true, then coordinates must be strictly increasing or decreasing (i.e. equals value are not accepted).
NaNvalues are always ignored.
strict-
trueto require strictly increasing or decreasing order, or
falseto accept values that are equals.
trueif coordinates are increasing or decreasing in the same direction for all rows and columns.
public void removeSingularities()
public AffineTransform getAffineTransform()
public MathTransform2D getPolynomialTransform(int degree)
degreeof 0 will returns the math transform backed by the whole grid. Greater values will use a fitted polynomial (affine transform for degree 1, quadratic transform for degree 2, cubic transform for degree 3, etc.).
degree- The polynomial degree for the fitting, or 0 for a transform backed by the whole grid.
public final MathTransform2D getMathTransform()
WarpGridwhile the previous methods return math transforms backed by
WarpPolynomial. | http://docs.geotools.org/latest/javadocs/org/geotools/referencing/operation/builder/LocalizationGrid.html | CC-MAIN-2013-48 | en | refinedweb |
Hai Admin,
I found that some of the hyper link in this tutorial is not enabled :( can u hav a look at it
Post your Comment
C++Tutorials
benefit to download the source code for the example programs, then compile... line, mingw(DevC++ IDE) and msvc++. This site has example code for beginning... of these tutorials is introduce the essential features of the Borland C++ programming
Java Programming Tutorials for beginners
Java Programming tutorials for beginners are made in such a way...
programming with ease.
The Java tutorials available at Roseindia are prepared b...
Complete Java programming tutorials for beginners
Basic java programming
Java - JDK Tutorials
Java - JDK Tutorials
This is the list of JDK tutorials which... features of Java in your
programming. JDK is required to be installed on your... should learn the
Java beginners
tutorial before learning these tutorials. View
Java Example Codes and Tutorials
Java Tutorials - Java Example Codes and Tutorials
Java is great programming... that cause common programming errors. Java source code files are
compiled...
Java programming tutorials:
Core Java
Spring 3.0 Tutorials with example code
Spring 3.0 - Tutorials and example code of Spring 3.0 framework... of example code. The
Spring 3.0 tutorial explains you different modules...
download the example code in the zip format. Then run the code in Eclipse
Java Thread
Java Thread Tutorials
In this tutorial we will learn java Threads in detail. The Java Thread class helps the programmer to develop the threaded application in Java. Thread is simple path of execution of a program. The Java Virtual Machine
Roseindia Java Tutorials
of working examples and source code that helps you understand Java programming more... Example
Java Break Example
Master Java Tutorials (TOC)
Java...Roseindia Java Tutorials are intended to provide in-depth knowledge of Java
Programming
Programming how to save output of java file in .txt format?
Hi,
If you are running the example from dos prompt you can use the > bracket to direct the output to a text file.
Here is the example:
C:>java
Chart & Graphs Tutorials in Java
Chart & Graphs Tutorials in Java
... in Java. Our Chart and Graphs
tutorials will help learn everything you need to learn about chart and graphs
programming in Java. We have provided many examples
Java Programming Code HELP
Java Programming Code HELP Hi, sir/madam.
I am new here and currently developing a program whereby it can read the Java source file... the Class name but now I am stuck at pull out the attribute name. (For example, type
Pragmatic Programmer - Java Tutorials
.
Java Extreme Programming Cookbook
by Eric Burke and Brian Coyner will help you...Pragmatic Programmer
2004-03-05 The Java Specialists' Newsletter [Issue 085....
Welcome to the 85th edition of The Java(tm) Specialists' Newsletter. Isn't
Java Field Initialisation - Java Tutorials
the initialization code
is copied into all the constructor. For Example
public...Java Field Initialization
The fields/variable initialization in any programming language is very
important. Every programmer does this in different ways
XML Tutorials
programming. In these tutorials we have
developed example programs using...
This Example shows how to create an empty DOM Document
. JAXP (Java...
This Example shows you how to Query Xml File via XPath
expression. JAXP (Java API for XML
AWT Tutorials
AWT Tutorials How can i create multiple labels using AWT????
Java Applet Example multiple labels
1)AppletExample.java:
import...;BODY>
<APPLET ALIGN="CENTER" CODE="AppletExample.class" width = "260" height
BASIC Java - Java Tutorials
letter and lover case. For example, case and Case
are different words in java...Basic Java Syntax
This tutorial guide you about the basic syntax of the java programming
language. For writing any program, you need to keep in mind
Tutorial, Java Tutorials
Tutorials
Here we are providing many tutorials on Java related... working example code. Code can be downloaded and used to learn the
technology...
NIO
Java NIO Tutorials
Java Programming video tutorial for beginners
Online Java programming tutorial video section designed specifically.... This interactive way of learning explains the basic concepts of Java programming.... Rookie programmers can get confused. But by learning java programming through
Java Swing Tutorials
Java Swing Tutorials
Java Swing tutorials
- Here you will find many Java Swing... and you can use it in your program.
Java Swing tutorials first gives you brief
Java HashMap - Java Tutorials
implementation without any code compatibility
problem. But this is not possible... the hash code for the invoking map.
boolean isEmpty( )
Returns...
provides a collection-view of the values in the map.
Example :
import
Overloading considered Harmful - Java Tutorials
another look on the small code example presented in a
former edition... of overloading being used (this includes
all unknown Java code), this statement... of
the reference type, Java exhibits quite strange behaviour:
While the server code can
JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources
of code written in the Java
programming language called "scriptlets... pages containing a
combination of HTML, Java, and scripting code. JSPs deliver...
Servlets are Java technology's answer to CGI
programming
In this section of sitemap we have listed all the important sections of java tutorials.
Select the topics you want..., Spring,
SQL, JSF and XML Tutorials.
These All Java Tutorials links
Thread Deadlocks - Java Tutorials
Thread Deadlock Detection in Java
Thread deadlock relates to the multitasking... lock that holds by first thread, this situation is known as Deadlock.
Example
In the given below example, we have created two classes X & Y. They have
XML,XML Tutorials,Online XML Tutorial,XML Help Tutorials
XML Tutorials
Transforming XML with SAX Filters
This Example shows you the way to Transform XML with SAXFilters. JAXP (Java
API for XML Processing) is an interface which provides
java tutorials
java tutorials Hi,
Much appreciated response. i am looking for the links of java tutorials which describes both core and advanced java concepts... topics in detail..but not systematically.For example ..They are discussing about
Linux tutorials and tips
Linux is of the most advanced operating system. Linux is being used to host websites and web applications. Linux also provide support for Java and other programming language. Programmers from all the over the world is developing many
Java Hello World code example
Java Hello World code example Hi,
Here is my code of Hello World program in Java:
public class HelloWorld {
public static void main... Tutorials and download running code examples.
Thanks
JSF - Java Server Faces Tutorials
JSF - Java Server Faces Tutorials
Complete Java Server Faces (JSF...
the example with the complete code of the program.
JSF... based, integrated development
environment (IDE) written in the Java programming
C Tutorials
programming language.
C Array copy example
The example... C Tutorials
In this section we have given large number of tutorials on C
Hyper links are not enabled mohiadeen March 21, 2013 at 10:45 AM
Hai Admin, I found that some of the hyper link in this tutorial is not enabled :( can u hav a look at it
Post your Comment | http://roseindia.net/discussion/46551-FTP-Programming-in-Java-tutorials-with-example-code.html | CC-MAIN-2013-48 | en | refinedweb |
getaddrinfo now supports glibc-specific International Domain Name (IDN) extension flags: AI_IDN, AI_CANONIDN, AI_IDN_ALLOW_UNASSIGNED, AI_IDN_USE_STD3_ASCII_RULES.
getnameinfo now supports glibc-specific International Domain Name (IDN) extension flags: NI_IDN, NI_IDN_ALLOW_UNASSIGNED, NI_IDN_USE_STD3_ASCII_RULES.
Slightly improve randomness of /dev/random emulation.
Allow to use advisory locking on any device. POSIX fcntl and lockf locking works with any device, BSD flock locking only with devices backed by an OS handle. Right now this excludes console windows on pre Windows 8, as well as almost all virtual files under /proc from BSD flock locking.
The header /usr/include/exceptions.h, containing implementation details for 32 bit Windows' exception handling only, has been removed.
Preliminary, experimental support of the posix_spawn family of functions. New associated header /usr/include/spawn.h.
Change magic number associated with process information block so that 32-bit Cygwin processes don't try to interpret 64-bit information and vice-versa.
Redefine content of mtget tape info struct to allow fetching the number of partitions on a tape.
Added CYGWIN environment variable keyword "wincmdln" which causes Cygwin to send the full windows command line to any subprocesses..
Drop support for Windows 2000 and Windows XP pre-SP3.
Add support for building a 64 bit version of Cygwin on x86_64 natively.
Add support for creating native NTFS symlinks starting with Windows Vista by setting the CYGWIN=winsymlinks:native or CYGWIN=winsymlinks:nativestrict option.
Add support for AFS filesystem.
Preliminary support for mandatory locking via fcntl/flock/lockf, using Windows locking semantics. New F_LCK_MANDATORY fcntl command.
New APIs: __b64_ntop, __b64_pton, arc4random, arc4random_addrandom, arc4random_buf, arc4random_stir, arc4random_uniform.
Added Windows console cursor appearance support.
Show/Hide Cursor mode (DECTCEM): "ESC[?25h" / "ESC[?25l"
Set cursor style (DECSCUSR): "ESC[n q" (note the space before the q); where n is 0, 1, 2 for block cursor, 3, 4 for underline cursor (all disregarding blinking mode), or > 4 to set the cursor height to a percentage of the cell height.
For performance reasons, Cygwin does not try to create sparse files automatically anymore, unless you use the new "sparse" mount option.
New API: cfsetspeed.
Support the "e" flag to fopen(3). This is a Glibc extension which allows to fopen the file with the O_CLOEXEC flag set.
Support the "x" flag to fopen(3). This is a Glibc/C11 extension which allows to open the file with the O_EXCL flag set.
New API: getmntent_r, memrchr.
Recognize ReFS filesystem.
CYGWIN=pipe_byte option now forces the opening of pipes in byte mode rather than message mode.
Add mouse reporting modes 1005, 1006 and 1015 to console window.
mkpasswd and mkgroup now try to print an entry for the TrustedInstaller account existing since Windows Vista/Server 2008.
Terminal typeahead when switching from canonical to non-canonical mode is now properly flushed.
Cygwin now automatically populates the /dev directory with all existing POSIX devices.
Add virtual /proc/PID/mountinfo file.
flock now additionally supports the following scenario, which requires to propagate locks to the parent process:
( flock -n 9 || exit 1 # ... commands executed under lock ... } 9>/var/lock/mylockfile
Only propagation to the direct parent process is supported so far, not to grand parents or sibling processes.
Add a "detect_bloda" setting for the CYGWIN environment variable to help finding potential BLODAs.
New pldd command for listing DLLs loaded by a process.
New API: scandirat.
Change the way remote shares mapped to drive letters are recognized when creating the cygdrive directory. If Windows claims the drive is unavailable, don't show it in the cygdrive directory listing.
Raise default stacksize of pthreads from 512K to 1 Meg. It can still be changed using the pthread_attr_setstacksize call.
Drop support for Windows NT4.
The CYGWIN environment variable options "envcache", "strip_title", "title", "tty", and "upcaseenv" have been removed.
If the executable (and the system) is large address aware, the application heap
will be placed in the large memory area. The peflags tool
from the
rebase package can be used to set the large
address awareness flag in the executable file header.
The registry setting "heap_chunk_in_mb" has been removed, in favor of a new per-executable setting in the executable file header which can be set using the peflags tool. See the section called “Changing Cygwin's Maximum Memory” for more information.
The CYGWIN=tty mode using pipes to communicate with the console in a pseudo tty-like mode has been removed. Either just use the normal Windows console as is, or use a terminal application like mintty.
New getconf command for querying confstr(3), pathconf(3), sysconf(3), and limits.h configuration.
New tzset utility to generate a POSIX-compatible TZ environment variable from the Windows timezone settings.
The passwd command now allows an administrator to use the -R command for other user accounts: passwd -R username.
Pthread spinlocks. New APIs: pthread_spin_destroy, pthread_spin_init, pthread_spin_lock, pthread_spin_trylock, pthread_spin_unlock.
Pthread stack address management. New APIs: pthread_attr_getstack, pthread_attr_getstackaddr, pthread_attr_getguardsize, pthread_attr_setstack, pthread_attr_setstackaddr, pthread_attr_setguardsize, pthread_getattr_np.
POSIX Clock Selection option. New APIs: clock_nanosleep, pthread_condattr_getclock, pthread_condattr_setclock.
clock_gettime(3) and clock_getres(3) accept per-process and per-thread CPU-time clocks, including CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID. New APIs: clock_getcpuclockid, pthread_getcpuclockid.
GNU/glibc error.h error reporting functions. New APIs: error, error_at_line. New exports: error_message_count, error_one_per_line, error_print_progname. Also, perror and strerror_r no longer clobber strerror storage.
C99 <tgmath.h> type-generic macros.
/proc/loadavg now shows the number of currently running processes and the total number of processes.
Added /proc/devices and /proc/misc, which lists supported device types and their device numbers.
Added /proc/swaps, which shows the location and size of Windows paging file(s).
Added /proc/sysvipc/msg, /proc/sysvipc/sem, and /proc/sysvipc/shm which provide information about System V IPC message queues, semaphores, and shared memory.
/proc/version now shows the username of whomever compiled the Cygwin DLL as well as the version of GCC used when compiling.
dlopen now supports the Glibc-specific RTLD_NODELETE and RTLD_NOOPEN flags.
The printf(3) and wprintf(3) families of functions now handle the %m conversion flag.
Other new API: clock_settime, __fpurge, getgrouplist, get_current_dir_name, getpt, ppoll, psiginfo, psignal, ptsname_r, sys_siglist, pthread_setschedprio, pthread_sigqueue, sysinfo.
Drop support for Windows NT4 prior to Service Pack 4.
Reinstantiate Cygwin's ability to delete an empty directory which is the current working directory of the same or another process. Same for any other empty directory which has been opened by the same or another process.
Cygwin now ships the C standard library fenv.h header file, and implements the related APIs (including GNU/glibc extensions): feclearexcept, fedisableexcept, feenableexcept, fegetenv, fegetexcept, fegetexceptflag, fegetprec, fegetround, feholdexcept, feraiseexcept, fesetenv, fesetexceptflag, fesetprec, fesetround, fetestexcept, feupdateenv, and predefines both default and no-mask FP environments. See the GNU C Library manual for full details of this functionality.
Support for the C99 complex functions, except for the "long double" implementations. New APIs: cacos, cacosf, cacosh, cacoshf, carg, cargf, casin, casinf, casinh, casinhf, catan, catanf, catanh, catanhf, ccos, ccosf, ccosh, ccoshf, cexp, cexpf, cimag, cimagf, clog, clogf, conj, conjf, cpow, cpowf, cproj, cprojf, creal, crealf, csin, csinf, csinh, csinhf, csqrt, csqrtf, ctan, ctanf, ctanh, ctanhf.
Fix the width of "CJK Ambiguous Width" characters to 1 for singlebyte charsets and 2 for East Asian multibyte charsets. (For UTF-8, it remains dependent on the specified language, and the "@cjknarrow" locale modifier can still be used to force width 1.)
The strerror_r interface now has two flavors; if _GNU_SOURCE is defined, it retains the previous behavior of returning char * (but the result is now guaranteed to be NUL-terminated); otherwise it now obeys POSIX semantics of returning int.
/proc/sys now allows unfiltered access to the native NT namespace. Access restrictions still apply. Direct device access via /proc/sys is not yet supported. File system access via block devices works. For instance (note the trailing slash!)
bash$ cd /proc/sys/Device/HarddiskVolumeShadowCopy1/
Other new APIs: llround, llroundf, madvise, pthread_yield. Export program_invocation_name, program_invocation_short_name. Support TIOCGPGRP, TIOCSPGRP ioctls.
Partially revert the 1.7.6 change to set the Win32 current working directory (CWD) always to an invalid directory, since it breaks backward compatibility too much. The Cygwin CWD and the Win32 CWD are now kept in sync again, unless the Cygwin CWD is not usable as Win32 CWD. See the reworked the section called “Using the Win32 file API in Cygwin applications” for details.
Make sure to follow the Microsoft security advisory concerning DLL hijacking. See the Microsoft Security Advisory (2269637) "Insecure Library Loading Could Allow Remote Code Execution" for details.
Allow to link against -lbinmode instead of /lib/binmode.o. Same for -ltextmode, -ltextreadmode and -lautomode. See the section called “Programming” for details.
Add new mount options "dos" and "ihash" to allow overriding Cygwin default behaviour on broken filesystems not recognized by Cygwin.
Add new mount option "bind" to allow remounting parts of the POSIX file hirarchy somewhere else..
New interfaces mkostemp(3) and mkostemps(3) are added.
New virtual file /proc/filesystems.
clock_gettime(3) and clock_getres(3) accept CLOCK_MONOTONIC.
DEPRECATED with 1.7.7: Cygwin handles the current working directory entirely on its own. The Win32 current working directory is set to an invalid path to be out of the way. [...]
Support for DEC Backarrow Key Mode escape sequences (ESC [ ? 67 h, ESC [ ? 67 l) in Windows console.
Support for GB2312/EUC-CN. These charsets are implemented as aliases to GBK. GB2312 is now the default charset name for the locales zh_CN and zh_SG, just as on Linux.
Modification and access timestamps of devices reflect the current time. the various locale modifiers to switch charsets as on Linux.
Default charset in the "C" or "POSIX" locale has been changed back from UTF-8 to ASCII, to avoid problems with applications expecting a singlebyte charset in the "C"/"POSIX" locale. Still use(2), dup3(2), and pipe2(2)..
Windows 95, 98 and Me are not supported anymore. The new Cygwin 1.7 DLL will not run on any of these systems.
Add support for Windows 7 and Windows Server 2008 R2.
Mount points are no longer stored in the registry. Use /etc/fstab and /etc/fstab.d/$USER instead. Mount points created with mount(1) are only local to the current session and disappear when the last Cygwin process in the session exits.
Cygwin creates the mount points for /, /usr/bin, and /usr/lib automatically from it's own position on the disk. They don't have to be specified in /etc/fstab.
If a filename cannot be represented in the current character set, the character will be converted to a sequence Ctrl-X + UTF-8 representation of the character. This allows to access all files, even those not having a valid representation of their filename in the current character set. To always have a valid string, use the UTF-8 charset by setting the environment variable $LANG, $LC_ALL, or $LC_CTYPE to a valid POSIX value, such as "en_US.UTF-8".
PATH_MAX is now 4096. Internally, path names can be as long as the underlying OS can handle (32K).
struct dirent now supports d_type, filled out with DT_REG or DT_DIR. All other file types return as DT_UNKNOWN for performance reasons.
The CYGWIN environment variable options "ntsec" and "smbntsec" have been replaced by the per-mount option "acl"/"noacl".
The CYGWIN environment variable option "ntea" has been removed without substitute.
The CYGWIN environment variable option "check_case" has been removed in favor of real case-sensitivity on file systems supporting it.
Creating filenames with special DOS characters '"', '*', ':', '<', '>', '|' is supported.
Creating files with special DOS device filename components ("aux", "nul", "prn") is supported.
File names are case sensitive if the OS and the underlying file system supports it. Works on NTFS and NFS. Does not work on FAT and Samba shares. Requires to change a registry key (see the User's Guide). Can be switched off on a per-mount basis.
Due to the above changes, managed mounts have been removed.
Incoming DOS paths are always handled case-insensitive and get no POSIX permission, as if they are mounted with noacl,posix=0 mount flags.
unlink(2) and rmdir(2) try very hard to remove files/directories even if they are currently accessed or locked. This is done by utilizing the hidden recycle bin directories and marking the files for deletion.
rename(2) rewritten to be more POSIX conformant.
access(2) now performs checks using the real user ID, as required by POSIX; the old behavior of querying based on effective user ID is available through the new faccessat(2) and euidaccess(2) AP.
New open(2) flags O_DIRECTORY, O_EXEC and O_SEARCH.
Make the "plain file with SYSTEM attribute set" style symlink default again when creating symlinks. Only create Windows shortcut style symlinks if CYGWIN=winsymlinks is set in the environment.
Symlinks now use UTF-16 encoding for the target filename for better internationalization support. Cygwin 1.7 can read all old style symlinks, but the new style is not compatible with older Cygwin releases.
Handle NTFS native symlinks available since Vista/2008 as symlinks (but don't create Vista/2008 symlinks due to unfortunate OS restrictions).
Recognize NFS shares and handle them using native mechanisms. Recognize and create real symlinks on NFS shares. Get correct stat(2) information and set real mode bits on open(2), mkdir(2) and chmod(2).
Recognize MVFS and workaround problems manipulating metadata and handling DOS attributes.
Recognize Netapp DataOnTap drives and fix inode number handling.
Recognize Samba version beginning with Samba 3.0.28a using the new extended version information negotiated with the Samba developers.
Stop faking hardlinks by copying the file on filesystems which don't support hardlinks natively (FAT, FAT32, etc.). Just return an error instead, just like Linux.
List servers of all accessible domains and workgroups in // instead of just the servers in the own domain/workgroup.).
New openat family of functions: openat, faccessat, fchmodat, fchownat, fstatat, futimesat, linkat, mkdirat, mkfifoat, mknodat, readlinkat, renameat, symlinkat, unlinkat.
Other new APIs: posix_fadvise, posix_fallocate, funopen, fopencookie, open_memstream, open_wmemstream, fmemopen, fdopendir, fpurge, mkstemps, eaccess, euidaccess, canonicalize_file_name, fexecve, execvpe.
New implementation for blocking sockets and select on sockets which is supposed to allow POSIX-compatible sharing of sockets between threads and processes.
send/sendto/sendmsg now send data in 64K chunks to circumvent an internal buffer problem in WinSock (KB 201213).
New send/recv option MSG_DONTWAIT.
IPv6 support. New APIs and later,, gethostbyname2, iruserok_sa, rcmd_af, rresvport_af. getifaddrs, freeifaddrs, if_nametoindex, if_indextoname, if_nameindex, if_freenameindex.
Add /proc/net/if_inet6.
Reworked pipe implementation which uses overlapped IO to create more reliable interruptible pipes and fifos.
The CYGWIN environment variable option "binmode" has been removed.
Improved fifo handling by using native Windows named pipes.
Detect when a stdin/stdout which looks like a pipe is really a tty. Among other things, this allows a debugged application to recognize that it is using the same tty as the debugger.
Support UTF-8 in console window.
In the console window the backspace key now emits DEL (0x7f) instead of BS (0x08), Alt-Backspace emits ESC-DEL (0x1b,0x7f) instead of DEL (0x7f), same as the Linux console and xterm. Control-Space now emits an ASCII NUL (0x0) character.
Support up to 64 serial interfaces using /dev/ttyS0 - /dev/ttyS63.
Support up to 128 raw disk drives /dev/sda - /dev/sddx.
New API: cfmakeraw, get_avphys_pages, get_nprocs, get_nprocs_conf, get_phys_pages, posix_openpt.. The default locale in the absence of one of the aforementioned environment variables is "C.UTF-8".), "KOI8-R", "KOI8-U", "SJIS", "GBK", "eucJP", "eucKR", and "Big5".
Allow multiple concurrent read locks per thread for pthread_rwlock_t..
Support for WCONTINUED, WIFCONTINUED() added to waitpid and wait4.
New APIs: _Exit, confstr, insque, remque, sys_sigabbrev, posix_madvise, posix_memalign, reallocf, exp10, exp10f, pow10, pow10f, lrint, lrintf, rint, rintf, llrint, llrintf, llrintl, lrintl, rintl, mbsnrtowcs, strcasestr, stpcpy, stpncpy, wcpcpy, wcpncpy, wcsnlen, wcsnrtombs, wcsftime, wcstod, wcstof, wcstoimax, wcstok, wcstol, wcstoll, wcstoul, wcstoull, wcstoumax, wcsxfrm, wcscasecmp, wcsncasecmp, fgetwc, fgetws, fputwc, fputws, fwide, getwc, getwchar, putwc, putwchar, ungetwc, asnprintf, dprintf, vasnprintf, vdprintf, wprintf, fwprintf, swprintf, vwprintf, vfwprintf, vswprintf, wscanf, fwscanf, swscanf, vwscanf, vfwscanf, vswscanf.
Getting a domain user's groups is hopefully more bulletproof now.
Cygwin now comes with a real LSA authentication package. This must be manually installed by a privileged user using the /bin/cyglsa-config script. The advantages and disadvantages are noted in
Cygwin now allows storage and use of user passwords in a hidden area of the registry. This is tried first when Cygwin is called by privileged processes to switch the user context. This allows, for instance, ssh public key sessions with full network credentials to access shares on other machines.
New options have been added to the mkpasswd and mkgroup tools to ease use in multi-machine and multi-domain environments. The existing options have a slightly changed behaviour.
New ldd utility, similar to Linux.
New link libraries libdl.a, libresolv.a, librt.a.. This warning may be disabled via the new CYGWIN=nodosfilewarning setting.
The CYGWIN environment variable option "server" has been removed. Cygwin automatically uses cygserver if it's available.
Allow environment of arbitrary size instead of a maximum of 32K.
Don't force uppercase environment when started from a non-Cygwin process. Except for certain Windows and POSIX variables which are always uppercased, preserve environment case. Switch back to old behaviour with the new CYGWIN=upcaseenv setting.
Detect and report a missing DLL on process startup.
Add /proc/registry32 and /proc/registry64 paths to access 32 bit and 64 bit registry on 64 bit systems.
Add the ability to distinguish registry keys and registry values with the same name in the same registry subtree. The key is called "foo" and the value will be called "foo%val" in this case.
Align /proc/cpuinfo more closly to Linux content.
Add /proc/$PID/mounts entries and a symlink /proc/mounts pointing to /proc/self/mounts as on Linux.
Optimized strstr and memmem implementation.
Remove backwards compatibility with old signal masks. (Some *very* old programs which use signal masks may no longer work correctly).
Cygwin now exports wrapper functions for libstdc++ operators new and delete, to support the toolchain in implementing full C++ standards conformance when working with shared libraries.
Different Cygwin installations in different paths can be run in parallel without knowing of each other. The path of the Cygwin DLL used in a process is a key used when creating IPC objects. So different Cygwin DLLs are running in different namespaces.
Each Cygwin DLL stores its path and installation key in the registry. This allows troubleshooting of problems which could be a result of having multiple concurrent Cygwin installations. | http://sourceware.org/cygwin/cygwin-ug-net/ov-new1.7.html | CC-MAIN-2013-48 | en | refinedweb |
table of contents
- buster 1:9.11.5.P4+dfsg-5.1
- testing 1:9.11.5.P4+dfsg-5.1+b1
- unstable 1:9.11.14+dfsg-3
- experimental 1:9.11.8+dfsg-1
NAME¶lwres_nooprequest_render, lwres_noopresponse_render, lwres_nooprequest_parse, lwres_noopresponse_parse, lwres_noopresponse_free, lwres_nooprequest_free - lightweight resolver no-op message handling
SYNOPSIS¶
#include <lwres/lwres.h>
lwres_result_t lwres_nooprequest_render(lwres_context_t *ctx, lwres_nooprequest_t *req, lwres_lwpacket_t *pkt, lwres_buffer_t *b);
lwres_result_t lwres_noopresponse_render(lwres_context_t *ctx, lwres_noopresponse_t *req, lwres_lwpacket_t *pkt, lwres_buffer_t *b);
lwres_result_t lwres_nooprequest_parse(lwres_context_t *ctx, lwres_buffer_t *b, lwres_lwpacket_t *pkt, lwres_nooprequest_t **structp);
lwres_result_t lwres_noopresponse_parse(lwres_context_t *ctx, lwres_buffer_t *b, lwres_lwpacket_t *pkt, lwres_noopresponse_t **structp);
void lwres_noopresponse_free(lwres_context_t *ctx, lwres_noopresponse_t **structp);
void lwres_nooprequest_free(lwres_context_t *ctx, lwres_nooprequest_t **structp);
DESCRIPTION¶These are low-level routines for creating and parsing lightweight resolver no-op request and response messages.
The no-op message is analogous to a ping packet: a packet is sent to the resolver daemon and is simply echoed back. The opcode is intended to allow a client to determine if the server is operational or not.
There are four main functions for the no-op opcode. One render function converts a no-op request structure — lwres_nooprequest_t — to the lightweight resolver's canonical format. It is complemented by a parse function that converts a packet in this canonical format to a no-op request structure. Another render function converts the no-op response structure — lwres_noopresponse_t to the canonical format. This is complemented by a parse function which converts a packet in canonical format to a no-op response structure.
These structures are defined in lwres/lwres.h. They are shown below.
#define LWRES_OPCODE_NOOP 0x00000000U
typedef struct { uint16_t datalength; unsigned char *data; } lwres_nooprequest_t;
typedef struct { uint16_t datalength; unsigned char *data; } lwres_noopresponse_t;
Although the structures have different types, they are identical. This is because the no-op opcode simply echos whatever data was sent: the response is therefore identical to the request.
lwres_nooprequest_render() uses resolver context ctx to convert no-op request structure req to canonical format. The packet header structure pkt is initialised and transferred to buffer b. The contents of *req are then appended to the buffer in canonical format. lwres_noopresponse_render() performs the same task, except it converts a no-op response structure lwres_noopresponse_t to the lightweight resolver's canonical format.
lwres_nooprequest_parse() uses context ctx to convert the contents of packet pkt to a lwres_nooprequest_t structure. Buffer b provides space to be used for storing this structure. When the function succeeds, the resulting lwres_nooprequest_t is made available through *structp. lwres_noopresponse_parse() offers the same semantics as lwres_nooprequest_parse() except it yields a lwres_noopresponse_t structure.
lwres_noopresponse_free() and lwres_nooprequest_free() release the memory in resolver context ctx that was allocated to the lwres_noopresponse_t or lwres_nooprequest_t structures referenced via structp. | https://manpages.debian.org/unstable/libbind-dev/lwres_nooprequest_free.3.en.html | CC-MAIN-2020-10 | en | refinedweb |
DEBSOURCES
Skip Quicknav
sources / libxml-java / 1.1.6.dfsg-3 /
---------------
1. WHAT's NEW
---------------
A list of changes in recent versions:
0.99.0: (30-May-2008)
* [BUG] AbstractReadHandlerFactory must not reset the default handler if the
current configuration does not provide one.
* Switched from JCommon to LibBase. All version information is now contained in
the manifest. The Project-Info implementation reads the version numbers from
the Jar's Manifest or the compiler-output-directory.
* Various cleanups in the API.
* AttributeMap no longer throws a CloneNotSupportedException, like all Collections.
* [BUG] DefaultTagDescription assumed that undefined elements were safe to have
indentions. This created beautiful documents, but sadly the added spaces broke
many of these generated documents.
A empty configuration passed into the configure() method no longer causes
the tag-description to treat all elements as indentable.
* Added better logging to assist the user in case of parsing errors. The current
parse-position is now always logged along with the error.
* Performance:
- AttributeMap is no longer synchronized, as synchronization has to
happen in a larger context most of the time.
- Added streaming writer methods so that attribute values can be normalized
while being sent to the stream instead of having to create temporary strings.
0.9.11: (02-Nov-2007)
* Upgraded to jcommon-1.0.12
0.9.10: (29-Oct-2007)
* Performance Update
* [BUG] XML-writer without encoding produced invalid xml-header.
DefaultTagDescription can now be configured using Java-Calls
AttributeList.setAttribute() with null-value is now the same as removeAttribute
0.9.9: (16-Oct-2007)
* The build-system now uses the modular build again. By default the
compile operation returns a automatic-monolithic build that
automatically strips all classes that have unresolved dependencies.
* Automatic fixes as reported by IntelliJ-IDEA's inspections
* Updated the copyright header on all java-files
0.9.8: (24-Sep-2007)
* All releases are now built with JDK 1.4. However, we are still compatible
with JDK 1.2.2 and all releases run and compile under that release.
* [BUG] The XmlWriter's static normalize() method was a cause of synchronization
and thread contention issues.
0.9.7: (30-Jul-2007)
* [BUG] AttributeList#getAttribute() did not work.
* Added common namespace URIs to the LibXmlInfo class, as these namespaces are
used in most XML and XHTML processing applications.
* Modified the build system so that the build.xml file is now in the root of the
project directory.
0.9.6: (24-Jun-2007)
* [BUG] CharacterEntities for newline and linefeed characters were wrong.
0.9.5: (27-May-2007)
* Added an HTML-Compatibility mode, so that short empty tags have a space before
close-marker.
0.9.4: (21-May-2007)
* Added support for streaming large Chunks of character-data using a
java.io.Reader. This avoids the creation of temporary strings and reduces
the memory footprint in some cases.
* Added a way to directly parse an XML document without the need to go
through LibLoader's interfaces.
0.9.3: (27-Apr-2007)
* [BUG] AttributeList.removeAttribute(..) did not work at all.
0.9.2: (01-Apr-2007)
* Improved the ability to create XML-Snipplets and to embed the XMLWriter's
output in other content.
0.9.1: (07-Mar-2007)
* LibLoader's new resource-key system required some change.
0.9.0: (25-Jan-2007)
* LibXML is feature complete. Both the parser and writer classes
are doing what they should and adding anything else would add extra
code for no real value.
* The XMLWriterSupport does no longer define the namespaces on a per-document
basis. Namespaces are declared using attributes and are inherited to all
child-elements. Namespaces can be redefined on each element, if needed.
* The parser now follows the Namespace standard and accepts namespace
declarations on all elements.
* Some first source-code documentation has been added.
0.2.1: (22-Dec-2006)
* Improved the parsing of xml files that have a DTD. A handler can
now map the DTD into a default namespace. All elements for that
document type will have the namespace asigned.
0.2.0: (03-Dec-2006)
* Added the XMLWriterSupport and better tag managment.
* Each LibXml parser has now access the LibLoader-ResourceKey that was
used to load the Xml-Document.
0.1.1: (30-Jul-2006)
* More changes to the parser.
0.1.0: (29-Jun-2006)
* Initial release of LibXML | https://sources.debian.org/src/libxml-java/1.1.6.dfsg-3/ChangeLog.txt/ | CC-MAIN-2020-10 | en | refinedweb |
Polygons disappear with Polygon Object Generator
On 13/02/2018 at 11:00, xxxxxxxx wrote:
(Object Generator Plugin in Python for Cinema 4D R19.024 on macOS Sierra)
Hello Everyone!
If I register an object plugin as a c4d.OBJECT_GENERATOR | c4d.OBJECT_POLYGONOBJECT and return a polygon object from the GetVirtualObjects() method, the polygon object will disappear from the view, when I switch from model mode to point mode.
If the plugin will be registered as a c4d.OBJECT_GENERATOR | c4d.OBJECT_POINTOBJECT, the polygon object will NOT disappear from the view, when I switch from model mode to point mode.
Like with the point object generator, I would like to be able to see the polygon object of a polygon object generator when I switch to point mode. Is this possible?
See the code below for a simplified example.
Best regards
Tim
PS: I'm new to Cinema 4D plugin development and new to Cinema 4D in general.
import math import sys import os import c4d from c4d import bitmaps, gui, plugins, utils PLUGIN_ID = 9119119 class TestPlugin(plugins.ObjectData) : def Init(self, node) : return True def GetVirtualObjects(self, op, hierarchyhelp) : dirty = op.CheckCache(hierarchyhelp) or op.IsDirty(c4d.DIRTY_DATA) if dirty is False: return op.GetCache(hierarchyhelp) return self.CreateSimplePolyObj() def CreateSimplePolyObj(self) : op = c4d.PolygonObject(4, 1) op.SetPoint(0, c4d.Vector( 0.0, 0.0, 0.0)) op.SetPoint(1, c4d.Vector(100.0, 0.0, 0.0)) op.SetPoint(2, c4d.Vector(100.0, 0.0, 100.0)) op.SetPoint(3, c4d.Vector( 0.0, 0.0, 100.0)) polygon = c4d.CPolygon(0, 1, 2, 3) op.SetPolygon(0, polygon) op.SetPhong(True, 1, utils.Rad(80.0)) op.Message(c4d.MSG_UPDATE) return op if __name__ == "__main__": path, file = os.path.split(__file__) bmp = bitmaps.BaseBitmap() bmp.InitWith(os.path.join(path, "res", "some.tif")) plugins.RegisterObjectPlugin( id = PLUGIN_ID, str = "Py-TestPlugin", g = TestPlugin, description = "Opytestplugin", icon = bmp, info = c4d.OBJECT_GENERATOR | c4d.OBJECT_POLYGONOBJECT )
On 14/02/2018 at 03:05, xxxxxxxx wrote:
Hi,
welcome to the Plugin Café forums
There seems to be common misconception of our SDK docs (a sign we need to improve them). Just recently we had another thread by merkvilson with the same issue: "Hide child of Object plugin" (a bit down the line).
To make it short, don't use OBJECT_POLYGONOBJECT when registering your generator. It's just a generator (even if generating a polygon object). OBJECT_POLYGONOBJECT is used in special cases only. The objectdata_latticeplanemodifier example (sorry, just realized I'm linking to a C++ example in Python forum) uses OBJECT_POINTOBJECT to achieve the behavior of the controlling plane. OBJECT_POLYGONOBJECT would be used similarly.
On 14/02/2018 at 05:29, xxxxxxxx wrote:
Hi Andreas,
the code above is only a simplified example of the problem.
I'm working on a more complex point generator plugin. The generated object has some (control) points which should be editable in point mode. The final object, which will be displayed in model mode, is a complex polygon object, which is based on the (control) points of the point object. Everything worked as expected until I tried to delete points of the point object in point mode. I created another post about that problem. A solution was to switch the plugin registration from a point object to a polygon object. Now I'm able to delete the (control) points, but the polygon object, which I generate in GetVirtualObjects(), will not be displayed anymore when I switch from model mode to point mode!
Best regards
Tim | https://plugincafe.maxon.net/topic/10629/14076_polygons-disappear-with-polygon-object-generator | CC-MAIN-2020-10 | en | refinedweb |
Post by Ryan Maue
You knew it was only a matter of time after the Rolling Stone criticism of Obama on climate, but Al Gore is back and in a big way. Some are calling it an Inconvenient Truth 2.0 with a brand new slide show connecting extreme weather with global warming. The new “Climate Reality Project” moniker replaces Gore’s previous Alliance for Climate Protection racket. UK Guardian eco-journalist Suzanne Goldenberg gives away the store with this line: “The campaign represents a modest comeback for Gore who has reduced his public profile on climate action in the past few years – probably out of consideration for the political consequences to his fellow Democrat Barack Obama.”
So Al Gore’s decision to lower his profile was to avoid forcing Obama to make politically unpopular decisions? I think there are additional reasons related to his personal life.
Thankfully,. Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.
Gore has put out a new video trailer: it’s clear he has worked on his lecture delivery to sound even more professorial, elitist, and condescending. Experts such as NASA scientist Dr. James Hansen and NCAR’s Dr. Kevin Trenberth are prominently mentioned in several interviews given by Al Gore.
Another aspect of the Gore resurrection is the lockstep adherence to his brand of climate change orthodoxy by his sychophants in the liberal media. Here are some links for a flavor:
Joe Romm: Exclusive: Al Gore On His ‘Climate Realilty Project’ Launch: “It’s Urgent To Rendezvous With Reality To Save The Future Of Civilization As We Know It.”
Suzanne Goldenberg: UK Guardian: Al Gore returns with new climate campaign. Climate Reality Project aims to expose reality of global warming crisis and kicks off with a 24-hour live streamed event
Chris Mooney: some blog: Al Gore Launches the “Climate Reality Project” — which deserves some quotes since he is right on the money:
In my view, Gore has been pretty much right about the science of climate all along. But sad to say, everything I’ve learned about this issue convinces me that there is little he can say or do to get conservative climate “skeptics” to accept that.
Yes, they need to renounce their counter-reality and come back to this one. But paradoxically, having Al Gore tell them that may just drive them farther away. Let’s hope he can appeal to the middle, anyway…
Gr.”
139 thoughts on “Lipstick on a pig: Gore rebrands climate outfit”
I hear the cries, “Oh No, not again!”
September 14th. Weird choice. It’s smack dab in the middle of the week. Not on a weekend or anything. I’m thinking that since that’s about the peak of hurricane season, he’s hoping to have a nice little cat 5 coming at us that he can point to as “proof” of global warming. Or climate change. Or climate disruption. Or whatever we’re calling it these days.
You mean, “Lipstick on Man-Bear-Pig”
[ryanm: wuwt readers are damn quick 😉 ]
“Al Gore is back and is planning “24-hours of Reality“”
Well, that would be a start. Not sure if he could handle it for 24 whole hours.
Maybe he’ll kiss a polar bear with the same authenticity that he kissed Tipper on stage with back in 2000. Or start crying like Jimmy Swaggart?
I wouldn’t p**s in Al Gore’s mouth if his teeth were on fire…
The skeptics aren’t just conservative – they are classical liberal, libertarian or moderate. But they know bullshit when they smell it.
Unfortunately it is Gore’s activities, as well as those of Romm and Mooney, Mann, Hansen and Trenberth, that are re-energizing not just the Republican Party, but the extreme wing-nut end of the Republican Party.
You don’t have to be a “conservative skeptic” to be extremely wary of a politician, any politician, who attempts to define and control “Reality”
He is doubling down. Unfortunately time and odds are against him, so it just looks like a desperation attempt. It is, but do not expect to see truth or reality from him or any of his sycophants.
Wasn’t it Dr. James Hansen who advised Gore on an Inconvenient Truth which was later found to have over 8 factual inaccuracies?
If it cools then how the hell are they going to “connect the dots”? They’ll just make a laughing stock of themselves.
Al you pansy – why are you stopping at 24 hours of your brand of reality? Why not show your true devotion to the Cause and extend the webcast to a full week?
Truthfully, I really just want to see Al praying for the quiet comfort of the grave after ~100 hours of sleep deprivation…
The Gospel,according to Al, Gore, and Bull
So some skeptics have been calling themselves “climate realists.” Gore’s re-branded venture is the “Climate Reality Project,” so the followers can call themselves “Climate Realists.”
How much longer until the (C)AGW faithful insist they are the real “Climate Skeptics”?
Hey, we’re the ones critically evaluating the science and making sure it all agrees with itself and belongs in the overwhelming consensus, thus WE are the ones who are Scientifically Skeptical, not them!
You can put lipstick on a (man-bear-)pig, but it’s still a crazed sex-poodle.
September 15 is also within a day or two of the Arctic Ice minimum, which appears to be tracking to a low similar to 2007.
Don’t forget to open the windows the night before, Al
re: “All of the group’s efforts will be devoted to spreading the truth about the climate crisis and the solutions to it,…”
Why don’t they start with the truth about the data before manipulations. And the truth about historical data.
A good starting point would for AL to admit he lied about CO2 leading temperature in his ice cores. He could also admit his hockey stick was completely wrong and probably intentional fraud.
He could end his introduction by owning up to the lies caught by the British court.
On this foundation of truth, he could then talk about Phil Jones’ BBC interview where he said
the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 for all 4 periods are similar and not statistically significantly different from each other. He can add that Jones also said “that from 1995 to the present there has been no statistically-significant global warming” and he said they believe man is the cause of warming because of “The fact that we can’t explain the warming from the 1950s by solar and volcanic forcing” ()
Then Al can start his rant with the recent cooling.
That is if he really wanted to talk about the truth.
Thanks
JK
PHOTOGRAPH OF COCAINE SMUGGLER JORGE CABRERA WITH AL GORE.
This is one of the two photographs that the Justice Department
was confiscating after it was learned that Jorge, who had donated $20,000 to the Clintons, was a major cocaine smuggler. The woman to the right,unidentified, appears to be the same woman seen in the photo with Hillary below.
Despite all of this, Gore won’t debate the skeptic position. If – somehow – before then the MSM could be convinced to appeal to him to publicly debate some top scientific skeptics, he might do a McCarthy and talk himself – and the hysteria – into the ground.
How can he either be put in a debate or on the MSM refuse to debate?
“…Gore who has reduced his public profile on climate action in the past few years – probably out of consideration for the political consequences to his fellow Democrat Barack Obama.”
More probably because he wanted his waterfront condo deal in San Francisco to fade from the public’s mind.
I think Gore has made all the money he will off global warming. Now I think it’s about avoiding an investigation that would expose the whole thing as a fraud and make him look like Bernie Madoff and put him behind bars for life.
Poor Al . . . ever more desperate to keep the gig going.
Well have Al. Just because you failed out of Divinity School doesn’t mean you aren’t a saint.
Those who fail to learn from history are condemned to repeat it. – attribution: various
“Cataclysmic Weather and Global Climate Change: The New Normal…..”
Hmmmmm – Now where have I seen this act before? algore…. igore…… abby something….
This is a candidate for Best Sceptical News of the Summer. The more Al Gore is on the telly, the more sceptic converts are made.
And the more he refuses to debate with the increasing number of sceptics, the more foolish and shallow he – , and his cause,- will look.
Perhaps it will be a Tipping point for us.
I suspect that this re-emergence has more to do with the collapse of the indulgences trade. Al Gore needs a new revenue stream.
Oh good. He hasn’t actually wised up. He is willing to keep selling his AGW Indulgences right up to the big chill. I was worried he would have been forgotten about by the time the town folk were torch-and-pitch fork angry.
Stay there, Al. You stay right there.
Just like the penultimate scene of a ‘B’ horror movie, when you thought the beast had been slain, and the good guy turns away, there is that final terrifying twist. Similarities of Al Gore to Glen Close’s fatal attraction character in the bath scene are too close for comfort.
“Lipstick on a pig”
I know you are referrencing Gore’s manner, but –
First, you’ve insulted pigs everywhere by relating them to Gore,
and,
second, the “lipstick on a pig” phrase implies putting the lipstick on the front end of the pig which is exactly opposite of where Gore uses it.
Maybe “perfume on a cow patty” would describe Gore’s drivel better?
Dunno.
🙂
I expect that Mr. Gore’s project will result in an endless number of (richly deserved) snarky remarks and general ridicule from people who question his version of the “science.”
Well, this is good news. For a while there, even the alarmist quit identifying with him. This will fun. Are they going to identify with and embrace again, Al “millions degrees/unhappy ending” Gore once again?
I hope so. Makes for easy going. 🙂
Support the man. After AIT skepticism grew. He works for us.
Of course he will do this without someone taking the opposing view. No online debate, no opposing time. All AL, all the time.
He’s waited this long, why not wait for the next IPCC report and have more behind his death by power point?
God thing he is coming back with a Science fiction show… they are running out of candidates for the next Nobel price.
Did somone say that he’s back because he reached the Tipper point?
Chris Mooney: some blog: Al Gore Launches the “Climate Reality Project” — which deserves some quotes since he is right on the money:
“In my view, Gore has been pretty much right about the science of climate all along.”
Really?
Hasn’t he watched An Inconveinient Truth?
What planet is he living on?
I wonder whether there will be a “Gore effect” with respect to Atlantic hurricanes. September 14th is, IIRC, the date for maximum hurricane activity. To date, now mid-July, we have had one rather small tropical storm. There are no hurricanes in sight. I know it is early days, but one never knows. These Gore effects seem to be very strong.
Indeed, this will be entertaining. I can’t wait to see and hear all the BS he’ll spout. Popcorn time
They should have timed it for Halloween…
“Reveal the deniers” ?
Are they hidden?
I don’t remember giving a secret password to get on here.
The stupidity and dishonesty is piling up and reaching levels which seem to have no limit.
What’s next? Calling skeptics terrorists and calling for their prosecution & execution?
Sounds like the divorce has forced him to having to make more money to live the hypocrite life he has “chosen” to live. God help him if he ever had to live like us regular people!
Is a re-launch somewhat equivalent to a second chakra?
Watch the teaser video ( Popcorn please, the full event is going to be an embarrasing own goal for Gore) check out the 3,000 people ‘trained’ by Al Gore to give a slideshow. 😉
can’t wait, every cliche going – How many climate scientists are going to be happy to be associated with this…. yet, you are either with Al or against Al, a dilemma for them.
“reveal the deniers”
“cataclysmic weather events ARE occuring”
“big oil & big money”
Should be interesting. A world event, over 24 hours, Al Gore speaks! 64 days and counting..
Al Gore –.
” !!!
does he really believe this rubbish?!
“Climate Science is the search for fact… not truth. If it’s truth you’re looking for, Dr. Tyree’s philosophy class is right down the hall.”
Hopefully Dr. Jones (Indiana, of course) will forgive me taking liberties with his quote. Seemed apropriate given Gore’s website and Grist’s comment.
Re “The New Normal”; he MUST take this gal with him on his Powerpoint tour; i insist; she’s gold.
Good idea with Gore leading the charge they will over the cliff of ‘doom’ even quicker .
If you want to have fun, travel over to the youtube page where his promo is posted and read some of the supportive comments.
Good, he’s done more damage to their ’cause’ than any other person………..
“Poor Al . . . ever more desperate to keep the gig going.”. You are all the victims of an extremely effective denial campaign and are in a psychological-ideological deadlock which prevents you from seeing reality in an objective manner. This is exactly why initiatives like ’24 hours of climate reality’ are being held.
When I was about 12, I used to think climate change was exxagerated because I felt it was cool to think the opposite of everybody else. I think I can understand how you feel. It’s okay to change your mind.
And if you don’t change your mind, ask yourself this: are you human? – could you be wrong? – what if the doomsday scenarios turn out to be right? – is it responsible to risk such damage based on your being 100% certain that the science is bogus?
I want to reach out to you. I want to work together with you. It’s not about politics. Please leave behind your preconceptions and your fear and start to see the reality of human-caused climate change.
Let’s be friends!
Al’s Climate Launch may turn out to be a Lurch when he speaks.
That Gore Effect is directly proportional to the level of bull.
Take cover.
Some editing of the Picture of Gore is needed:
You need a right hand extension that shows Old Man Winter blowing back.
They’d better use something to keep the “circle flies” away during 24-hours of Reality. You can’t hardly fool them circle flies.
Should be a target-rich environment! The challenge will be getting the truth out to a broad audience so that his fatuous arguments for the AGW echo-chamber can be exposed. Let’s roll!
The first thing that struck me is Gore’s use of the word “Reality” in the name of his venture—and I contemplated the ramifications of this misuse of the English language. So I go into the SBA.gov Web site and look up Truth in Advertising. It doesn’t bode well for Mr. Gore.
You see, I found that the FTC, the main federal agency that enforces advertising laws and regulations through Truth In Advertising, says that:
I submit Gore’s use of the word “Reality” doesn’t conform to the SBA’s requirements as stated above.
I also laugh when Gore maintains that “reality is on our side”. That’s not the reality I see, Mr. Gore. I suggest you open your eyes, quit trying to cover your sorry past, and face the facts that your “Climate Crisis” as you call it is a factual inexactitude.
Yeah Go Al Go.
DirkH says:
July 12, 2011 at 2:15 pm
What a classic! How does she spout that with a straight face???
Come home, Al. The chickens have come home to roost and you are standing under the coop.
2011 is a lean year for ManBearPig:
@bob nye: I would…
So Al Gore’s decision to lower his profile was to avoid forcing Obama to make politically unpopular decisions?
Obama doesn’t need to be forced to make politically unpopular decisions – he’s perfectly capable of doing that all by his lonesome. Witness the last couple years, and now intimating he will stop SS checks to Seniors if he doesn’t get his way on the budget. I have some advise for Obama: Never pick a fight with an old guy; he won’t fight, he’ll just kill ya (politically speaking of course ).
Every time that Al is about to pop up and make yet another presentation, after having spent an unhealthy number of hours learning powerpoint–again–I always wonder if there’s been yet another insane asylum having closed up shop.
If the content is of no import to the presentation, and if no one is allowed to say anything, maybe not even look, no microphone, no cameras, in fact all them nasty evil IRL people is not to know anything, if the preferred preference for building is something akin to old nazi bunkers, a presenter pretty much has to be insane to go through with the presentation when the choice of just making a tweet about having held such a presentation (nobody being the wiser, wink wink nudge nudge) is so much less time consuming and easier. :p
LOL – Al Bore, you are a card! He must have a mansion payment coming up or something – maybe a new pool heater? Nobody took you serious when you were Veep, a Presidential candidate or as a tree hugging hypocrite. Go chase a maid across a room and go away.
It stands to reason that if banging your head on the desk is painful, diving into it headfirst from atop a tall stepladder should feel pretty good. Well, at least it does for a warmist.
Maybe he’s trying to get the VP nomination. Gore, the leader of a Green Party within the Democratic Party, temporarily. In addition to being VP, he could be Green Czar. Yeah, that might work. Gore is set up as the Idol for the Loony Left and Obama moves to the middle. Gore takes over the Democrat Party from the inside.
To Save The Future Of Civilization As We Know It
Precisely the opposite is true, if Gore et Al get their way, there will be no more “civilization as we know it”.
Expect unseasonable cold.
@ Some European says:
July 12, 2011 at 2:30 pm
I can think of a few other historical figures who “just wanted to be friends”. Stalin, PolPot, Genghis Khan, Caligula — just off the top of my head.
One in 10 Species Could Face Extinction: Decline in Species Shows Climate Change Warnings Not Exaggerated, Research Finds
That’s right. Already happened. Panic in the streets.
Someone commented here that hes doubling down, he’s done that already a few times. This would 8 or 10 folding down by now. I would like to follow the money on this one but I can’t. The carbon trading ponzi scheme is over. Does he think he can win another oscar or noble prize? so wheres the money?
Joe Romm about Al Gore at ThinkProgress.
Read the comments. Hint: It’s not ThinkCritical.
DirkH July 12, 2011 at 2:15 pm
That YouTube video-
“Adding comments has been disabled for this video”
How shocking that yet again an alarmist has to prevent challenge.
When you have to call the baloney you’re selling “real,” you’re on very thin ice.
14th September is the fourth day of the Rugby World Cup here in New Zealand. On the 14th will be Samoa vs Namibia (Samoa to win?), Tonga vs Canada (Tonga to win?) and Scotland vs Europe 1 (Scotland to win?). On the 15th (still 14th in California) is Europe 2 vs USA (USA to win?). What players make up Europe 1 and Europe 2? Will they wear the EU flag?
So let’s see, watch the rugby or Al Gore’s 24 hour Unreality Show?!? 🙂
Maybe Gore should go read George Orwell’s 1984 before he starts telling the rest of us ‘free’ people how to think. But then maybe it doesn’t come in a Powerpoint version.
The big difference this time?
We are ready and waiting for him…….
When all else fails, he can make a case for reducing CO2 emissions “just because.” For example:
See? Inconsiderate Dutch drivers are wreaking havoc with natural ecological systems.
Increase the carbon taxes! Force them to drive less! Save the insects before it’s too late!
When Gore and his cult of scientific apocalyptic screamers predict the same old tired scenarios of doom they are correct in one sense. It will be their credibility’s doom from a continued downward spiral of their already abysmal intellectual integrity.
To paraphrase a line from the character Quintus in the movie Gladiator: “
PeopleWarmists should know when they are conquered.”
John
Too much of a good thing…..
This sums it up quite nicely…..
Some European:
Many people including me know
(a) climate has always changed everywhere and it always will,
(b) no unprecedented climate changes have been observed during the past century,
(c) there is no empirical evidence to support the AGW hypothesis,
and
(d) there is evidence that refutes the AGW hypothesis.
But at July 12, 2011 at 2:30 pm you say to those of uis who know these facts:
” I’m ever more desperate to make you guys understand that the earth is warming, that human activities are the main cause,
…
could you be wrong? – what if the doomsday scenarios turn out to be right? ”
I can tell you that your desperation is to be expected because the only things that could encourage us to share your delusion is
(1) observation of some unprecedented climate change,
(2) some empirical evidence which supports the AGW hypothesis
(3) disproof of the evidence which refutes the AGW hypothesis (i.e. the ‘hot spot’ is found, the ‘missing heat’ is found, the ‘warming in the pipeline’ reappears, the predictions of sustained warming start to be confirmed, etc.).
Until then I suggest that you recognise your delusions are certainly wrong because the existing evidence clearly indicates that the AGW hypothesis is wrong and, therefore, the “doomsday scenarios” are impossible.
Additionally, your assertion that you really are deluded into believing your untrue assertions would have more credibility if you did not hide behind an alias.
Richard
Where will this be broadcast? On Gore’s network?
“Thankfully, Al Gore is back”……………
…..the first sign of the coming ice age
Others could coordinate a corresponding “24 Hours Of Fantasy” event that mocks his (which is what it deserves). I’m sure we’ll hear of that, soon…
I’d like to know when this cooling is going to start.
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
Global cooling my butt.
I caution all here not to underestimate this “Climate Reality” show.
This is an “All in” bet by the Gore and allies and here are the stakes:
1. This is 13 months before the US Presidential Election.
2. I am not convinced that Obama will be the Democratic Presidential Candidate in the end. His negatives are high and irreparably so. By the January 2012 State of the Union, Obama will be unable to avoid the “Are you Better off now than 4 years ago?” question that sank Jimmy Carter.
3. So, is it beyond reason that Al Gore will contest the Democratic Nomination? The greens will abandon Obama and Gore came so close in 2000. 9 out of 10 reports will give him a free ride.
4. At the very least, he will be a cheerleader for every Democratic Candidate backing a Climate Change agenda.
5. I’m a Bayesian. Right now, for what ever reason, I think in mid September I think it is even money that the Arctic Ice cover will be below 2007 minimums. It’s better than even money that at least one of the Ice Pack measures will be at historic lows. Even if it is close, it will add credibility to his message. “The deniers have been telling you it’s been cooling for 10 years. Look at the ice pack now and laugh at them. I told you so!.”
6. This is just the overture to 14 months of aggressive climate change propaganda. What is at the end of that 14 months is a US Election and Rio 2012 20-year IPCC Sustainability agenda. During Rio Obama will still in office, even as a lame duck.
Given what is to be gained, what Al Gore has to lose is a pittance The more I think about it, it is strategically very clever.
The poker analogy is poor here. Sept. 14 is Al Gore’s D-Day. It is a commitment of every asset and ally on his side toward total victory.
If we skeptics and freedom lovers do not anticipate and meet this assault with more than humor, the end of 2012 will not be a future I would welcome.
I hope this load of manure will not be required viewing at schools during the day. If Gore has convinced the gov’ts and school boards to show this it must be challenged or opposing views alloted equal time.
Oh no Gore is back – another reason why Mother Gaia is gonna cool down.
Shhh. Don’t tell anyone, but this is the next phase in that astroturfing project.
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
What town? Just for giggles, I picked a random town in S GA (Valdosta) and it doesn’t show above average from May 25 to today. Southern GA being a big place, maybe a little more detail would help.
Can’t resist the urge to boil Climate Reality Project down to an acronym:
C. Ra. P.
Mark H.
‘Joe Romm: Exclusive: Al Gore On His ‘Climate Realilty Project’ Launch: “It’s Urgent To Rendezvous With Reality To Save The Future Of Civilization As We Know It.”’
To save the future of the civilization as we know it?
So, exactly whoms version of civilization during which time period should we save as we know it? Don’t we really already do that by noting previous civilizations in the history books. During the last hundred years all our civilizations, that exist on this planet at any given time, has changed numerous time, for the better no less.
Or what, should the African people still slave in economical bonds to the western world, like the crazy greenies seem to want? Should the Chinese not prosper even more? Should the western world not develop new BIG industries to further western world civilizations? Should not Chile have a space program because it doesn’t fit the crazy hippies or the Venezuelan communists? Should the Japanese or the Indian not have a shot at going to the moon just because the green minions and their green masters think otherwise?
Hooray! The bulb act died!
Al Gore the Climate Scientist ? Can someone reresh my memory as to Al Gores academic qualifications as a climate scientist ?
Brian says:
July 12, 2011 at 4:23 pm
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
Indeed, humanity is unbearable lately.
In what town or city do you live Brian?
I hope they have the snowplows ready at the location Al Gore goes to on September 14.
I’m sure 1.0% of the planet is having extreme weather today – 1 in 100 year type extreme weather. 99% is having relatively normal weather which will go un-noticed.
If 100% of the planet had normal weather for decades at a time, then we could start talking about the end-times and the antichrist being big oil/big coal. We might call it Apocalypse 1:13 and have fancy graphics in the margin.
On more serious note, people living in South Georgia should expect their summer to be hot and humid. It’s subtropics, and their butt have nothing to do with it, unless they would move it up North.
Brian says:
July 12, 2011 at 4:23 pm
I’d like to know when this cooling is going to start.
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
Do you not think your being a tad harsh on the good people of south GA ? lol
“In my view, Gore has been pretty much right about the science of climate all along.”
Apparently Mooney never heard about the British court’s opinion of Gore’s propaganda film.
.”
I guess Chris needs, at minimum, a dozen inaccuracies before he gets suspicious.
Some European says:
July 12, 2011 at 2:30 pm
.”
The Earth is NOT warming in fact it is cooler now than it has been for most of the Holocene. There is nothing about the late 20th century warming that was different to the warming periods in the last 2 centuries and its only just about as warm now as it was in the Medieval Warm Period. There is NOT a solid body of empirical evidence that mankind’s CO2 emissions have had any effect. Indeed as the Earth has not been warming for more than a decade while atmospheric CO2 has risen the evidence if anything is to the contrary.
And if you don’t change your mind, ask yourself this: are you human? – could you be wrong? – what if the doomsday scenarios turn out to be right? – is it responsible to risk such damage based on your being 100% certain that the science is bogus?
I think that YOU should ask yourself if you are human.
Huge amounts of money are being spent on what appears to be a myth that has been latched onto by tax and power hungry politicians. What if we are wrong?? Well what is 100% absolutely sure is that in the time you have been reading this around 10 children have died from malaria – FACT around one child every 6 seconds. Similar numbers are dying of starvation and diseases due to poor water quality or thirst. THIS IS FACT “One person every other second needlessly dies. Approximately 85% of them are children.”
Yet you would rather pay extra money to speculators like Al Gore on a carbon exchange?
Fund more flights by Al Gore between his residences paid for by his companies that sell carbon credits?
What kind of person disregards ACTUAL harm to others and throws money that could save millions of lives a year to self-serving speculators and politicians – then claims it is ‘just in case’ the falsified AGW hypotheses might be right?
Some European you are!
In other words, Al Gore has been an ebarrassment to our cause, and only leads to people turning away from it further, but we can’t tell him to stop.
Some European says:
July 12, 2011 at 2:30 pm
Response:
S. E. – I appreciate your sincerity and your heartfelt belief that our naturally and ever changing climate is directly influenced by man made sources. You have been misled…. but I think I can understand how you feel. I too was gulled into the ‘Save The Earth’ dogma back in the 1970s, until I realized it was just a front for Big Socialism. The baseless Save The Earth dogma you have chosen to embrace is the ‘cross you bear’. It prevents you from examining reality in an objective manner. This is why initiatives like ’24 hours of climate reality’ appeal to you, the misled faithful. You have an indoctrinated need to attend your AGW religious services regularly… and generously give the appropriate tithes!
Hallelujah! We Are Saved!
Just remember this – It’s okay to change your mind. It’s OK to question the Orthodoxy and demand open and honest answers. It’s reasonable and just to expect full disclosure of all data, analyses, programming code, and computer models used to support or refute the central hypothesis of AGW. This will be difficult for you. Your indoctrination is deeply seated and the path back to reality will not be an easy one. The first step is admitting you have been misled……..
Ask yourself this: Are you a rational human? Could you have been misled? Could you be wrong? What if the doomsday scenarios that churn your fevered fears turn out to be entirely false? What if the much contested hypothesis of Anthropogenic Global Warming proves to be baseless or of only minor scientific significance? Would you accept responsibility for deliberately destroying entire global industries supplying and using the low cost energy to feed, house, and clothe the population of an entire planet, because you mistakenly believed a scary story? Would you accept responsibility for bankrupting nation after nation with ill conceived and wasteful spending on far less efficient energy sources and irrational ‘carbon credits’, because you saw a frightening movie and then had a bad dream? Are you personally and collectively willing to accept responsibility for the worldwide starvation, pestilence, and death that stems from unreliable forms of inefficient and expensive energy and will be a direct result of your profoundly misguided actions? Are you human?
Are you human? Yes, of course you are, regardless of how effectively you have been misled. Misleading the gullible is a high human art that we are all susceptible to, on occasion. The more important question is “Are you a rational human?”. Are you willing to move beyond the ‘sound bites’ and ‘press reports’? Are you willing to examine data and analyses from all sides of the issue, to thoroughly inform yourself? Are you willing to acknowledge that much of the data and analysis used to support the hypothesis of Anthropogenic Global Warming is hopelessly compromised by a host of errors embedded in the data collection methods, data samples, data recording systems, faulty data storage, inappropriate use of tree growth rings for temperature proxy data, etc., etc., ad nauseum? Are you willing to acknowledge that much of the foundation analysis using this flawed data is further compromised by a host of human selection, calculation, and interpretation errors, both inadvertent and willful?
When you can honestly challenge the foundation data and the foundation analyses (and righteously so!), you will be well on your path to achieving ‘rational human’ status. You will find yourself proudly accepting the title of ‘skeptic’, as every rational human scientist must. Skepticism is the bedrock of all honest science.
I want to reach out to you. Let’s be rational!
Brian
I’d like to know when this warming is going to start.
I live here in the UK and it’s been well below average nearly everyday since early May basically with unbearable humanity. (did you mean humidity, or do you seriously hate your neighbours?)
Global warming my butt.
Everyone knows the second album is always a flop if it’s left too long and the first album is already out of fashion. Just ask Guns and Roses.
I can’t help thinking he has accepted money from Big Oil to further undermine the global warmists’ cause.
Ian W:
Thankyou for your excellent post at July 12, 2011 at 5:07 pm.
I lacked the courage to be as blunt as you are in your post but, of course, you are right: those truths need to be stated because there are people who need ‘the dots to be joined’ for them and it seems ‘Some European’ is one such (assuming his/her post was genuine).
Again, thankyou.
Richard
I thought Al had been locked up behind bars by men in white coats
Of rested case, lads and gentiles, I hereby present…drum roll…the Nobel life of Randy Man Minus Wife:
Tobacco farmer Gore’s six-fireplace palace:
And “BIO-solar” jet ski launch (and bachelor) pad yacht:
Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.
Oh, please… Of course it will. And Nature and SciAm will be more than happy to publish something for Al Gore to say, “Why, Just Last Week A Brand New Peer Review Paper Saaaaays That All This Horrible Weather Is Indeed Due To Global Climate Disruption!!!!”.
Google: delay on renewables will cost U.S. trillions, over million jobs.
35 Inconvenient Truths
The errors in Al Gore’s movie
Isn’t September 14th Hugo Chavez National Gonorrhea Day in Venezuela?
I have been around long enough to remember the warmist mantra that people should not confuse weather with climate. Now that it is no longer convenient they reject that mantra and instead seek to link weather with climate. Little wonder that qualified and experience scientists like myself have real questions that remain unanswered. I’m not a “denier” I just don’t believe that the scientific case has been made. This man is NOT a scientist so who gave him leave to ponce around the world name-calling real scientists?
“Hi, my name is Al Gore, and I’m a climaholic.”
Stephen Rasey says:
July 12, 2011 at 4:26 pm
Interesting points, Stephen. Thought provoking…. I wonder how much of Al’s personal fortune was tied up in the ‘carbon credit’ trading schema? He may be feeling a bit pinched financially, at the moment, with the collapse of these irrationally unsustainable ‘markets’. As you say, it does look like an ‘all in’ moment, for the man whose home state of Tennessee helped elect George W. Bush.
The choice of Sept 14 is interesting but I did find one interesting fact.
Bishop Gore School in Wales was founded on that day.
I knew this was coming down the pike when I read Trenbreth’s big push for attribution as the next step in climate scare tactics. It’s not enough for him and his flunkies to spread the word, it will take Al Gore’s acolytes to further the mission.
You would think they would have learned their lesson after trying to convince the public that the ’88 droughts would become a frequent occurrence in N. America or that the 2005 hurricane season was the beginning of a new era. But I guess if you cast your net wide enough and lay claim to EVERY event as a sign of global warming, you can never be wrong.
But we all know, the more loudly you squawk, the more immune people become to your squawking. So, let them push this flimsy ‘science of attribution’ … it will only drive the numbers even lower.
I for one must thank dear Al, he made me a skeptic; after watching his Inconvenient “Truth”, I bought Michael Crichton’s “State of Fear”.
With Gore, it will be more like “connect the dolts”
A lot of drive-by’s around here lately. They never seem to stick around and back up their astonishingly narrow assertions. Thanes never did get back to me on the Gavin/martha/polar bear thread. R.Gates has been reduced to comparing our climate with sand piles. I get where he was trying to go with that (tipping points), but it is a sadly weak analogy that does not begin to describe the complexity of our climate, and the interactions of the variables and influx/ outflow of energies. I must say, though, that I am glad that someone gave Gore more rope.
Go home Al. Make a nice warm bowl of soup, turn on the TV and watch til the test-pattern stops. Tomorrow, wake up, turn on your TV, and repeat the above exercise.
I had to watch Apple’s “1984” Mac video again in his honor.
This will be the best thing for skeptics since “No Pressure”.
Imagine that after making a movie grossing over $100 million, getting endless free press and softball interviews, after getting huge tax payer funded conferences and making many tens of millions personally off of AGW, that Gore feels the need to ‘bring climate reality into the mainstream’, and to out ‘deniers’.
This pathetic money waste by Gore & gang is effectively an admission of defeat.
My bet is it falls apart before it even takes off.
Ms. F.(?): “Al sometimes has trouble with blabber control…”
No sooner does Al Gore return ( somebody hinted at this) then along comes the first hints of La Nina round #2.
I note that the usual toxic AGW sycophants who use foul language enthusiastically on the Guardian’s CiF are at their foaming best over Gore’s new movie and the ‘in the pay of big oil’ flag is being waved vigorously. I have never seen so many comments removed from there before now, either.
So he got a bunch of ignorant kiddies to spread the Gospel of the Goreacle? Always use the dumb useful idiots.
Steve Oregon says:
July 12, 2011 at 1:56 pm
“Reveal the deniers” ?
Are they hidden?
I don’t remember giving a secret password to get on here.
The stupidity and dishonesty is piling up and reaching levels which seem to have no limit.
What’s next? Calling skeptics terrorists and calling for their prosecution & execution?
================
actually that HAS been mentioned and proposed. by more than one alarmist.
tattoed foreheads, re education camps, and exploding kids all come to mind.
As the word HOAX appeared my Adobe Flash Player crashed. Prophetic or what?
Isn’t Gore just another cog in the machinery keeping Khrushchev’s prognostication alive?
You forget the bigger-than-BskyB BBC left wing pro-AGW conglomerate: they will lap this drivel up too!!
The title is only half right. I’ve googled up a few hundred online images of Al Gore. And can’t find one where he’s wearing lipstick.
“probably out of consideration for the political consequences to his fellow Democrat Barack Obama.”? What? I thought we were all going to fry, the seas would envelop us and there would be death and destruction from any number of cataclysms if we didn’t act now!!!!
But we had a time out for political considerations? Oh brother!
I suggest we, on the sceptic side, hold a simulcast on Sept. 14, debunking every claim and statement they make, as they make them. Wonderful split screen possibility!
Particularly when they “out” the “deniers”! Why wait as they bring up one name at a time on their screen, we’ll scroll the entire list on ours!
Outing deniers, so dark!Ohhhhh! Kind of like pointing out the gays in a gay bar! This will be hilarious! “See they guy over there waving his arms and jumping up and down? Yes folks, those are the trash hiding in the shadows trying to prevent us from making mone…ummm saving the earth!!” LMFAO!
@Dennis Cox – any of those pictures show his “Shakra”?
I’ve lived in Southern Georgia (St. Mary’s/King’s Bay) and, trust me, it’s always hot. I do take strong exception to the “unbearable humanity” remark too. I found most of the people there quite pleasant.
Dennis Cox says:
July 13, 2011 at 9:13 am
The title is only half right. I’ve googled up a few hundred online images of Al Gore. And can’t find one where he’s wearing lipstick.
Apparently you did not read my post earlier in this thread.
🙂
Mr. Maue, “Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.”
Perhaps you should direct this at Mr. Watts who has in previous posts tried to deny a link between weather and climate based on anecdotal evidence about previous droughts and storms.
[dr maue: i will berate him immediately: anthony, please do not discuss previous droughts and storms … climatology is about the future, not the past]
sceptical says:
July 13, 2011 at 9:37 pm
Mr. Maue, “Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.”
================================
And who the hell are you “sCeptical”?
What gives you the right to say “Mr.” as opposed to “Dr.”?
He has his qualifications.
What are yours?
You just don’t have any better time to spend than to make potshots in your AC controlled room behind the safety of your laptop.
But even so…what does your lame protests have to do with anything here?
Are they part of your 24 hours of reality?
Go ahead and live in your world.
Meanwhile…the rest of us will actually make things happen…and we certainly do not need a complete NIMROD like Al Gore to make anything happen whatsoever…
Chris
Norfolk, VA, USA
[rmaue: you wanna read some really disgusting comments: head on over to Salon.com: page 2 has a dandy about “firing up some ovens”: ]
@Howarth says:
July 12, 2011 at 3:29 pm
“This would 8 or 10 folding down by now. I would like to follow the money on this one but I can’t. …. so wheres the money?”
AL Gore has never made any significant $$$ from his crackpot climate scheming.
He became an “adviser” to Google in April 2001 – one month after his pal Eric Schmidt was hired as CEO – and one would guess that ex-U.S. Vice President Gore received generous pre-IPO stock options as compensation (Schmidt was granted 14 million options at hiring).
Let’s say it was 100,000 options @ $0.30/share (Schmidt’s price one month earlier). In late 2005 (18 months after the 2004 IPO) those options would have been worth $40 million.
Of course anyone can play with the numbers in this manner because it’s unknown how many options Gore was actually granted, but it certainly was greater than zero, and probably substantial given his status as ex-VP.
Finally, I recently checked the web site for Gore’s company Generation Investment LLP, and noticed that the partner bios for each of the (formerly) 21 partners have disappeared. In fact the 21 partners are not even listed anymore. Is that because 18 of those 21 were ex-Goldman Sachs, hence uncomfortable PR? Or have they quit? Unknown. From 2011 SEC filings it appears that the firm continues to have a couple billion under management. But the web site does look rather shoddy and shopworn for an investment firm of this size, and there seem to be no press releases since last year (2010). Odd.
Ryan Maue – could not those be considered death threats (since no one came out of the ovens alive)? Where is David Appell to denounce these death threats?
Phil;
IIRC, the ovens were just used to get rid of the debris from the showers.
New name, same game, to scare the gullible in his best preacher voice, telling the world that the end is nigh and that the only way to save it is to dig deeply into your pockets and give generously to the church of AGW, my brothers and sisters.
In my July 12, 4:26pm post, I erroneously implied that Rio+20 was shorty after the US Nov. 2012 election.
Rio+20 is in fact on June 4-6, 2012.
The Democratic National convention is early September 2012.
The Republican National convention is late August 2012.
Putting Rio+20 well before the U.S. elections is a colosal blunder by the IPCC in my opinion.
The US government will make no commitment before the election. None of the other G-20 will do much if the USA is waffling.
Finally, the IPCC must be geographically obtuse. Choosing to have a Climate Summit in late fall of a Southern Hemisphere location is asking for another record-breaking cold snap to visit during the keynote address.
What a load of BS! Carbon tax, tax the world and make money!
OH REALLY I GUESS 30,000 SCIENTESTS THAT SIGNED A PETITION AND FELT SO STRONGLY THEY MARCHED ON THE WHITE HOUSE SOME OF THE BEST MINDS IN SCIENCE MIND YOU, DONT COUNT, PLUS THE 100 MILLION PLUS HE MAKES IS BECAUSE HE LOVES US, AND OH ONE MORE THING, CARBON DIOXIDE? ISNT THAT WHAT HUMANS BREATH OUT, STOP BREATHING THEN AL AND DO US ALL A FAVOUR YOU FRUAD. | https://wattsupwiththat.com/2011/07/12/lipstick-on-a-pig-gore-rebrands-climate-outfit/ | CC-MAIN-2020-10 | en | refinedweb |
WARNING: Version 5.5 of Beats has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Beats settings can reference other settings splicing multiple optionally custom named settings into new values. References use the same syntax as Environment Variables do. Only fully collapsed setting names can be referenced to.
For example the filebeat registry file defaults to:
filebeat.registry: $\{path.data}/registry
With
path.data being an implicit config setting, that is overwritable from
command line, as well as in the configuration file.
Example referencing
es.host in
output.elasticsearch.hosts:
es.host: '$\{ES_HOST:localhost}' output.elasticsearch: hosts: ['\{es.host}:9200']
Introducing
es.host, the host can be overwritten from command line using
-E es.host=another-host.
Plain references, having no default value and are not spliced with other references or strings can reference complete namespaces.
These setting with duplicate content:
namespace1: subnamespace: host: localhost sleep: 1s namespace2: subnamespace: host: localhost sleep: 1s
can be rewritten to
namespace1: ${shared} namespace2: ${shared} shared: subnamespace: host: localhost sleep: 1s
when using plain references. | https://www.elastic.co/guide/en/beats/libbeat/5.5/config-gile-format-refs.html | CC-MAIN-2020-10 | en | refinedweb |
Opened 7 years ago
Closed 6 years ago
Last modified 6 years ago
#5142 closed Bugs (invalid)
type_erased feels unnecessary.
Description
type_erased Range Adaptor is in release brunch of Boost 1.46.0, but I feel that this is unnecessary. It is enough if there is any_range:
#include <iostream> #include <vector> #include <boost/assign/list_of.hpp> #include <boost/range/any_range.hpp> #include <boost/range/algorithm/for_each.hpp> #include <boost/range/adaptor/filtered.hpp> typedef boost::any_range< int , boost::forward_traversal_tag , int , std::ptrdiff_t > integer_range; void disp(int x) { std::cout << x << std::endl; } void disp_all(integer_range r) { boost::for_each(r, disp); } bool is_even(int x) { return x % 2 == 0; } int main() { const std::vector<int> v = boost::assign::list_of(1)(2)(3)(4)(5); disp_all(v | boost::adaptors::filtered(is_even)); }
Even if there is few it, I feel that I am premature to include it in 1.46.0 official release.
Attachments (0)
Change History (3)
comment:1 Changed 7 years ago by
comment:2 Changed 6 years ago by
I didn't feel it was too early to put this into the release branch. The core of the interface design is built upon experience gained with other any_iterator implementations. The implementation of any_interface has been tested in production environments successfully. The inclusion of the type_erased definitely has application particularly when chaining adaptors that are passing results to templatised range algorithms that are not specialised for any_range.
At this point experience shows that the feature is popular and working.
comment:3 Changed 6 years ago by
The core of the interface design is built upon experience gained with other any_iterator implementations.
I cannot find the experience. Where can it be seen? At least, with the sample on a document, I do not think that it is useful.
<del>Even if there is few it, I feel that I am premature to include it in 1.46.0 official release.</del> At least, I feel that it is too early to include type_erased into 1.46. | https://svn.boost.org/trac10/ticket/5142 | CC-MAIN-2017-47 | en | refinedweb |
Cache resource settings. More...
#include <svn_cache_config.h>
Cache resource settings.
It controls what caches, in what size and how they will be created. The settings apply for the whole process.
Definition at line 49 of file svn_cache_config.h.
total cache size in bytes.
Please note that this is only soft limit to the total application memory usage and will be exceeded due to temporary objects and other program state. May be 0, resulting in default caching code being used.
Definition at line 55 of file svn_cache_config.h. | https://subversion.apache.org/docs/api/latest/structsvn__cache__config__t.html | CC-MAIN-2017-47 | en | refinedweb |
How I've been spending my time later? Playing with SharePoint "v3" Beta 1. Sorry, I should have said SharePoint Server 2007, since it's already been properly announced at.
My current effort is to implement a scenario where you have a parent portal and a few geographically dispersed child sites. The main idea is to keep the children close to the user's location (network-wise) while keeping the capability of performing enterprise-wide searches.
My challenge is to do all this testing without actually having a dozen servers to play with. That's when virtual technologies come to the rescue. I am almost done implementing the whole thing in a single notebook.
First, I wiped one of my notebooks clean. It's an HP NX6125 with an ADM Turion x64 CPU running at 2.20 GHz equipped with 2 MB of RAM. The Host OS (the outside system running on the real machine) is Windows Server 2003 SP1 x64 (yes, the CPU on this notebook is the mobile version of the 64-bit Athlon CPU).
To be able to run without any network dependencies, I installed the Microsoft Loopback Adapter in addition to physical network card in the notebook. This way, I can get a fixed IP required for a DNS server/Active Directory domain controller and also enjoy DHCP connectivity wherever I might connect.
With a fixed IP on the loopback adapter, I was able to load the DNS server and create a zone for my domain. Then I ran DCPROMO to get Active Directory installed on that namespace. I also installed SQL Server 2005 and Virtual Server 2005 R2, both in their x64 editions.
Next, it was time to set up the guest OSes (the virtual machines running under that main host). I ended up setting 3 guest each running Windows Server 2003 SP1 x86. I was able to dedicate 448 MB of RAM to each one without starving the host OS.
Each guest was configured with a virtual network connected to the host's loopback adapter. They got their fixed IP addresses and became members of the domain. I also installed IIS, the .NET Framework 2.0 and the Windows Workflow Foundation (beta). Those are all pre-requisites for SharePoint Server 2007.
I also leveraged Virtual Server's ability to do differencing disks. I created a base image with all the components I needed first. Then I ran sysprep on that image and shut it down. I then created three disks based on that image and did the customization there. That meant that instead of 3 very large virtual disks, I ended up with one large base virtual disk and 3 smaller disks with the customizations.
Next it was time to install SharePoint Server on the three servers, using the host SQL as the storage for all of them. I'm still working on some of the details on the SharePoint customization, but I can already see that this is going to provide the test environment I needed.
Tomorrow I will continue to work on this one, with the goal of configuring two farms in a parent-child Shared Services configuration. Wish me luck... | https://blogs.technet.microsoft.com/josebda/2006/02/22/sharepoint-server-2007/ | CC-MAIN-2017-47 | en | refinedweb |
By now, you have seen several examples of composition.
One of the
first examples was using a method invocation as part of an
expression. Another example is the nested structure of statements;
you can put an if statement within a while loop, within
another if statement, and so on.
Having seen this pattern, and having learned about lists and objects,
you should not be surprised to learn that you can create lists of
objects. You can also create objects that contain lists (as
attributes); you can create lists that contain lists; you can
create objects that contain objects; and so on.
In this chapter and the next, we will look at some examples of these
combinations, using Card objects as an example.
If you are not familiar with common playing cards, now would be a good
time to get a deck, or else this chapter might not make much sense., the rank.
By "encode," we do not mean what some people think, which is to
encrypt or translate into a secret code. What a computer scientist
means by "encode" is "to define a mapping between a
sequence of numbers and the items I want to represent." For example:
An obvious feature of this mapping is that the suits map to integers in
order, so we can compare suits by comparing integers. The mapping for
ranks is fairly obvious; each of the numerical ranks maps to the
corresponding integer, and for face cards:
The reason we are using mathematical notation for these mappings is
that they are not part of the Python program. They are part of the
program design, but they never appear explicitly in the code. The
class definition for the Card type looks like this:
class Card:
def __init__(self, suit=0, rank=2):
self.suit = suit
self.rank = rank
As usual, we provide an initialization method that takes an optional
parameter for each attribute. The default value of suit is
0, which represents Clubs.
To create a Card, we invoke the Card constructor with the
suit and rank of the card we want.
threeOfClubs = Card(3, 1)
In the next section we'll figure out which card we just made.:
suitList = ["Clubs", "Diamonds", "Hearts", "Spades"]
rankList = ["narf", "Ace", "2", "3", "4", "5", "6", "7",
"8", "9", "10", "Jack", "Queen", "King"]
#init method omitted
def __str__(self):
return (self.rankList[self.rank] + " of " +
self.suitList[self.suit])
A class attribute is defined outside of any method, and it can be
accessed from any of the methods in the class.
Inside __str__, we can use suitList and rankList
to map the numerical values of suit and rank to strings.
For example, the expression self.suitList[self.suit] means
"use the attribute suit from the object self as an index
into the class attribute named suitList, and select the
appropriate string."
The reason for the "narf" in the first element in rankList is to act as a place keeper for the zero-eth element of the
list, which should never be used. The only valid ranks are 1 to 13. This
wasted item is not entirely necessary. We could have started at 0,
as usual, but it is less confusing to encode 2 as 2, 3 as 3, and so on.
With the methods we have so far, we can create and print cards:
>>> card1 = Card(1, 11)
>>> print card1
Jack of Diamonds
Class attributes like suitList are shared by all Card
objects. The advantage of this is that we can use any Card
object to access the class attributes:
>>> card2 = Card(1, 3)
>>> print card2
3 of Diamonds
>>> print card2.suitList[1]
Diamonds
The disadvantage is that if we modify a class attribute, it
affects every instance of the class. For example, if we decide
that "Jack of Diamonds" should really be called
"Jack of Swirly Whales," we could do this:
>>> card1.suitList[1] = "Swirly Whales"
>>> print card1
Jack of Swirly Whales
The problem is that all of the Diamonds just became
Swirly Whales:
>>> print card2
3 of Swirly Whales
It is usually not a good idea to modify class attributes.
For primitive types, there are conditional operators
(<, >, ==, etc.)
that compare
values and determine when one is greater than, less than, or equal to
another. For user-defined types, we can override the behavior of
the built-in operators by providing a method named
__cmp__. By convention, __cmp__
has two parameters, self and other, and returns
1 if the first object is greater, -1 if the
second object is greater, and 0 if they are equal to each other.
Some types are completely ordered, which means that you can compare
any two elements and tell which is bigger. For example, the integers
and the floating-point numbers are completely ordered. Some sets are
unordered, which means that there is no meaningful way to say that one
element is bigger than another. For example, the fruits are
unordered, which is why you cannot compare apples and oranges.
The set of playing cards is partially ordered, which means that
sometimes you can compare cards and sometimes not. For example, you, you have to decide which is more
important, rank or suit. To be honest, the choice is
arbitrary. For the sake of choosing, we will say that suit is more
important, because a new deck of cards comes sorted
with all the Clubs together, followed by all the Diamonds, and so on.
With that decided, we can write __cmp__:
In this ordering, Aces appear lower than Deuces (2s).
As an exercise, modify __cmp__ so that Aces are
ranked higher than Kings.
Now that we have objects to represent Cards, the next logical
step is to define a class to represent a Deck. Of course, a
deck is made up of cards, so each Deck object will contain a
list of cards as an attribute.
The following is a class definition for the Deck class. The
initialization method creates the attribute cards and generates
the standard set of fifty-two cards:
class Deck:
def __init__(self):
self.cards = []
for suit in range(4):
for rank in range(1, 14):
self.cards.append(Card(suit, rank))
The easiest way to populate the deck is with a nested loop. The outer
loop enumerates the suits from 0 to 3. The inner loop enumerates the
ranks from 1 to 13. Since the outer loop iterates four times, and the
inner loop iterates thirteen times, the total number of times the body
is executed is fifty-two (thirteen times four). Each iteration
creates a new instance of Card with the current suit and rank,
and appends that card to the cards list.
The append method works on lists but not, of course, tuples.
As usual, when we define a new type of object we want a method
that prints the contents of an object.
To print a Deck, we traverse the list and print each Card:
class Deck:
...
def printDeck(self):
for card in self.cards:
print card
Here, and from now on, the ellipsis (...) indicates that we have
omitted the other methods in the class.
As an alternative to printDeck, we could
write a __str__ method for the Deck class. The
advantage of __str__ is that it is more flexible. Rather
than just printing the contents of the object, it generates a string
representation that other parts of the program can manipulate
before printing, or store for later use.
Here is a version of __str__ that returns a string
representation of a Deck.
To add a bit of pizzazz, it arranges the cards in a cascade
where each card is indented one space more than the previous card:
class Deck:
...
def __str__(self):
s = ""
for i in range(len(self.cards)):
s = s + " "*i + str(self.cards[i]) + "\n"
return s
This example demonstrates several features. First, instead of
traversing self.cards and assigning each card to a variable,
we are using i as a loop
variable and an index into the list of cards.
Second, we are using the string multiplication operator to indent
each card by one more space than the last. The expression
" "*i yields a number of spaces equal to the current value
of i.
Third, instead of using the print command to print the cards,
we use the str function. Passing an object as an argument to
str is equivalent to invoking the __str__ method on
the object.
Finally, we are using the variable s as an accumulator.
Initially, s is the empty string. Each time through the loop, a
new string is generated and concatenated with the old value of s
to get the new value. When the loop ends, s contains the
complete string representation of the Deck, which looks like.
If a deck is perfectly shuffled, then any card is equally likely
to appear anywhere in the deck, and any location in the deck is
equally likely to contain any card.
To shuffle the deck, we will use the randrange function
from the random module. With two integer arguments,
a and b, randrange chooses a random integer in
the range a <= x < b. Since the upper bound is strictly
less than b, we can use the length of a list as the
second argument, and we are guaranteed to get a legal index.
For example, this expression chooses the index of a random card in a deck:
random.randrange(0, len(self.cards))
An easy way to shuffle the deck is by traversing the cards and
swapping each card with a randomly chosen one. It is possible that
the card will be swapped with itself, but that is fine. In fact, if
we precluded that possibility, the order of the cards would be less
than entirely random:
class Deck:
...
def shuffle(self):
import random
nCards = len(self.cards)
for i in range(nCards):
j = random.randrange(i, nCards)
self.cards[i], self.cards[j] = self.cards[j], self.cards[i]
Rather than assume that there are fifty-two cards in the deck, we get
the actual length of the list and store it in nCards.
For each card in the deck, we choose a random card from among the
cards that haven't been shuffled yet. Then we swap the current
card (i) with the selected card (j). To swap the
cards we use a tuple assignment, as in Section 9.2:
self.cards[i], self.cards[j] = self.cards[j], self.cards[i]
As an exercise, rewrite this line of code
without using a sequence assignment.
Another method that would be useful for the Deck class is removeCard, which takes a card as an argument, removes it, and
returns True if the card was in the deck and False
otherwise:
class Deck:
...
def removeCard(self, card):
if card in self.cards:
self.cards.remove(card)
return True
else:
return False
The in operator returns true if the first operand is in the
second, which must be a list or a tuple. If the first operand is an
object, Python uses the object's __cmp__ method to determine
equality with items in the list. Since the __cmp__ in the
Card class checks for deep equality, the removeCard method
checks for deep equality.
To deal cards, we want to remove and return the top card.
The list method pop provides a convenient way to do that:
class Deck:
...
def popCard(self):
return self.cards.pop()
Actually, pop removes the last card in the list, so we are in
effect dealing from the bottom of the deck.
One more operation that we are likely to want is the boolean function
isEmpty, which returns true if the deck contains no cards:
class Deck:
...
def isEmpty(self):
return (len(self.cards) == 0)
Warning: the HTML version of this document is generated from
Latex and may contain translation errors. In
particular, some mathematical expressions are not translated correctly. | http://greenteapress.com/thinkpython/thinkCSpy/html/chap15.html | CC-MAIN-2017-47 | en | refinedweb |
0
Hi. I am using a comparer class to sort a list of People details by their firstName.
The code to do this is as follows:
public class PersonDetailsComparer : IComparer<PersonDetails> { public int Compare(PersonDetails x, PersonDetails y) { int returnValue = 0; if (x != null && y != null && x. FirstName != null && y. FirstName != null) { return x.FirstName.CompareTo(y.FirstName ); } return returnValue; } }
This is working fine, that is the list is sorted in ascending order. But now, I need to add some exceptions, for example I want the following names to appear first from the list:
Lever
Johnson
Zaher
and then sorting the remaining list in ascending order.
Can any one please help me.
Thanks | https://www.daniweb.com/programming/software-development/threads/437749/sorting-using-icomparer | CC-MAIN-2017-47 | en | refinedweb |
Getting this error:
Makefile:835: recipe for target 'libccm_timeline_la_vala.stamp' failed
Disappeared after vala was installed.
Search Criteria
Package Details: cairo-compmgr-git 1:0.3.1.57.g416ae1a-3
Dependencies (8)
- libsm
- gtk2>=2.16.0 (gtk2-patched-filechooser-icon-view, gtk2-patched-gdkwin-nullcheck, gtk2-ubuntu)
- gconf (gconf-gtk2) (make)
- gettext (gettext-git) (make)
- git (git-git) (make)
- gtk-doc (make)
- vala (vala-git, vala0.26) (make)
- intltool>=0.41 (make)
Required by (0)
Sources (1)
Latest Comments
flashywaffles commented on 2015-06-05 11:49
Getting this error:
loserMcloser commented on 2015-05-28 22:35
Sure seems to need vala.
/bin/sh: valac: command not found
cgirard commented on 2015-05-20 12:27
No this use the newer branch without vala.
Osleg commented on 2015-05-20 10:40
The package requires vala as build dependency
ThePierrezou commented on 2015-04-28 18:29
Thank you verry much uchenic, this pkgbuild seem to work :)
uchenic commented on 2015-04-18 19:12
Proposed by delusionallogic PKGBUILD file
delusional commented on 2015-03-24 17:03
The error seems to be caused by the "clone" plugin. Disable that and it will run fine.
acgtyrant commented on 2015-02-11 04:42
I meet the error as Corax too~
Corax commented on 2014-12-21 14:26
I retried today (with a clean build directory), still the same, I get this when I try to launch cairo-compmgr:
0,000002:
0,000040: cannot add class private field to invalid type 'CCMClone'
0,002096: ??:0 ccm_log_print_backtrace()
0,002106: ??:0 g_logv()
0,002120: ??:0 g_log()
0,002124: ??:0 ccm_clone_register_type()
0,002127: ??:0 ccm_clone_get_plugin_type()
0,002131: ??:0 ccm_extension_loader_get_preferences_plugins()
0,002135: ??:0 ccm_preferences_page_new()
0,002138: ??:0 ccm_preferences_new()
0,002142: ??:0 ccm_tray_menu_new()
0,002145: ??:0 ccm_tray_icon_new()
0,002148: ??:0 main()
0,002151: ??:0 __libc_start_main()
0,002155: ??:0 _start()
0,516211:
0,516227: g_type_instance_get_class_private() requires a prior call to g_type_add_class_private()
0,518637: ??:0 ccm_log_print_backtrace()
0,518648: ??:0 g_logv()
0,518652: ??:0 g_log()
0,518656: ??:0 g_type_class_get_private()
0,518659: ??:0 ccm_output_new()
0,518662: ??:0 ccm_screen_plugin_load_options()
0,518666: [0x24ff] ??() ??:0
0,518669: ??:0 ccm_screen_plugin_load_options()
0,518673: ??:0 ccm_screen_plugin_load_options()
0,518676: ??:0 ccm_window_paint()
0,518679: ??:0 ccm_screen_new()
0,518682: ??:0 ccm_display_new()
0,518686: ??:0 ccm_tray_menu_new()
0,518689: ??:0 ccm_tray_icon_new()
0,518692: ??:0 main()
0,518696: ??:0 __libc_start_main()
0,518699: ??:0 _start()
0,520852: ??:0 ccm_log_print_backtrace()
0,520861: ??:0 ccm_preferences_page_plugin_init_utilities_section()
0,520866: sigaction.c:0 __restore_rt()
0,520869: ??:0 ccm_output_new()
0,520872: ??:0 ccm_screen_plugin_load_options()
0,520876: [0x24ff] ??() ??:0
0,520879: ??:0 ccm_screen_plugin_load_options()
0,520883: ??:0 ccm_screen_plugin_load_options()
0,520886: ??:0 ccm_window_paint()
0,520889: ??:0 ccm_screen_new()
0,520893: ??:0 ccm_display_new()
0,520896: ??:0 ccm_tray_menu_new()
0,520899: ??:0 ccm_tray_icon_new()
0,520902: ??:0 main()
0,520905: ??:0 __libc_start_main()
0,520909: ??:0 _start()
cgirard commented on 2014-11-17 16:55
It works from me. Try starting from a clean build directory.
Corax commented on 2014-11-09 18:30
Has anyone tried it with vala 0.26? Even before the PKGBUILD was updated I tried recompiling with vala 0.26 (modifying the PKGBUILD accordingly), but I had a runtime error when running cairo-compmgr, so I downgraded to 0.24 and recompiled again.
Xiaoming94 commented on 2014-10-28 14:35
configure: error: Package requirements (xcomposite,
xdamage,
xext,
xi,
x11,
ice,
sm,
xrandr,
gl,
cairo >= 1.8.0,
pixman-1 >= 0.16.0,
gtk+-2.0 >= 2.16.0
libvala-0.24 >= 0.18.0) were not met:
No package 'libvala-0.24' found
Got these errors while compiling
cgirard commented on 2014-05-30 12:21
Thanks filand and sorry for the delay
Xiaoming94 commented on 2014-05-26 10:23
@flland
Your patch your rejected for me for some reason
Got:
Hunk #1 FAILED at 55.
1 out of 1 hunk FAILED -- saving rejects to file src/ccm-debug.c.rej
:(
filand commented on 2014-05-26 08:12
Following patch helps. Please add folowing to 4.diff or create a separate patch (IMHO prefered):
diff --git a/src/ccm-debug.c b/src/ccm-debug.c
index 1b4d3d7..3fab8f5 100644
--- a/src/ccm-debug.c
+++ b/src/ccm-debug.c
@@ -55,8 +55,9 @@
#include <stdio.h>
#include <stdlib.h>
#include <execinfo.h>
+#include <libiberty/ansidecl.h>
+#include <libiberty/libiberty.h>
#include <bfd.h>
-#include <libiberty.h>
#include <dlfcn.h>
#include <link.h>
#endif /* HAVE_EDEBUG */
Xiaoming94 commented on 2014-05-19 21:17
Same build error as the two below.
Corax commented on 2014-05-18 13:11
Same here, a static assertion fails in multiple files:
G_STATIC_ASSERT (sizeof *(location) == sizeof (gpointer));
bsidb commented on 2014-05-09 00:26
Do you meet compile error? It fails to compile due to an error in
'Makefile:640: recipe for target 'ccm-debug.o' failed'
leong commented on 2014-04-15 14:35
vala0.22 to vala0.24 (2014-04-14) upgrade creates a broken dependency
A quick & dirty workaround (waiting for something better): change these 2 lines in PKGBUILD
sed -i 's!libvala-0.18!libvala-0.22!' configure.ac
sed -i 's!libvala-0.18!libvala-0.22!' vapi/cairo-compmgr.deps
to
sed -i 's!libvala-0.18!libvala-0.24!' configure.ac
sed -i 's!libvala-0.18!libvala-0.24!' vapi/cairo-compmgr.deps
Seems to work fine
cgirard commented on 2014-03-31 20:18
For reference:
cgirard commented on 2014-03-31 19:57
Nice! I'll update ASAP. Did you report upstream?
socke commented on 2014-03-31 19:50
I've fixed the build errors and posted the two patches with a new PKGBUILD in the german thread on:
cgirard commented on 2014-02-23 14:30
Xiaoming94: sorry but I still haven't find how to solve the error reported by Next7. The libiberty error is easy to fix (just replace libiberty.h by libiberty/libiberty.h in the include) but I cannot release a new version which still does not compile.
Xiaoming94 commented on 2014-02-22 21:49
I Get the same Compilation error as dnf
cgirard commented on 2014-01-07 14:02
I have tried a fix which works on cairo-compmgr PKGBUILD but not on the git version (I am hitting the compilation error given by Next7 below).
cgirard commented on 2014-01-07 14:00
Fixed (thanks to xartii)
dnf commented on 2014-01-05 17:54
build() ends up with error:
ccm-debug.c:59:23: fatal error: libiberty.h: No such file or directory
#include <libiberty.h>
I have found the file in:
/usr/include/libiberty.h
Apparently, other users had the same problem, as described here (in German language):
Any sugestions how to build it?
Next7 commented on 2013-12-02 04:01
The package fails to build with vala-0.22.1-1. Any suggestions?
===
cairo-compmgr.vapi:484.9-484.33: error: overriding method `CCM.Window.query_opacity' is incompatible with base method `CCM.WindowPlugin.query_opacity': incompatible type of parameter 1.
public void query_opacity (bool deleted);
^^^^^^^^^^^^^^^^^^^^^^^^^
cairo-compmgr.vapi:494.9-494.37: error: overriding method `CCM.Window.set_opaque_region' is incompatible with base method `CCM.WindowPlugin.set_opaque_region': incompatible type of parameter 1.
public void set_opaque_region (CCM.Region region);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
cairo-compmgr.vapi:273.9-273.30: error: overriding method `CCM.Screen.add_window' is incompatible with base method `CCM.ScreenPlugin.add_window': incompatible type of parameter 1.
public bool add_window (CCM.Window window);
^^^^^^^^^^^^^^^^^^^^^^
cairo-compmgr.vapi:274.9-274.33: error: overriding method `CCM.Screen.remove_window' is incompatible with base method `CCM.ScreenPlugin.remove_window': incompatible type of parameter 1.
public void remove_window (CCM.Window window);
^^^^^^^^^^^^^^^^^^^^^^^^^
ccm-window-animation.vala:322.27-322.29: warning: Gtk is deprecated. Use gtk+-3.0
Compilation failed: 4 error(s), 2 warning(s)
Makefile:569: recipe for target 'libccm_window_animation_la_vala.stamp' failed
make[2]: *** [libccm_window_animation_la_vala.stamp] Error 1
make[2]: Leaving directory 'aur/cairo-compmgr-git/src/cairocompmgr/plugins/window-animation'
Makefile:429: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 'aur/cairo-compmgr-git/src/cairocompmgr/plugins'
Makefile:499: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1
==> ERROR: A failure occurred in build().
Aborting...
cgirard commented on 2013-11-12 11:41
@ckozler: there is not much I can do. Please report this directly to upstream.
ckozler commented on 2013-11-09 23:56
Hi -
First off thank you for providing this package- it really is lightweight and great!
However, after I moved from xf86-video-ati to catalyst-hook/catalyst driver on xorg 1.3 I can no longer seem to run this. Stack trace is below - please let me know what else you need from me to help solve this
┌─[18:54:01]─[ckozler@localhost]
└──> cairo-compmgr-git $ >> cairo-compmgr
Xlib: extension "RANDR" missing on display ":0.0".
0.000002:
0.000516: IA__gtk_widget_set_colormap: assertion 'GDK_IS_COLORMAP (colormap)' failed
0.004024: ??:0 ccm_log_print_backtrace()
0.004067: ??:0 g_logv()
0.004091: ??:0 g_log()
0.004137: ??:0 ccm_preferences_new()
0.004155: ??:0 ccm_tray_menu_new()
0.004174: ??:0 ccm_tray_icon_new()
0.004193: ??:0 main()
0.004211: ??:0 __libc_start_main()
0.004229: ??:0 _start()
0.006077:
0.006111: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
0.009137: ??:0 ccm_log_print_backtrace()
0.009173: ??:0 g_logv()
0.009196: ??:0 g_log()
0.009239: ??:0 ccm_window_new()
0.009259: ??:0 g_object_unref()
0.009276: ??:0 ccm_display_new()
0.009295: ??:0 ccm_tray_menu_new()
0.009312: ??:0 ccm_tray_icon_new()
0.009329: ??:0 main()
0.009348: ??:0 __libc_start_main()
0.009364: ??:0 _start()
0.009383:
0.009394: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
0.012055: ??:0 ccm_log_print_backtrace()
0.012079: ??:0 g_logv()
0.012094: ??:0 g_log()
0.012109: ??:0 ccm_window_new()
0.012124: ??:0 g_object_unref()
0.012138: ??:0 ccm_display_new()
0.012152: ??:0 ccm_tray_menu_new()
0.012166: ??:0 ccm_tray_icon_new()
0.012181: ??:0 main()
0.012195: ??:0 __libc_start_main()
0.012209: ??:0 _start()
0.012236:
0.012251: Composite init failed for (null)
0.014746: ??:0 ccm_log_print_backtrace()
0.014780: ??:0 g_logv()
0.014801: ??:0 g_log()
0.014844: ??:0 ccm_display_new()
0.014862: ??:0 ccm_tray_menu_new()
0.014882: ??:0 ccm_tray_icon_new()
0.014901: ??:0 main()
0.014918: ??:0 __libc_start_main()
0.014936: ??:0 _start()
cgirard commented on 2013-10-09 09:49
Only corrected the vala dep. mesa-libgl provides libgl but other packages provide it as well.
torors commented on 2013-10-08 15:48
After upgrade (okt. 2013) I think those lines must bee changed:
depends=("gtk2>=2.16.0" "vala>=0.20" libsm libgl)
to
depends=("gtk2>=2.16.0" "vala>=0.20" libsm mesa-libgl)
and:
sed -i 's!libvala-0.18!libvala-0.20!' configure.ac
sed -i 's!libvala-0.18!libvala-0.20!' vapi/cairo-compmgr.deps
to
sed -i 's!libvala-0.18!libvala-0.22!' configure.ac
sed -i 's!libvala-0.18!libvala-0.22!' vapi/cairo-compmgr.deps
cgirard commented on 2013-08-04 08:56
Or not... crayZsaaron, please read the wiki about AUR guideline. base-devel are implicit dependencies.
crayZsaaron commented on 2013-08-03 21:37
autoconf, automake, and libtool should be added to the dependencies - The build script relies on autoreconf and aclocal, and libtool is required in configure.ac.
Thanks for contributing, though!
benjiprod commented on 2013-04-22 21:15
vala-0.20 is out
Need to fix sed replacement to libvala-0.20 as cairo-compmgr
cgirard commented on 2013-01-20 18:23
Fixed.
@sporkasaurus: it should fix your issue as well.
cgirard commented on 2013-01-20 18:23
Fixed.
@sporkasaurus: it should fixed your issue as well.
Anonymous comment on 2013-01-18 14:32
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/../../../../lib/libbfd.a(compress.o):function bfd_compress_section_contents: error: undefined reference to 'compressBound'
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/../../../../lib/libbfd.a(compress.o):function bfd_compress_section_contents: error: undefined reference to 'compress'
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/../../../../lib/libbfd.a(compress.o):function bfd_get_full_section_contents: error: undefined reference to 'inflateEnd'
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/../../../../lib/libbfd.a(compress.o):function bfd_get_full_section_contents: error: undefined reference to 'inflateInit_'
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/../../../../lib/libbfd.a(compress.o):function bfd_get_full_section_contents: error: undefined reference to 'inflate'
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/../../../../lib/libbfd.a(compress.o):function bfd_get_full_section_contents: error: undefined reference to 'inflateReset'
collect2: error: ld returned 1 exit status
make[2]: *** [cairo-compmgr] Error 1
make[2]: Leaving directory `/home/koss/Downloads/cairo-compmgr-git/src/cairocompmgr-build/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/koss/Downloads/cairo-compmgr-git/src/cairocompmgr-build/src'
make: *** [all-recursive] Error 1
==> ERROR: A failure occurred in build().
Aborting...
Anonymous comment on 2013-01-16 00:56
Hey guys, am having alot of trouble compiling this. Please see whats in the pastebin. Seems to be different than the issue everyone else is reporting. Any help would be most appreciated.
FoolEcho commented on 2013-01-12 12:10
To compile and link correctly, you need to add -lz to the LIBS (like for the no-git package).
cgirard commented on 2012-11-14 09:37
Well, thanks!
I'm afraid it will. If anyone know how to properly handle this I'm all ear.
DaveCode commented on 2012-11-13 23:27
OK thank you for clarifying what happened to me. It was a freak glitch. Will it happen whenever vala version number bumps?
I would still like the PKGBUILD to check deps before download/build steps. I'm not sure it's possible but it would help.
Thank you so much, you make Arch a happy place for us.
cgirard commented on 2012-11-13 09:24
The PKGBUILD does work out of the box. When you tested vala had just been updated in Arch repos but the dependency check not changed in upstream source files. That is why it failed.
What I meant by more strict was putting something like "vala=0.18" but it would mean that whenever vala is updated in the repo you would not be able to install the update without uninstalling cairo-compmgr-git.
People using vala-git have to edit the PKGBUILD and only them.
DaveCode commented on 2012-11-13 06:19
@cgirard: I meant to communicate something else and think you even got strictness backwards, let me explain.
The PKGBUILD doesn't work out of the box, period, regardless of vala version, stock or -git. It should work with one or the other (or both). Any PKGBUILD should work without edits. This one needs edits no matter which vala is installed. So it fails to track the Arch ecosystem in any way at all.
Pick stock or -git vala and run with it, or allow any version of vala. If anything, I propose less strictness, not more. Thanks again!
cgirard commented on 2012-11-04 20:31
@DaveCode: I could make the vala dependency more strict but it would annoy the user at each vala update. Not sure it is really better...
DaveCode commented on 2012-11-02 12:12
Thank you for maintaining the package. Here is my report. It fails with stock Arch vala 0.18.0-1, for maybe the same reasons as vala-git, I don't know. It builds against stock Arch vala 0.18.0-1 by changing the 0.20's to 0.18's in the commented-out sed commands, and uncommenting same. So whether one uses stock vala or vala-git one needs to edit this PKGBUILD.
Can the PKGBUILD check all deps first, avoiding so many download/build steps before bailing on a missing dep?
Thank you for your efforts in the AUR.
hplgonzo commented on 2012-10-30 13:20
@cgirard: building with vala-git. thx for adding the commented section.
cgirard commented on 2012-10-29 10:53
Switched to a different git repo (more up-to-date). Please remove your src dir before rebuilding (or it won't switch remote git).
@hplgonzo: I've added a commented section that should do what you have asked (not tested).
hplgonzo commented on 2012-10-28 10:13
not building with vala-git (libvala-0.20.so) anymore since last update.
missing possibility to change version in the PKGCONFIG's sed commands from 16 to 20.
hplgonzo commented on 2012-10-28 10:05
not building with vala-git (libvala-0.20.so) anymore since last update.
missing possibility to change version in the PKGCONFIG's sed commands from 16 to 20.
hplgonzo commented on 2012-10-28 10:04
not working with vala-git (libvala-0.20.so) anymore since last update.
missing possibility to change version in the PKGCONFIG's sed commands from 16 to 20.
cgirard commented on 2012-10-15 21:52
@benjiprod: the git url is already set to: git://git.tuxfamily.org/gitroot/ccm/cairocompmgr.git
benjiprod commented on 2012-10-15 21:46
Doesn't work because the good git name is :
name_git : cairo-compmgr
git://git.tuxfamily.org/gitroot/ccm/cairocompmgr.git
cgirard commented on 2012-05-01 17:14
@buergi: yes right.
@sleepforlife: Could you please run: "LANG=C makepkg -s" and post the output?
@donniezazen: Just remove cairo-compmgr-git before building it.
buergi commented on 2012-05-01 16:46
Note to compile it with vala-git(which provides libvala-0.18.so) just change the version in the PKGCONFIG's sed commands from 16 to 18 and fix the depends-line.
donniezazen commented on 2012-05-01 16:09
Install or build missing dependencies for cairo-compmgr-git:
resolving dependencies...
looking for inter-conflicts...
:: vala and vala-git are in conflict. Remove vala-git? [y/N] y
error: failed to prepare transaction (could not satisfy dependencies)
:: cairo-compmgr-git: requires vala-git
sleepforlife commented on 2012-05-01 15:58
cm-timeline.c:1105:8: error: stray '\260' in program
ccm-timeline.c:1105:15: error: expected ';' before 'MEL'
ccm-timeline.c:1105:15: error: stray '\304' in program
ccm-timeline.c:1105:15: error: stray '\260' in program
make[1]: *** [ccm-timeline.lo] Hata 1
make[1]: `/tmp/yaourt-tmp-sleepforlife/aur-cairo-compmgr-git/src/cairocompmgr-build/lib' dizininden çıkılıyor
make: *** [all-recursive] Hata 1
==> HATA: build() içinde bir hata oluştu.
Çıkılıyor...
==> ERROR: Makepkg was unable to build cairo-compmgr-git.
==> Restart building cairo-compmgr-git ? [y/N]
==> ------------------------------------------
cgirard commented on 2012-04-25 15:08
I've read your comments and will update the PKGBUILD ASAP. Switching back to extra/vala seems a good idea.
ewaller commented on 2012-04-25 15:03
I can confirm that on my system, a 64 bit system with testing repositories enabled, that cairo-compmgr does not pkgbuild "Out of the box"
On my system, I used aur/vala-git rather than extra/vala. To link, I required Kulpae's 19 April 12 patch to the PKGBUILD.
Works dandy.
Anonymous comment on 2012-04-24 10:33
Seems to build okay with vala 0.16.0-1 that is in extra now. I changed the sed lines in the PKGBUILD to 0.16 instead of 0.18 and the depends line to vala>=0.16 instead of vala-git.
Also had to make the change to LIBS as suggested by kulpae below, in the PKGBUILD file.
kulpae commented on 2012-04-19 11:43
I had to link to libgmodule-2.0 to make it build:
(because of "ccm-extension.o: undefinied reference to symbol 'g_module_symbol'")
./autogen.sh --prefix=/usr LIBS="-ldl -lgmodule-2.0"
cgirard commented on 2012-04-03 15:54
"==> ERROR: Makepkg was unable to build vala-git." : this is an error with vala-git build not cairo-compmgr-git
donniezazen commented on 2012-04-03 15:18
./autogen.sh: line 10: valac: command not found
**Error**: You must have valac >= 0.12.0 installed
to build vala. Download the appropriate package
from your distribution or get the source tarball at
==> ERROR: A failure occurred in build().
Aborting...
==> ERROR: Makepkg was unable to build vala-git.
Anonymous comment on 2012-04-03 14:45
@zertyz
I met the same problem and now it's fixed. I compiled vala-git 20120216-1 and libvala was updated to 0.18 version. Change the version number in following lines would help.
sed -i 's+vala-0.10+libvala-0.12+' configure.ac
sed -i 's+vala-0.10+libvala-0.12+' vapi/cairo-compmgr.deps
zertyz commented on 2012-03-31 17:57
build is failing with "No package 'libvala-0.16' found"
cgirard commented on 2012-03-07 11:32
"valac" is provided by vala-git. I don't understand how you can be in a state where makepkg try to build without installed dependencies.
I'll have a look to see if there is something else wrong.
segrived commented on 2012-03-06 18:34
can't install. now this package requires "valac"
cgirard commented on 2012-02-16 13:43
OK. Corrected.
buergi commented on 2012-02-15 22:27
+1 for bch24's solution, i had the same problem
bch24 commented on 2011-12-22 11:38
Was unable to build using yaourt.
Had to modify PKGBUILD and edit:
./autogen.sh --prefix=/usr
to
LDFLAGS+="/usr/lib/libdl.so" ./autogen.sh --prefix=/usr
to compile and install without errors.
cgirard commented on 2011-09-22 22:22
Switched to vala-git waiting vala to be updated.
Anonymous comment on 2011-09-21 10:46
vala is out of date
change it into vala-git
libvala could not found
cgirard commented on 2011-03-17 10:02
Corrected. Thanks for the head up.
haawda commented on 2011-03-16 21:18
I had to put
sed -i 's+vala-0.10+libvala-0.12+' configure.ac
sed -i 's+vala-0.10+libvala-0.12+' vapi/cairo-compmgr.deps
before the autogen.sh call to make this build. Otherwise there are complaints about missing vala-0.10. We have 0.12 in the repos. If you change that, also the depends array should reflect this.
We usually prefer install -d over mkdir -p.
cgirard commented on 2010-11-04 14:02
Thanks.
Done.
bcat commented on 2010-11-04 04:39
The Vala patch has been integrated upstream, so now the package won't build until it's removed.
cgirard commented on 2010-10-13 16:42
I've improved the package following cairo-compmgr in community as a guideline.
Det commented on 2010-09-29 14:41
Lol, I now got my name on an official package.
cgirard commented on 2010-09-29 09:35
Because it has been promoted to the community repo:
Det commented on 2010-09-29 08:24
Huh, why was cairo-compmgr removed? Didn't find anything on the mailing list.
cgirard commented on 2010-09-28 09:23
Right. Thank you for the remember.
Det commented on 2010-09-28 09:20
Should probably be mentioned in the bug report you filed?:
cgirard commented on 2010-09-27 20:55
OK. I'll add the relevant option to the PKGBUILD then.
Anonymous comment on 2010-09-27 19:30
"Try removing your '-j3' option in your "MAKEFLAGS""
==============================
Yes, successfully build git-version!
cgirard commented on 2010-09-27 18:45
Yes but the bug could have been introduced upstream between both versions.
Try removing your '-j3' option in your "MAKEFLAGS".
I don't know why but I got a similar .h file not found when using '-j5'.
Anonymous comment on 2010-09-27 18:21
hmm, but cairo-compmgr 0.3.0 () I have now successfully build...
my makepkg.conf
cgirard commented on 2010-09-27 18:05
Ok. I was thinking of a bug with a previous version of automake...
The new error makes me think of some strange behaviour I had when changing some options in my /etc/makepkg.conf. Could you post yours ?
Is anybody able to compile cairo-compmgr on a 32 bits system ?
Anonymous comment on 2010-09-27 17:59
o_O new error)))
Anonymous comment on 2010-09-27 17:56
automake (GNU automake) 1.11.1
cgirard commented on 2010-09-27 17:50
Sorry I was meaning 'automake --version'
Anonymous comment on 2010-09-27 17:13
[catalyst@catalyst ~]$ automake -v
automake: `configure.ac' or `configure.in' is required
cgirard commented on 2010-09-27 15:42
@catalyst: what is your automake version (automake -v) ?
Anonymous comment on 2010-09-26 17:11
yes, 32-bit vala
Det commented on 2010-09-26 09:54
With cairo-compmgr there was also a compilation issue complaining about a non-existing library: "configure.ac:130: warning: macro `AM_GCONF_SOURCE_2' not found in library".
Clearly that message was about gconf and this one about vala ("AM_PROG_VALAC"). So 32-bit vala issue perhaps?
Det commented on 2010-09-24 20:25
Wait... so the configure.ac patch actually fixes that? How is that possible? They share the same source.
Det commented on 2010-09-24 18:17
Maybe the configure.ac patch should be applied for 32-bit users until fixed upstream?:
[ "$CARCH" = "i686" ] && [patch-configure.ac]
cgirard commented on 2010-09-24 14:21
OK. As I don't think this bug is related to the package itself, I've opened a bug upstream:
Anonymous comment on 2010-09-23 18:41
Det commented on 2010-09-23 17:58
Catalyst, please use pastebin with build logs too. Your post is _really_ long.
Det commented on 2010-09-23 17:58
Catalyst, please use pastebin with build logs too. The "Please..." comment should also mention build logs instead of just PKGBUILDs, patches and scripts.
Det commented on 2010-09-23 13:26
People shouldn't use Yaourt either way. It's kinda buggy.
cgirard commented on 2010-09-23 08:50
OK. Waiting for it.
Anonymous comment on 2010-09-23 08:35
I have cleared all, but it is impossible in any way. I use yaourt. A full system update has made yesterday
The full log of error I will lay out later
cgirard commented on 2010-09-23 07:48
Could you please post a log of the error? What AUR helper are you using if you're using one ? Have you tried to delete the folder where files are pulled from git to be sure you have not fiddled with them.
cgirard commented on 2010-09-23 05:51
Seems strange. The bug is corrected upstream. I've changed the branch it pulls when a first checkout has already been done.
Anonymous comment on 2010-09-23 03:56
For me doesn't work. It is necessary to make a patch configure.ac
cgirard commented on 2010-09-22 19:02
Well I'm adopting it then. Let me now if this new version is not working.
Det commented on 2010-09-22 16:46
No, I'm... not going to do that... o_O
Anonymous comment on 2010-09-22 16:12
Det take away
Det commented on 2010-09-22 16:01
Hardly anybody _needs_ to maintain a package. Bumped for starters.
Anonymous comment on 2010-09-22 15:51
))) guys so сan pick up this package, I do not need
Det commented on 2010-09-22 15:11
K, good to know.
cgirard commented on 2010-09-22 15:00
The link I provided is a version between "origin" and "master" in the upstream tree. The pkgbuild sync on the "origin".
Det commented on 2010-09-22 14:48
I was just thinking about the same thing :l...
E: the vala-required thingy has already been fixed upstream. Actually doesn't the link you provided show the "upstream tree" (is that even a nearly correct term?) anyways?
E2: Here's the new package tarball with the correct patching scheme and proper dependencies:
Det commented on 2010-09-22 14:45
I was just thinking about the same thing :l...
E: the vala-required thingy has already been fixed upstream.
E2: Here's the new package tarball with the correct patching scheme and proper dependencies:
Det commented on 2010-09-22 14:37
I was just thinking about the same thing :l...
E: the vala-required thingy has already been fixed upstream.
Det commented on 2010-09-22 14:36
I was just thinking about the same thing :l...
cgirard commented on 2010-09-22 14:27
I will.
But I don't understand the purpose of maintaining AUR packages if you cannot do it by yourself.
Anonymous comment on 2010-09-22 13:56
please will make patch configure.ac and give me PKGBUILD
cgirard commented on 2010-09-22 09:14
Some corrections:
* You should apply the patch after copying the git folder because it complies with the guidelines and because as it is today it avoids rebuilding twice the package (the patch cannot be applied twice)
* configure.ac need to be patched, as done here (vala version):;a=commitdiff;h=06d2dea6cf28c27c11a268f6631ddfb84dd5229d
Det commented on 2010-09-21 18 there very weird..
Anonymous comment on 2010-09-21 18:08
((((((
Det commented on 2010-09-21 17:32
:DD
Anonymous comment on 2010-09-21 17:18
I don't know a fucking english))
Det commented on 2010-09-21 15:59
Still compiles fine here. Have you 'pacman -Syu'd lately? Cleaned your build environment?
Also this probably sounds so pedantic that you "facepalm" your ass of but "not compiled" isn't really proper English grammar - it sounds like you hadn't done it yet. It should be something like "the build fails", "compilation fails", "unable to compile", "can't compile", "doesn't compile" and so on <:).
Anonymous comment on 2010-09-21 13:41
updated. Not compiled...
==> ERROR: Makepkg was unable to build cairo-compmgr-git.
Anonymous comment on 2010-09-21 12:09
thanks, all ok. This evening I will update
Det commented on 2010-09-21 11:45
Really? Well, here you go:
E: Just tried the former link with two different proxies and sure enough, I couldn't download it with either on of them. Perhaps I'm doing something wrong with MediaFire...
Det commented on 2010-09-21 11:41
Really? Well, here you go:
Anonymous comment on 2010-09-21 11:34
Det I can not download this... please upload this file to another host
Det commented on 2010-09-21 11:07
Here's the full thing (tarball): - flagging until fixed.
Anonymous comment on 2010-09-20 16:21
As Det said, the fix is straightforward (one line). Within the build directory there is a file vapi/cairo-compmgr.deps - just change the line 'vala-1.0' to 'vala-0.10'. Make a diff file using diff -u. Then within the PKGBUILD add lines to patch said file with your saved diff.
Det commented on 2010-09-17 14:58
Sure, just rename or symlink all the "0.10" stuff to "1.0" (rather ugly to do that manually, though, since you'd also need to manually remove all that stuff when removing vala(-devel) with pacman). OR apply a patch to this thing to look for the "1.0" stuff from the "0.10" dirs.
Det commented on 2010-09-17 14:57
Sure, just rename or symlink.
Det commented on 2010-09-17 14:56
Sure, just rename.
Anonymous comment on 2010-09-17 06:00
So it is possible to make?
Det commented on 2010-09-17 05:54
The stuff with even vala 0.9.8 (the latest one) is installed using the "vala-0.10" name. That is way even vala 0.9.8 won't work.
Anonymous comment on 2010-08-29 16:53
not compiled...(( where to find vala 0.10???
Anonymous comment on 2010-08-27 08:10
I don't know if it's some sort of dual versioning on vala's part, because I tried installing an older version of cairo-compmgr aswell, and then it complained over not finding vala-0.1.
I removed the 'vala-devel' package I had installed from AUR, and installed version 0.8.1 from the official repositories instead, and the older cairo-compmgr installed fine.
Anonymous comment on 2010-08-26 18:22
hmm...
Anonymous comment on 2010-08-26 10:42
Got this when I tried to compile it:
configure: error: Package requirements (xcomposite,
xdamage,
xext,
xi,
sm,
cairo >= 1.8.0,
pixman-1 >= 0.16.0,
gtk+-2.0 >= 2.16.0
vala-0.12 >= 0.9.7) were not met:
No package 'vala-0.12' found
The latest version of vala isn't new enough?
Anonymous comment on 2010-08-11 18:06
updated
Det commented on 2010-08-10 21:52
Please update the PKGBUILD:
Det commented on 2010-08-10 21:51
Please update the PKGBUILD:
Runiq commented on 2010-06-04 12:17
I have the following depends & makedepends:
depends=('cairo' 'libxcomposite')
makedepends=('gtk-doc' 'intltool' 'cvs')
And it compiled and worked fine.
Anonymous comment on 2010-04-20 12:49
Maybe move some of the deps to optdeps, since it can compile just fine without libgnomeui, etc.
maxi_jac commented on 2010-03-28 15:49
You should add 'cvs' to the makedepends, it is necessary for ./autogen.sh
orivej commented on 2010-03-25 01:01
arch=('any') is a nice idea, but since it is meant to be used for architecture independent packages (see ‘man PKGBUILD’), you should better use arch=('i686' 'x86_64') by default. | https://aur.archlinux.org/packages/cairo-compmgr-git/?ID=30042&comments=all | CC-MAIN-2017-47 | en | refinedweb |
Implement custom map provider
Here is a list of all provider-related changes:
The IMapProvider interface no longer exists.
The MapProviderBase class exists but it should not be used as a base class for custom providers any more.
All custom map providers should inherit either from TiledProvider, or ImageProvider.
TiledProvider is a base class for map providers that show map as a sequence of tiles (e.g. BingMaps, OpenStreetMap).
ImageProvider is a base class for providers that show map as a single image (UriImageProvider).
Since all existing custom providers (prior to Q1 2011) show map as a sequence of tiles, they should be changed to inherit from the TiledProvider class. The only property which must be overridden in these custom providers is SpatialReference -- it should return the actual projection used by the provider (in all existing custom providers prior to Q1 2011 it should be MercatorProjection).
Here is a list of all map source-related changes:
The map provider cannot be used as a map source by itself as it was allowed before.
The logic which provides tiles must be moved to separate class – map source.
All map source classes should inherit either from TiledMapSource, or ImageMapSource classes and override methods depending on the specific map source type.
TiledMapSource is a base class for map sources that return map tiles (that includes all BingMaps sources: aerial, road, birds eye, and all OpensStreetMap sources: Mapnik and Osmarenderer)
ImageMapSource is a base class for map sources that return single map image.
Since all existing providers prior to Q1 2011 show map as a sequence of tiles, their new map source classes should inherit from the TiledMapSource class and meet the following requirements:
Override the Initialize method.
Override the GetTile method and existing custom logic should be moved here.
Call RaiseInitializeCompleted method when the respective custom provider is initialized (in case of simple providers it is enough to call this method from the override Initialize method).
Simple map provider that supports one map source can be as simple as following:
public class MyMapProvider : TiledProvider { /// <summary> /// Initializes a new instance of the MyMapProvider class. /// </summary> public MyMapProvider() : base() { MyMapSource source = new MyMapSource(); this.MapSources.Add(source.UniqueId, source); } /// <summary> /// Returns the SpatialReference for the map provider. /// </summary> public override ISpatialReference SpatialReference { get { return new MercatorProjection(); } } } public class MyMapSource : TiledMapSource { /// <summary> /// Initializes a new instance of the MyMapSource class. /// </summary> public MyMapSource() : base(1, 20, 256, 256) { } /// <summary> /// Initialize provider. /// </summary> public override void Initialize() { // Raise provider initialized event. this.RaiseIntializeCompleted(); } /// <summary> /// Gets the image URI. /// </summary> /// <param name="tileLevel">Tile level.</param> /// <param name="tilePositionX">Tile X.</param> /// <param name="tilePositionY">Tile Y.</param> /// <returns>URI of image.</returns> protected override Uri GetTile(int tileLevel, int tilePositionX, int tilePositionY) { int zoomLevel = ConvertTileToZoomLevel(tileLevel); // Prepare tile url somehow ... string url = CustomHelper.GetTileUrl(tileLevel, tilePositionX, tilePositionY); return new Uri(url); } }
Public Class MyMapProvider Inherits TiledProvider ''' <summary> ''' ''' Initializes a new instance of the MyMapProvider class. ''' ''' </summary> ''' Public Sub New() MyBase.New() Dim source As New MyMapSource() Me.MapSources.Add(source.UniqueId, source) End Sub ''' <summary> ''' ''' Returns the SpatialReference for the map provider. ''' ''' </summary> ''' Public Overrides ReadOnly Property SpatialReference() As ISpatialReference Get Return New MercatorProjection() End Get End Property End Class Public Class MyMapSource Inherits TiledMapSource ''' <summary> ''' ''' Initializes a new instance of the MyMapSource class. ''' ''' </summary> ''' Public Sub New() MyBase.New(1, 20, 256, 256) End Sub ''' <summary> ''' ''' Initialize provider. ''' ''' </summary> ''' Public Overrides Sub Initialize() ' Raise provider intialized event. ' Me.RaiseIntializeCompleted() End Sub ''' <summary> ''' ''' Gets the image URI. ''' ''' </summary> ''' ''' <param name="tileLevel">Tile level.</param> ''' ''' <param name="tilePositionX">Tile X.</param> ''' ''' <param name="tilePositionY">Tile Y.</param> ''' ''' <returns>URI of image.</returns> ''' Protected Overrides Function GetTile(tileLevel As Integer, tilePositionX As Integer, tilePositionY As Integer) As Uri Dim zoomLevel As Integer = ConvertTileToZoomLevel(tileLevel) ' Prepare tile url somehow ... ' Dim url As String = CustomHelper.GetTileUrl(tileLevel, tilePositionX, tilePositionY) Return New Uri(url) End Function End Class | https://docs.telerik.com/devtools/silverlight/controls/radmap/how-to/howto-custom-provider | CC-MAIN-2017-47 | en | refinedweb |
Fences allow programmers to express a conservative approximation to the precise pair-wise relations of operations required to be ordered in the happens-before relation. This is conservative because fences use the sequenced-before relation to select vast extents of the program into the happens-before relation.
This conservatism is commonly desired because it is difficult to reason about operations hidden behind layers of abstraction in C++ programs. An unfortunate consequence of this is that precise expression of ordering is not possible in C++ currently, which makes it easy to over-constrain the order of operations internal to synchronization primitives that comprise multiple atomic objects. This constrains the ability of implementations (compiler and hardware) to reorder, ignore, or assume the absence of operations that are not relevant or not visible.
In existing practice, the flush primitive of OpenMP is more expressive than the fences of C++ in at least this one sense: it can optionally restrict the ordering of operations to a developer-specified set of memory locations. This is enough to exactly express the required pair-wise ordering for short lock-free algorithms. This capability isn’t only relevant to OpenMP and would be further enhanced if it was integrated with the other facets of the more modern C++ memory model.
An example use-case for this capability is a likely implementation strategy for N4392‘s std::barrier object. This algorithm makes ordered modifications on the atomic sub-objects of a larger non-atomic synchronization object, but the internal modifications need only be ordered with respect to each other, not all surrounding objects (they are ordered separately).
In one example implementation, std::barrier is coded as follows:); // Only need to order {expected, arrived} -> {epoch}. epoch.store(myepoch + 1, memory_order_release); } else while (epoch.load(memory_order_acquire) == myepoch) ; } private: int expected; atomic<int> arrived, nexpected, epoch; };
The release operation on the epoch atomic is likely to require the compiler to insert a fence that has an effect that goes beyond the intended constraint, which is to order only the operations on the barrier object. Since the barrier object is likely to be smaller than a cache line and the library’s implementation can control its alignment using alignas, then it would be possible to compile this program without a fence in this location on architectures that are cache-line coherent.
To concisely express the bound on the set of memory operations whose order is constrained, we propose to accompany std::atomic_thread_fence with an object variant which takes a reference to the object(s) to be ordered by the fence.
Under 29.2 Header <atomic> synopsis [atomics.syn]:
namespace std { // 29.8, fences // ... template<class... T> void atomic_object_fence(memory_order, T&&... objects) noexcept; }
Under 29.8 Fences [atomics.fences], after the current atomic_thread_fence paragraph:
template<class... T> void atomic_object_fence(memory_order, T&&... objects) noexcept;
Effect: Equivalent to atomic_thread_fence(order) except that operations on objects other than those in the variadic template arguments and their sub-objects are un-sequenced with the fence. The objects operands are not accessed.
Note: The compiler may omit fences entirely depending on alignment information, may generate a dynamic test leading to a fence for under-aligned objects, or may emit the same fence an atomic_thread_fence would.
The __cpp_lib_atomic_object_fence feature test macro should be added.
At the Kona meeting, the SG1 group expressed concerns about the current wording and suggested that it be reworked. The main concern was that the exclusive behavior expressed in the effect clause wasn’t fully correct.
The authors seek comments on the following approach.
The current definition from 1.10 (13) is:
An evaluation A inter-thread happens before an evaluation B if
- (13.1) — A synchronizes with B, or
- (13.2) — A is dependency-ordered before B, or
- (13.3) — for some evaluation X
- (13.3.1) — A synchronizes with X and X is sequenced before B, or
- (13.3.2) — A is sequenced before X and X inter-thread happens before B, or
- (13.3.3) — A inter-thread happens before X and X inter-thread happens before B.
An alternate wording could update (13.3.1) and (13.3.2) for the case where X is an object fence. In that case, these clauses apply if A’s and B’s modified memory location are named in the fence’s objects parameters.
This could be done by either:
A trivial, yet conforming implementation may implement the new fence in terms of the existing std::atomic_thread_fence using the same memory order:
template<class... T> void atomic_object_fence(std::memory_order order, T &&...) noexcept { std::atomic_thread_fence(order); }
A more advanced implementation can overload this for the single-object case on architectures (or micro-architectures) that have cache coherency with a known line size, even if it is conservatively approximated:
#define __CACHELINE_SIZE // Secret (micro-)architectural value. template <class T> std::enable_if_t<std::is_standard_layout<T>::value && __CACHELINE_SIZE - alignof(T) % __CACHELINE_SIZE >= sizeof(T)> atomic_object_fence(std::memory_order, T &&object) noexcept { asm volatile("" : "+m"(object) : "m"(object)); // Code motion barrier. }
To extend this for multiple objects, an implementation for the same architecture may emit a run-time check that the total footprint of all the objects fits in the span of a single cache line. This check may commonly be eliminated as dead code, for example when the objects are references from a common base pointer.
The above std::barrier example’s inner-code can use the new overload as follows:
if (result == expected) { expected = nexpected.load(memory_order_relaxed); arrived.store(0, memory_order_relaxed); atomic_object_fence(memory_order_release, *this); epoch.store(myepoch + 1, memory_order_relaxed); }
It is equivalently valid to list the individual members of barrier instead of *this. Both forms are equivalent.
Less trivial implementations of std::atomic_object_fence can enable more optimizations for new hardware and portable program representations.
In P0154R0 we propose to formalize the notions of false-sharing and true-sharing as perceived by the implementation in relation to the placement of objects in memory. In the expository implementation of the previous section we also showed how a cache-line coherent architecture or micro-architecture can elide fences that only bisect relations between objects that are in the same cache line, if provable at compile-time. These notions interact in a virtuous way because P0154R0’s abstraction enables reasoning about likely cache behavior that implementations can optimize for.
The example application of std::atomic_object_fence to the std::barrier object is improved by combining these notions as follows:
alignas(std::thread::hardware_true_sharing_size) // P0154); atomic_object_fence(memory_order_release, *this); // P0153 epoch.store(myepoch + 1, memory_order_relaxed); } else while (epoch.load(memory_order_acquire) == myepoch) ; } private: int expected; atomic<int> arrived, nexpected, epoch; };
By aligning the barrier object to the true-sharing granularity, it is significantly more likely that the implementation will be able to elide the fence if the architecture or micro-architecture has cache-line coherency. Of course an implementation of the Standard is free to ensure this by other means, we provide this example as exposition for what developer programs might do.
The semantics of fences mean that:
Therefore the program is well-defined (so far) and the assert(x) of 6 does not fire.
However, the un-sequenced semantics of the object fence also mean that:
Therefore the assert(w) of 7 makes the program undefined due to a data-race. | http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0153r0.html | CC-MAIN-2017-47 | en | refinedweb |
[
]
Hadoop QA commented on HDFS-929:
--------------------------------
-1 overall. Here are the results of testing the latest attachment
against trunk revision 1139.
> DFSClient#getBlockSize is unused
> --------------------------------
>
> Key: HDFS-929
> URL:
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 0.21.0
> Reporter: Eli Collins
> Assignee: Jim Plush
> Priority: Minor
> Fix For: 0.23.0
>
> Attachments: HDFS-929-take1.txt
>
>
> DFSClient#getBlockSize is unused. Since it's a public class internal to HDFS we just
remove it? If not then we should add a unit test.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201106.mbox/%3C1369141757.40845.1309036667481.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2015-14 | en | refinedweb |
Re: apache::asp maintenance
Expand Messages
- Joshua Chamas wrote:
> If you have other needs please let me know.Well, for inertia reasons, we're still maintaining a lot of CentOS 3 and
Red Hat Linux 9 type systems, with mod_perls of 1.99_07 and _09 vintage,
which was before the big Apache2:: namespace reorg. As a result, when
installing Apache::ASP, I have to manually edit ApacheCommon.pm to
remove all the '2's. Also, the Apache2::ServerRec module doesn't exist
on these systems, so I have to comment it out. Apache::ASP then works
just fine.
It would be spiffy if the library detected this situation and coped
automatically.
I tested this, and it works here:
eval {
# Try new Apache2 module requests first
require Apache2::RequestRec;
require Apache2::RequestUtil;
require Apache2::RequestIO;
require Apache2::Response;
require APR::Table;
require APR::Pool;
require Apache2::Connection;
require Apache2::ServerUtil;
require Apache2::ServerRec;
require Apache2::SubRequest;
require Apache2::Log;
};
eval {
# Alternative if above fails because system is old, but not
# so old that it's incompatible.
require Apache::RequestRec;
require Apache::RequestUtil;
require Apache::RequestIO;
require Apache::Response;
require APR::Table;
require APR::Pool;
require Apache::Connection;
require Apache::ServerUtil;
require Apache::SubRequest;
require Apache::Log;
} if defined $@;
---------------------------------------------------------------------
To unsubscribe, e-mail: asp-unsubscribe@...
For additional commands, e-mail: asp-help@...
- J. | https://groups.yahoo.com/neo/groups/apache-asp/conversations/topics/2202?l=1 | CC-MAIN-2015-14 | en | refinedweb |
20 April 2010 19:53 [Source: ICIS news]
WASHINGTON (ICIS news)--Emissions mandates on US industry before carbon capture and storage (CCS) is broadly available would force dismantling of much of the nation’s power, refining and chemicals production capacity, a Senate witness said on Tuesday.
Kurt House, a research fellow at the Massachusetts Institute of Technology (MIT), told the Senate Committee on Energy and Natural Resources that it would be mathematically impossible for the US to make significant cuts in greenhouse gas (GHG) emissions and continue to use its abundant coal and natural gas resources without wide-scale commercial deployment of carbon capture and sequestration systems.
House, who holds a doctorate in geoscience from Harvard University and specialises in carbon dioxide (CO2) and chemical processes, noted that existing US industrial infrastructure most responsible for CO2 emissions - power stations, refiners and chemical plants - represented an installed capital investment of more than $1,000bn (€740bn).
“It is arithmetically impossible to make stated cuts in our CO2 emissions without either dismantling the majority of that installed capital or by doing CCS,” House said.
House was testifying about legislation before the Senate Energy Committee designed to clarify ownership, liabilities and other issues confronting the goal of making massive underground injections of carbon dioxide captured from various industrial processes.
Separately, the US Senate is expected to begin consideration next week of a new climate bill that reportedly calls for a reduction of US emissions of CO2 and other greenhouse gases to a level 17% below the nation’s 2005 output by 2020.
“But without the large-scale deployment of CCS,” said House, “it is arithmetically impossible for us to use those reserves [of coal and natural gas] for productive purposes, while simultaneously making significant cuts in our greenhouse gas emissions.”
House said that technologies already developed by the ?xml:namespace>
The obstacles to wide-scale commercial deployment, he said, included systems integration and, most importantly, securing financing for large-scale CCS investments in an uncertain regulatory environment.
Other witnesses argued that commercial deployment of carbon capture and storage could only be achieved if the
Speaking for the Obama administration, Energy Department Assistant Secretary James Markowsky said that climate legislation that puts a cap on carbon “will provide the largest incentive for CCS because it will create stable, long-term, market-based incentives to channel private investment in low-carbon technologies”.
Mark Brownstein, deputy director of the energy programme at the Environmental Defense Fund (EDF) told the panel that “the most important thing we can do to accelerate our nation’s transition to a low carbon, clean energy economy is to put a price on carbon through federal climate and energy legislation”.
Brownstein said that CCS “is critical to the future of coal, and indeed, over the long term, natural gas as well”.
Brownstein agreed with House that all the technologies for commercial carbon capture and storage are available.
“What is missing is the market driver to cause companies to put the pieces together,” he said, “and this comes with a price on carbon.”
However, House added, "if the US CCS industry does not grow rapidly, then we will either be unable to make meaningful cuts in our CO2 emissions, or we will be forced to dismantle our country's significant installed base of CO2-emitting industrial facilities".
( | http://www.icis.com/Articles/2010/04/20/9352320/us-carbon-cap-could-dismantle-refining-chemical-sectors-witness.html | CC-MAIN-2015-14 | en | refinedweb |
Stevens Indicator - Summer/Fall 2013
SUMMER – FALL 2013 THE MAGAZINE OF THE STEVENS ALUMNI ASSOCIATION $'%/7+%-!/+&3&$'/ #$ " ! &%6+. !/7 "$'% % 7 $ 4 # +%8%/+!2%7+!6)'*% % /$%.$20+/)/)$'% ! " #$ !%& ! ' () ' * #%& + , ) + -% # + &.%/!. (" + #%0&$ 1 "+ 2 #% % 3 &$2%45 # 5%/$%1! ' * " ! 6 + # 7 !"#$#%!&'(')*!&+',' SATURDAY APRIL 5, 2014 THE PLAZA HOTEL NEW YORK, NY Join us at the second annual Stevens Awards Gala, when we will celebrate and honor remarkably accomplished alumni and friends of Stevens. B L AC K TI E For further information, please visit: FROM THE EDITOR ALUMNI OFFICE WELCOMES NEW LEADERSHIP An alumni a airs veteran and familiar face at Stevens has recently taken the helm of the Alumni O ce. Michael K. Smullen was appointed executive director of the Stevens Alumni Association e ective July 1, 2013. He succeeds Anita Lang, who was named executive director emeritus in June a er 47 years with Stevens. Michael Smullen Before joining Stevens, Smullen served as director of Alumni A airs at Berkeley College, with campuses in New York and New Jersey. Previously, he served as alumni director at Kean University in Union, N.J., and as alumni associate at Drew University in Madison, N.J. While at Berkeley, Smullen created a professional skills webinar series for alumni, launched a “Teaching and Learning” faculty and alumni panel series and coordinated Berkeley’s rst Alumni Reunion in New Jersey and New York. At Kean, Smullen created and implemented myKean, an online community designed for alumni to reconnect with one another for career networking, online information updates, and news and events updates. He also created both the rst bi-monthly electronic newsletter for Kean alumni and the Career Network & Mentor Program. Smullen is a graduate of Drew University. Besides Smullen, the SAA welcomed a new assistant director, Priya Vin. Vin joined the Alumni O ce in January, focusing on directing and supporting the regional, professional and a nity networks of Stevens alumni. Prior to this position, she was Priya Vin program manager with the Harvard Business School Club of New York. She received her bachelor’s degree in political science from the University of North Carolina at Chapel Hill and her master’s degree in Social Service Administration from the University of Chicago. “I’m excited to be working to strengthen and grow these alumni networks,’’ Vin said. “I look forward to connecting the 35,000 Stevens alumni to each other through their professional a liations, regional area and interests.’’ Smullen joined the Alumni O ce in 2011 and served as associate executive director before taking on his current leadership role. He says that he’s excited for this new challenge. “I am very honored to serve Stevens and the Stevens Alumni Association as Executive Director,” Smullen said. “For the past two years, I have had the pleasure of working with Anita Lang, Executive Director Emeritus, who has guided the alumni program for over 47 years. With Anita’s example as a guide, I look forward to working with the Executive Committee of the Association to strengthen and build our alumni programs, honor and promote our proud Stevens alumni tradition, and warmly welcome the growing number of new alumni every spring. “From every corner of the globe, a Stevens degree connotes not only educational excellence, but also innovative thinking and superior knowledge. I feel privileged to work with so many dedicated, passionate alumni.” During his time at Stevens, Smullen has helped spearhead the most successful Alumni Weekends in the university’s history, with recordbreaking attendance records for the last two years in a row. SUMMER – FALL 2013, VOL. 133, NO. 3 Executive Director Michael Smullen Anita Lang Art Direction/Design Flint Design.com Indicator Correspondence Additional Art Direction/Design Jason Rodriguez Executive Director Emeritus Editor Beth Kissinger Published quarterly by Associate Editor Lisa Torbic the Stevens Alumni Association, member of the Council for Advancement and Support of Education. © 2013 Stevens Alumni Association The Stevens Indicator Stevens Alumni Association Castle Point Hoboken, NJ 07030 Phone: (201) 216-5161 Fax: (201) 216-5374 Class log submissions General SAA inquiries alumni-log@stevens.edu Contact the Alumni Office Phone: (201) 216-5163 Fax: (201) 216-5374 alumni@stevens.edu Letters to the Editor editor@alumni.stevens.edu !"#"$"%"&"'""!"("(")"*"'"!"+"'")"& ,"-".""!"(","-"."!""!"/""!"("+"."! SUMMER – FALL 2013 1 FEATURES SUMMER – FALL 2013 THE MAGAZINE OF THE STEVENS ALUMNI ASSOCIATION THE MAGAZINE OF THE STEVENS ALUMNI ASSOCIATION SUMMER – FALL 2013 e House that !$ Stevens Built For the past two years, 60 Stevens students from the university’s four schools have been working on their entry in the U.S. Department of Energy’s Solar Decathlon. View the house and learn what will become of it a er the competition ends. By e O ce of Communications & Marketing DEPARTMENTS 1 ................................................From the Editor 4 ............................................Presidents’ Corner 6 ............................................ Grist from the Mill 32 ............................................................ Clubs 37 .................................................Graduate Log 38 .............................. Alumni Business Directory 40 ............................................................ Vitals OCTOBER 3-6 FOR MORE INFORMATION, VISIT STEVENS HOMECOMING !" Stevens on the Rise By Beth Kissinger, Editor OR TURN TO PAGE 34 Stevens has fared well recently in a wide variety of university rankings. Here's a look at some of the rankings and the methodology behind them. !# Tech Upgrades Information technology changes on campus will change the way Stevens students learn. By Paul Karr, Special to The Indicator "$ Historic Alumni Weekend By Lisa Torbic and Beth Kissinger, Editors Revisit the memories as Alumni Weekend 2013 smashed previous records for attendance. If you couldn’t make it, see what you missed. "% New Vice Provost on Campus By Lisa Torbic, Associate Editor 2 THE STEVENS INDICATOR e new vice provost for research is busy at work. Read about his longtime interest in Stevens and why he feels like he’s come home. "# Head of the Class Educator Peter Astor ’64 re ects on his career and Stevens as he plans his 50th reunion. By Lisa Torbic, Associate Editor Opposite: Stevens’ Solar Decathlon house in the summer, prior to the October 2013 competition. From left: Homecoming 2013 will feature food, fun, and, of course, sports; Stevens President Nariman Farvardin, center, greets alumni and guests at Alumni Weekend 2013; Luis Ortega ’85 earned his sixth degree from Stevens this spring. "& Distinguished Lecture Series e President’s Distinguished Lecture Series continues this fall with another high-pro le guest to speak to Stevens community members. !"#$%&#'()&* Members of Stevens’ Solar Decathlon team are seen with Ecohabit, their solar-powered house that they built on the Hoboken waterfront. The team will compete in the Solar Decathlon 2013 in Irvine, Calif., in October. Photo: Jeffrey Vock By Laura Bubeck, Stevens’ Assistant Director of News and Media Relations "' Problem Solvers By Beth Kissinger, Editor e Systems Engineering Research Center (SERC), based at Stevens, tackles complex problems at home and abroad. () Coming Home Stevens Homecoming 2013 promises a fun- lled weekend of sports, the Athletic Hall of Fame inductions, theater and much camaraderie. By Robert Kulish, Stevens’ Director of Sports Information & Events (% Another Degree from Stevens By Lisa Torbic, Associate Editor One student at the May commencement made history as he picked up his sixth degree from Stevens. !"#"$"%"&"'""!"("(")"*"'"!"+"'")"& ,"-".""!"(","-"."!""!"/""!"("+"."! SUMMER – FALL 2013 3 PRESIDENTS’ CORNER CALL TO ACTION FROM YOUR NEW SAA PRESIDENT is marks my rst time addressing you as your Stevens Alumni Association president. I’m excited to get to work and help President Farvardin as he guides Stevens to greater heights. Before I go any further, I want to thank my predecessor, Mark LaRosa ’93. He’s an incredibly dedicated alumnus, a true leader who does all he can to promote Stevens and the SAA. Mark, thank you for your dedication, your support and your caring spirit. Stevens is a better place because of your e orts. I recently attended the May 2013 Commencement ceremony, 50 years a er my own graduation, and I couldn’t help but re ect on how the world has changed. Tuition has increased at colleges and universities across the country, and those costs can seem overwhelming. I can only imagine how a student feels when they receive that acceptance letter: a mixture of pride and accomplishment, but also concern about how to pay for this great education. I have some good news for those students and families. First, Stevens has made great progress in controlling tuition costs. As President Farvardin mentions in his column, the university is doing a systematic analysis of its resources to better control costs and improve e ciency. Second, a Stevens education pays o for students in a big way. Our little “gem on the Hudson” was ranked 12th nationally by CNNMoney.com (in a study done by PayScale.com) for graduates earning the highest mid-career salaries, and U.S. News & World Report recently ranked Stevens 7th nationwide among elite universities in the percentage of STEM (science, technology, engineering and mathematics) degrees awarded. is is proof that a Stevens education is an investment well worth its cost. I recently read the op-ed piece that Dr. Farvardin wrote for e Star Ledger (June 2013). He said that we must do more to remind students and their families that STEM elds are an economic sector where thousands of good jobs exist. I couldn’t agree more. I’m amazed when I see how quickly my grandchildren pick up new technology. Today’s student seems quite capable of adapting to a new way of communicating and working. Stevens graduates are much valued by employers, because of their experiences in the classroom, extracurricular activities, and through stellar research opportunities. In every way, Stevens graduates are well-prepared for the world they enter, and they bring a high value to all employers. As evidence, I point to the 94 percent of the Class of 2012 (the latest data reported) who have accepted jobs or entered graduate school within six months of graduation. At Stevens, the job is the education of students. at education might seem like a high cost, but the abundant bene ts far outweigh this and pay o for students for the rest of their lives. Now it’s our turn to do our part as proud Stevens alumni and members of the SAA. Let me be speci c about what I’m asking you to do. Visit to update your information with the Alumni O ce. Attend an alumni event in your area, or contact alumni who live nearby. Recommend Stevens or host an event for students and parents. If you want to post a job for students and other alumni, send it to alumni@stevens.edu. Organize a reunion for your class, student group, sport, or Greek house. Volunteer for an SAA committee. And, of course, make sure to show your philanthropic support for Stevens this year. If you do any of those things this year, you will make a signi cant impact at Stevens. If you do several, you will ensure that the SAA’s mission will always be successful — “… to establish, maintain and cultivate among its members a sentiment of regard for one another, and of attachment to Stevens, and to promote in every way the interests of the Institute.” Stevens and the SAA are ready. Let’s roll up our sleeves and get to work. Per aspera ad astra, Tom Moschello ’63 President, Stevens Alumni Association tmoschello@alumni.stevens.edu 4 THE STEVENS INDICATOR PRESIDENTS’ CORNER OUR PLEDGE TO BE “STUDENT-CENTRIC,” EFFICIENT, EFFECTIVE Autumn in Hoboken brings cooler temperatures, brilliant fall colors, and tremendous excitement here on Castle Point! New students, the start of classes, fall sports, and Homecoming festivities all contribute to a renewed sense of energy within our campus community. “Back-to-school” themes are a perennial topic in the media during the fall. Recently, the national conversation has focused on the rising cost of higher education, with media coverage of ballooning student debt and increased attention to the return on investment (ROI) of higher education. Many across the country, from college presidents to parents to policymakers, are concerned about the value that universities are providing for their students and for society. I would like to take this opportunity to tell you what Stevens is doing to increase value, control costs and improve e ciency. Several principles have emerged through our strategic planning process that guide our e orts. Most relevant are: “E ciency and E ectiveness” and “Student Centricity.” rough the lens of E ciency and E ectiveness (E&E) and with the encouragement of the Trustees and engagement of the Cabinet, the leadership team is analyzing the ROI of the resources we expend on programs, infrastructure, human resources, and new initiatives. For example, we are asking ourselves questions such as, – Have we optimized our nancial aid investment? – Can we be more e cient with delivering academic and extracurricular programs? – Can we reduce expenditures and reallocate to mission-oriented priorities? – How do we compare to similar institutions in the budgetary allocations for key functions? e leadership team is going through a systematic analysis of our resources to review, reprioritize, and reallocate resources according to our Strategic Priorities. A second guiding principle that underlies this work is “Student Centricity.” As Stevens continues to align its educational o erings and research programs with the goals of the Strategic Plan, we are reevaluating everything we do on the basis of creating value for our students—in the classroom, through co- and extracurricular activities and supports, research experiences, top- ight facilities and technology infrastructure, and unparalleled career preparation. Over the last 30 years, the cost of higher education has risen signi cantly faster than personal income and even faster than healthcare. While Stevens is grappling with the same escalation that universities nationwide are facing, it continues to deliver an excellent value to students, particularly as compared to many other institutions. As an example, PayScale.com ranked Stevens #9 in the nation—ahead of universities such as Stanford and Harvard—for return on investment. Our job placement statistics rank among the best in the nation, with 94 percent of 2012 graduates (the latest data reported) having accepted jobs or entered graduate school within six months of graduation. As an institution, Stevens is more tuition-dependent than most of our peers: 58 percent of our operating revenue comes from tuition and fees, compared to 36 percent, on average, for our peers. Recognizing that we are largely tuition-driven, we are vigilant about our priority to be a Student-Centric university. We have made substantial progress in controlling tuition increases, while being prudent with the funds that families and donors invest in Stevens. Interestingly, even though we are #9 in the nation in ROI, we are 335th in the size of our endowment, resulting in far less exibility and ability to cope with nancial uncertainty. As we compete with top-tier institutions for talented students and for nationally recognized scholars and researchers, this nancial exibility is absolutely essential. As we continue to implement our Strategic Plan and enhance the quality and recognition of our programs, we will intensify our development activities, especially to increase the size of the endowment, to reduce our dependence on tuition and increase our nancial exibility. We are enormously grateful to alumni and friends who have re-engaged with Stevens over the last year. On behalf of Stevens students, current and future, we thank you for your generosity and support and the role you are playing to ensure a strong and vibrant University for the next generation. Per aspera ad astra, Nariman Farvardin President, Stevens Institute of Technology PresidentFarvardin@stevens.edu 201-216-5213 SUMMER – FALL 2013 5 GRIST FROM THE MILL STEVENS ATHLETICS GRABS ECAC TOP HONOR Stevens is the Eastern College Athletic Conference (ECAC) Jostens Institution of the Year for the 2012-2013 academic year, honored for its top academics and athletics, the ECAC Commissioner Kevin T. McGinniss announced this summer. Stevens was selected from more than 300 Division I, II and III programs that make up the largest athletic conference in the nation. e ECAC Jostens Institution of the Year is presented annually to the ECAC institution that best exempli es the highest standards of collegiate academics and athletic performance. is is Stevens’ second win; the university also received the honor in 2008. Stevens will be recognized at the 2013 ECAC Honors dinner in conjunction with the 25th Annual ECAC Convention and Trade Show that will be held on Sept. 29 to Oct. 1 at the Sea Crest Beach Hotel, North Falmouth, Mass. e selection process for this honor is based annually on participation and success of an institution’s athletic program in recognition of the following criteria: documentation and con rmation of academic success by the institution’s student-athlete population; the number of ECAC championships won; number of an institution’s teams selected for participation in ECAC championships; and National Association of Collegiate Directors of Athletics (NACDA) Lear eld Sports Directors’ Cup Points. “Stevens Institute of Technology is honored to be selected as the 2013 ECAC Institution of the Year,” Stevens Director of Athletics Russell Rogers said. “ e list of previous winners is truly an impressive group of institutions that represent both academic and athletic excellence, and we are thrilled to have been chosen as the recipient for the second time.” e Ducks participated in nine ECAC championship events in 2012-2013, claiming titles in the Men’s and Women’s Preseason Swimming and Diving Championships as well as the 2012 ECAC Mid-Atlantic Field Hockey Championship. In the classroom, 87 of Stevens’ studentathletes were named to the Empire 8 Athletic Conference President’s List. Recipients of this honor must earn a 3.75 grade point average or Stevens men’s soccer team celebrates its Empire 8 championship last fall. higher and must display positive conduct on and o campus. In addition, the Ducks placed 14 teams on the President’s List. To earn this distinction, teams must have a grade point average of 3.2 or higher. e Stevens’ teams on this list posted a combined grade point average of 3.31. “For Stevens student-athletes to be collectively recognized by the ECAC for their outstanding athletic accomplishments as well as their achievements in the classroom is truly an honor,” said Stevens President Nariman Farvardin. “It is no easy task to balance both athletics and academics at very high levels, but our student-athletes have remained true to our strategic priority: Excellence in All We Do.” e Ducks join Princeton University (three) and Williams College (six) as the only multiple winners of the ECAC award, which was established in 1995. —Robert Kulish 6 THE STEVENS INDICATOR GRIST FROM THE MILL GENEROUS GIFT TO BENEFIT NEW ADMISSIONS CENTER Two alumni have pledged $1.3 million for a modern, admissions center on campus. e gi from Virginia and Kevin Ruesterholz, both Class of 1983, will support the university’s planned growth in the size and selectivity of the student population. e donation will enable Stevens to renovate the historic Colonial House building and transform it into the Ruesterholz Admissions Center, a welcoming space for visitors. “Virginia and Kevin stepped forward once again in a signi cant way to help Stevens jumpstart the important work to support our university’s planned expansion,” said Stevens President Nariman Farvardin. “It is a great pleasure to know that such enthusiasm exists around our future.” “ e Ruesterholz Admissions Center will provide a wonderful environment to welcome prospective students and their families to campus and have a tremendously positive impact on our admissions activities,” added Stevens Vice President for Enrollment Management and Student A airs Marybeth Murphy. In May 2013, Virginia Ruesterholz, a longtime executive at Verizon, was elected the rst woman to hold the position of chairman of the Stevens Board of Trustees. renowned for her 30-year career at Verizon, where she retired in 2012 as executive vice president and president of Verizon Services Operations. Kevin Ruesterholz is an attorney and head of Ruesterholz Law LLC in Morristown, N.J. He has more than 25 years of corporate experience with AT&T and Lucent Technologies in international business development, contract negotiations, partnership development and business management. e couple has served as co-chairs of the Edwin A. Stevens Society, the university’s leadership society for annual giving, and also established a scholarship fund for Stevens engineering management students. With their gi , the Ruesterholzes are helping the university in its 2012-2022 strategic plan, e Future. Ours to Create. By 2022, Stevens plans to grow the undergraduate student population by 60 percent and increase its full-time graduate student population by 30 percent. Virginia and Kevin Ruesterholz, both Class of ’83, have made a generous gift that will bring a new admissions center to campus. “As students, alumni and leaders at Stevens, we have been thrilled to witness rsthand.” —Laura Bubeck STEVENS REPORTS RECORD $26 MILLION IN PHILANTHROPY FOR FY ’13 e second year of Stevens President Nariman Farvardin’s service to the University has concluded with a record $26.3 million in annual fundraising, the O ce of Development announced recently, making it possible to successfully complete the President’s Initiative for Excellence more than one year ahead of schedule. More than 4,000 gi s were made to Stevens during FY13. e $26.3 million total represents a 91 percent increase over the amount raised in the previous scal year, which itself was 189 percent greater than was raised the year before. Gi s made during FY13 included a $10 million commitment from Greg Gianforte ’83 and the Gianforte Family Foundation to support construction of a new Academic Gateway Complex; $1.3 million from Board of Trustees Chairman Virginia Ruesterholz ’83 and husband Kevin Ruesterholz ’83 to support creation of a new admissions center by renovating Colonial House; $1 million each from John Hovey ’57 and Frank Semcer ’65 to assist construction of the Gateway Complex; $2 million from the estate of Elbert Calhoun Brinning, Jr. to create e Viola Ward Brinning and Elbert Calhoun Brinning Endowed Chair in the Schaefer School of Engineering and Science; and $2 million from engineer, philanthropist and business executive A. James Clark to create the Nariman Farvardin Endowed Chair in Civil Engineering in the Schaefer School. e American Bureau of Shipping also awarded Stevens $3 million to support construction of a new civil, mechanical and naval laboratory complex in the Davidson Laboratory. Support received during the scal year included a total of 24 newly created scholarships and a total of 26 newly created or realized bequests, both record numbers for a Stevens scal year. To learn more about giving to Stevens, visit. —Paul Karr SUMMER – FALL 2013 7 CONFERENCES ON BIOMATERIALS, SANDY Hurricane Sandy made it clear that our transportation systems must be more resilient to disasters, according to the focus of a forum where transportation leaders, emergency managers and experts from major cities discussed what was learned in the storm’s a ermath. e symposium, held in June at Stevens, was co-organized with Northeastern University and designed to strengthen the nation’s mass transit, port and aviation infrastructure to better withstand and recover quicker from major disasters. Participants identi ed lessons learned from Sandy that should in uence a national agenda for building more catastrophe-resilient transportation systems in urban coastal communities. “A er Hurricane Sandy: Lessons Learned for Bolstering Infrastructure Resilience” is supported by a grant from the Alfred P. Sloan Foundation. Panel participants included Richard Serino, the number two o cial at FEMA who led their response to Sandy; and Richard Reed, the former Deputy Assistant to the President who coordinated the White House’s response. A nal report will be published in spring 2014. Also in June, Stevens hosted the Stevens Conference on Bacteria-Material Interactions. is event presented the latest research on a common, costly medical problem—implantassociated infection. More than 90 interdisciplinary scientists, engineers and clinicians from 10 countries gathered to address the scienti c, technical, and regulatory challenges facing the development of infection-resistant tissue-contacting biomaterials. e conference covered many topics, including biomaterials-associated infection, bio lms and antimicrobial resistance; new approaches to evaluating biomaterials e cacy; and computational microbiology and materials design. roughout the two-day meeting, researchers in biomedical engineering, chemical engineering, materials science, mechanical engineering, and computer science shared The Stevens Conference on Bacteria-Material Interactions, held in June, presented the latest research on implant-associated infection. presentations. For more on the Sandy conference, visit stevens-institute-technology-and-northeastern -university-co-host-hurricane-sandy. Learn more about the bacteria-material interactions conference at. edu/news/content/stevens-hosts-bacteriamaterial-interactions-conference-advanceinfection-resisting. —Laura Bubeck STEVENS WELCOMES NEW LEADERS ON CAMPUS New cabinet and board additions at Stevens will expand university leadership and advance institutional goals. Marybeth Murphy, a distinguished leader in higher education, became Vice President for Enrollment Management and Student A airs at Stevens in April. Murphy joins Stevens a er more than 30 years in university enrollment management and research and planning. Most recently, Murphy had served as Vice President for Enrollment Management and Student Success at the Fashion Institute of Technology (FIT) in New York City since 2009. Prior to joining FIT, Murphy was Assistant Vice President for Enrollment Management at Baruch College (CUNY). Before Baruch, she served for 20 years in various roles at New York University. Two alumni have also joined the Stevens leadership, as John Dearborn ’79, M.S. ’81, and Steven Feller ’70 have been selected for the Stevens Board of Trustees. Dearborn is senior vice president of Williams and Williams Partners, a leading diversi ed master limited partnership focused on natural gas transportation; gathering, treating and processing; storage; and natural gas liquid (NGL) fractionation. Feller is the founder and president of Steven Feller, P.E., Inc., a full service engineering rm with vast experience and innovative expertise in municipal, commercial, large residential and industrial type projects. 1 Marybeth Murphy 2 John Dearborn ’79, M.S. ’81 3 Steven Feller ’70 8 THE STEVENS INDICATOR GRIST FROM THE MILL STAHLEY RETIRES AFTER 25 YEARS OF SERVING STUDENTS A well-known advocate for Stevens students—whose thoughtful leadership, care and concern has touched two generations of alumni—retired this past June. Joseph Stahley spent 25 years at Stevens, where he was a familiar face and mentor to students and alumni. He served rst as director of the O ce of Cooperative Education and the O ce of Career Development, where he was instrumental in building a hallmark program which has prepared Stevens students for successful entry into their careers, and from which the university’s reputation as a premier provider of top talent has grown. en, as Assistant Vice President for Student Development for the past 11 years, he oversaw a number o ces that have the health, safety, personal development, and professional wellbeing of Stevens students at their core, expanding his portfolio to include Student Life; Student Health Services; Student Counseling, Psychological and Disability Services; the Stevens Technical Enrichment Program (STEP); and the Campus Police. Stahley’s commitment to Stevens extended to the important campus events and operations he oversaw, including Commencement, Convocation, Parents Weekend and Freshman Orientation, among other leadership roles over the years. His leadership at the helm of the Stevens Emergency Response Team, which worked to restore operations and maintain safety, security and communications on campus during Superstorm Sandy and its a ermath in fall 2012, was critical to the well-being of thousands of students and community members. “Joe’s leadership during that crisis was instrumental in what was a dangerous and di cult time,” said one team member, who recalled his style as thoughtful, deliberate, and above all, caring. Dozens of his Stevens colleagues gathered in mid-June on campus to thank him and wish him well. Stahley’s family—his mother, Nancy, wife Barbara, M.S. ’07, son Joseph, III ’01, daughter Johanna Stahley Gordon and young grandson Oliver Gordon—shared the celebration with him. Colleague a er colleague praised Stahley for his calm and thoughtful manner, his wisdom, his establishment of a very successful Cooperative Education program, his sense of fairness. In a moving tribute, Stevens Police Chief Tim Gri n saluted Stahley – the only time he has ever saluted a civilian, he said. Alumni learning of his retirement called Stahley a true career mentor. “As with all the co-op o ce sta , he made sure he got to know everyone’s name, where you worked, your strengths, and always had our best interests in mind,” says Michael Andreano ’96. “His passion and e orts are what elevated the co-op program into the worldclass program it is today.” At his campus gathering, Stahley enjoyed the moment. “We are incredibly fortunate in our jobs; we focus on some of the most talented students anywhere,” he said. He praised his colleagues for the personal investment that they’ve made in their jobs, and noted with pride that the “student experience” at Stevens has been transformed for the better over the past 20 years. “We have worked to make ‘student centricity’ a reality here,” Stahley said. Stahley will continue as coach of the Stevens men’s golf team. With this and two grandchildren—and two more on the way— Stahley said that he’ll keep busy. He looks forward to more time for golf and long bike rides, he said, and moves on with a sense of gratitude. —Beth Kissinger Joe Stahley, far right, poses with his family at his retirement celebration at Stevens. From left: mother Nancy; son Joseph, III ’01; daughter Johanna Stahley Gordon; and wife, Barbara, M.S. ’07, who is holding their grandson Oliver Gordon. SUMMER – FALL 2013 9 P U E S O L C S T I R O F Y E C O H A BI T IS R E A D A FTER MONTHS OF INTENSE WORK AND TWO YEARS OF PLANNING, THE STEVENS ENTRY INTO THE U.S. DEPARTMENT OF ENERGY’S 2013 SOLAR DECATHLON IS FINALLY READY FOR THE COMPETITION. Ecohabit, an intelligent, energy-e cient and sustainable solar-powered home, is the University’s largest collaborative e ort and Photos by Jeff Vock and Cou rtney Gnash ’16 united 60 interdisciplinary students to build the smart house. e competition runs across two weekends, from Oct. 3-6 and Oct. 10-13, and is free and open to the public. It will be held at the Orange County Great Park, Irvine, Calif., from 11 a.m. to 7 p.m. More than 20 students from the Ecohabit team plan to attend. e students, both undergraduates and graduate students, combine expertise from Stevens’ four schools in a wide variety of elds – mechanical, electrical, chemical and civil engineering; architecture, design and visual arts; business and technology and engineering management; and computer science. A er the competition ends, the house will get a second chance at life. Stevens will donate it to California State University San Marcos (CSUSM) to use as its new Veterans Center. CSUSM, near Camp Pendleton and several other military installations, has the highest percentage of student veterans per capita of any California State University campus. About 10 THE STEVENS INDICATOR T IO N I T E P M O C N O L H T A C E AT S O L A R D sSD2013 Stevens SD2013 @ Steven pretty hard. #Ecohabit 900 CSUSM students are a U.S. veteran, service member, reservist or dependent. Stevens’ entry in the most recent Solar Decathlon in 2011, Empowerhouse, won awards in two Solar Decathlon categories. —Stevens’ O ce of Communications & Marketing Stevens SD2013 @ StevensSD2013 to @City_of_Irvine #eastcoast #westcoast @CityofHoboken #SD2013 Steve Momorella @ GreenNewsDay July 24 The best pla ce for solar power is.. . Ne - Mother Nature Netwo rk w Jersey? T For more information about Ecohabit and the Solar Decathlon 2013, visit SUMMER – FALL 2013 11 U.S. News & World Report may publish the country’s best-known college rankings guides. But many other media outlets also crunch numbers, create rankings and provide a perspective on what makes a quality education. The Princeton Review, for example, chooses the “Best Career Services” and the “Best College Towns.” PayScale.com reports on schools that provide a strong return on investment. S o how’s Stevens faring in the various rankings? Over the past year, the news has been good. Indeed, a number of recent rankings – from a variety of sources – show that Stevens is making remarkable progress and placing among the top universities in the country in many important areas. The university has enjoyed not just one or two top rankings, but more than a dozen top national rankings and recognitions. They cover everything from academic programs and return on investment to the university’s music program and the accomplishments of its student athletes. ese various rankings measure di erent areas and use a variety of methodologies, some of which evolve from year to year. But all measure, in their own way, the quality of an education. And all seem to reveal a trend of progress at Stevens. One recent remarkable accolade—Stevens was ranked #9 in the country for its 30-year net return on investment for graduates—was reported by PayScale.com and released this past spring. Stevens jumped from 24th to 9th in this ROI ranking and joined the Massachusetts In- stitute of Technology and Cal Tech in the top 10. Indeed, Stevens placed ahead of such prestigious universities like Harvard University and Stanford University. Stevens was also the fastest rising university in the 2013 U.S. News & World Report’s popular “Best Colleges” guidebook. In 2013, Stevens enjoyed the greatest improvement of all universities on the Top 100 colleges and universities list, rising 13 spots to #75. At press time, the 2014 “Best Colleges” list was due to be released in late September. U.S. News also recognized Stevens in 2013 as 7th in the country among elite universities in the percentage of degrees awarded in the STEM (Science, Technology, Engineering and Mathematics) elds. e Princeton Review placed Stevens 13th in the nation for career services, in its 2013 “ e Best 377 Colleges.” Last fall, in another PayScale.com study published on CNNMoney.com, Stevens alumni ranked 12th in the nation for the highest mid-career salaries, with alumni with 10 years of experience earning an average of $112,000 per year. In early 2013, Stevens was named 12 THE STEVENS INDICATOR BY B E T H K IS SING ER , EDITO R Stevens receives top college rankings in many important areas as having the second most innovative college music program in the country, second only to the esteemed Berklee College of Music, by the blog eBestColleges (). And in July, Stevens received word of yet another prestigious recognition—it had been chosen as the Eastern College Athletic Conference (ECAC) Jostens Institution of the Year for 2012-2013. e honor is presented annually to the ECAC institution that best exempli es the highest standards of collegiate academics and athletic performance; Stevens was selected from more than 300 Division I, II and III programs. Stevens also received the award in 2008. (See story on p. 6.) “ ese rankings demonstrate what Stevens alumni and those who hire them have known for some time—that a Stevens education provides an outstanding foundation for a successful career and a tremendous value for our graduates and for the companies and organizations in which they work,” said Stevens President Nariman Farvardin. “ ese accolades are a symbol of the growing recognition that Stevens is a major force in American technical education.” Stevens’ goal and challenge, of course, is to accelerate the trend of improvement in the quality of the education and experience that it provides for students—the rankings will follow, Farvardin said. e various rankings measure important areas that are critical to a university’s success, including graduation rates, student selectivity, faculty resources, career placement and ROI. Stevens is already measuring its e ectiveness in many of these areas and is striving to further improve upon them, in its 10-year Strategic Plan, e Future. Ours to Create., launched in 2012. (Visit.) “One would not structure a university improvement plan for the purpose of doing well in the rankings,” Farvardin said, “but by making progress toward our strategic goals, we will make measurable improvements that will manifest themselves in improved rankings.” Stevens alumni and friends can certainly accelerate Stevens’ progress and increase its national prestige. Indeed, alumni are a key factor, Farvardin said. “Alumni in in uential positions in industry, research and government can help this STEVENS HAS RECENTLY ENJOYED A NUMBER OF TOP NATIONAL RANKINGS AND HONORS. HERE’S A SAMPLING: Ranked 9th in the nation for 30-year net return on investment for graduates by PayScale in 2013 Ranked as the 2nd most innovative college music program in the nation by TheBestColleges in 2013 universities in percentage of STEM degrees awarded in a 2013 report from U.S. News & World Report Ranked 7th in the nation among elite Ranked 13th in the nation for career services by The Princeton Review ’s 2013 edition of “The Best 377 Colleges” Best National Universities in the 2013 edition of “Best Colleges” by U.S. News & World Report number of engineering master’s degrees awarded in 2012 by the American Society for Engineering Education (ASEE). with the highest mid-career salaries by CNNMoney.com in 2012 Ranked 75th in the category of Ranked 7th in the nation for the Ranked 12th in the nation for alumni Listed among the top 11 schools for insurance and technology talent in Insurance & Technology in 2012 Awarded honorable mention in Wall Street & Technology’s 2012 list of the 11 schools that capital market executives list as their favorite in hiring computer programming and engineering graduates Report’s 2012 “Short List” of 10 national universities that produce the most interns Ranked on U.S. News & World Stevens has become more selective, accepting 38 percent of undergraduate applicants for Fall 2013. The university will continue to improve selectivity, projecting a 33 percent accept rate by Fall 2022. Source for graphs: Stevens Institute of Technology SUMMER – FALL 2013 13 SAT scores for Stevens students have improved over the past several years, with the Fall 2013 incoming class scoring a range of 1210-1390 (middle 50 percent, combined critical reading and math), with the goal of accepting students with higher scores in the future. Top photo: A Senior Design team at the Stevens Innovation Expo this spring. Above: Students hard at work. important e ort by promoting Stevens within their professional circles and broadening the network of friends of Stevens,” Farvardin said. “We are a relatively small institution, but we need to cast a big shadow.” To read more about Stevens’ top national rankings and recognitions, visit. stevens.edu/sit/about/rankings-recognition. W H AT D O R A N K I N G S R E A LLY M E A S U R E ? @*%' F+&%' 4".+%)2' ,-' 3#+4%.5+)2' ."#$+#75' *"4%' )*+5' +#' ;,66,#S' )*%2' 6%"53.%'5,6%'"5?%;)',-'F*")'6"$%5'"'T3"<+)2'%&3;")+,#P'A3)'F*")')*%2' 6%"53.%' "#&' )*%' F"2' )*%2' 6%"53.%' 4".2' 7.%")<2P' U%.%G5' "' <,,$' ")' )F,' ."#$+#75'"#&')*%+.'6%)*,&,<,7+%5P @*%' .%;%#)' 9"2:;"<%P;,6' !VW' 5)3&2X' )*")' ."#$%&' :)%4%#5' Y(' +#' )*%' #")+,#X'5*,F%&')*")'"':)%4%#5'%&3;")+,#'?"25',--'+#'"'8+7'F"2P Z,.'/012%".'#%)'!VWX':)%4%#5'7."&3")%5'7.,55%&'[>P\]'6+<<+,#'+#';3631 <")+4%'%".#+#75'"8,4%')*%';,5)',-')*%+.'8";*%<,.G5'&%7.%%'"-)%.'/0'2%".5' +#' )*%' F,.$' -,.;%X' )*%' 5)3&2' 5*,F%&P' 9"2:;"<%' %4"<3")%&' 6,.%' )*"#' >XJ00';,<<%7%5'"#&'3#+4%.5+)+%5'";.,55')*%'LP:P'"#&';,6?".%&';,5)5'),' 6%&+"#' "<36#+' <+-%)+6%' %".#+#75P' ^")"' F"5' ;,<<%;)%&' -.,6' %6?<,2%%5' F*,' ;,6?<%)%&' 9"2:;"<%G5' %6?<,2%%' 53.4%2' "#&' ,#<2' %".#%.5' F+)*' 8";*%<,.G5'&%7.%%5X'#,)')*,5%'F+)*'"&4"#;%&'&%7.%%5X'F%.%'+#;<3&%&P' Z,.' 6,.%' ,#' )*%' 9"2:;"<%' 5)3&2X' 4+5+)' *))?S__FFFP?"25;"<%P;,6_&")"1 ?";$"7%5_;,<<%7%1.,+1=0>/_6%)*,&,<,72P 9%.*"?5')*%';,3#).2G5'6,5)'F%<<1$#,F#';,<<%7%'."#$+#75'+5')*%'!"#"$%&'($ )$ *+,-.$ /&0+,1>(' HA%5)' B,<<%7%5I' 73+&%X' F*+;*' *"5' 5%?".")%' +553%5' &%&+;")%&'),'3#&%.7."&3")%'"#&'7."&3")%'?.,7."65'+#')*%'LP:P'W#'=0>/X' :)%4%#5G'3#&%.7."&3")%'."#$+#7'+6?.,4%&'6,.%')*"#'"#2',)*%.'3#+4%.1 5+)2',#')*%'),?'>00'<+5)X'`36?+#7'-.,6'aa)*'),'CJ)*P' @*%' 63;*1?38<+;+b%&' 3#&%.7."&3")%' ."#$+#75' +#;<3&%' >]' 6%"53.%5' ,-' T3"<+)2' +#' 5%4%#' 8.,"&' ;")%7,.+%5S' ?%%.' "55%556%#)c' 7."&3")+,#' "#&' "<36#+'7+4+#7c'"#&X'-,.'#")+,#"<'3#+4%.5+)+%5'<+$%':)%4%#5X'7."&3")+,#'.")%' ?%.-,.6"#;%'"#&'*+7*'5;*,,<';,3#5%<,.'3#&%.7."&3")%'";"&%6+;'.%?31 )")+,#'.")+#75P'L#+4%.5+)+%5'5386+)'"'4".+%)2',-'&")"d-.,6')*%'+#;,6+#7' ;<"55G5':M@'5;,.%5'),')*%'+#5)+)3)+,#G5'7."&3")+,#'.")%d-,.')*%'."#$+#75P' :;,.%5'-,.'%";*';")%7,.2'".%';,6?3)%&'";;,.&+#7'),'"'-,.63<"'),'"..+4%' M' 5;*,,<G5' .%?3)")+,#' +5' $%2P' M' 3#+4%.5+)2G5' 3#&%.7."&3")%' .%?3)")+,#' '";;,3#)5'-,.')*%'<".7%5)'?%.;%#)"7%',-')*%'5;*,,<G5',4%."<<'5;,.%X'")'==PJ' ?%.;%#)P'V)*%.'),?'-";),.5')*")'";;,3#)'-,.'=0'?%.;%#)'%";*',-')*%'5;,.%' ".%' 5)3&%#)' 5%<%;)+4+)2' -,.' )*%' +#;,6+#7' ;<"55c' -";3<)2' .%5,3.;%5c' "#&' 7."&3")+,#'"#&'.%)%#)+,#'.")%5P'@*%'"<36#+'7+4+#7'.")%d)*%'?%.;%#)"7%' ,-'3#&%.7."&3")%'"<36#+'F*,'F%.%'5,<+;+)%&'82'"#&'7"4%'),':)%4%#5d 6"$%5'3?'J'?%.;%#)',-')*%',4%."<<'5;,.%P'' @*%'.%?3)")+,#'5;,.%'+5'8"5%&',#'53.4%25'5%#)'),';,<<%7%'"#&'3#+4%.5+)2' ?.%5+&%#)5X'?.,4,5)5X'&%"#5',-'"&6+55+,#5X'"#&'*+7*'5;*,,<';,3#5%<,.5P' D";*'+#&+4+&3"<'.")%5')*%'?%%.'5;*,,<5G'";"&%6+;'?.,7."65',#'"'5;"<%' ,-'>'N6".7+#"<O'),'J'N&+5)+#73+5*%&OX'"#&')*%'5;*,,<G5'5;,.%'+5')*%'"4%."7%' 5;,.%'"6,#7'"<<'F*,'.")%&'+)P'W#')*%'=0>/'!#%)*/'."#$+#7X':)%4%#5'.%1 ;%+4%&'"'=PC',3)',-'JX'+#&+;")+#7'.,,6'-,.'7.,F)*'+#')*+5'+6?,.)"#)'6%).+;P' Z,.'6,.%',#')*%'L:Kef!'."#$+#75X'4+5+)'*))?S__FFFP35#%F5P;,6_8%5)1 ;,<<%7%5' ''BC&13$D5((56<&, 14 THE STEVENS INDICATOR For more about Stevens’ top national rankings visit !"#$%&"#'("#)*%&"+ ,-#%-&#,..#'("#/0"1&2 Stevens edged out 300 Division I, II and III teams to grab top honors for academic and athletic excellence from the nation’s largest athletic conference. Our student-athletes’ success is made possible in part by the support of our alumni, many of whom also participated in athletics programs—or spiritedly cheered on the Ducks from the stands —during their years on Castle Point. Congratulations to Stevens Athletics for bringing home the ECAC Jostens Institution of the Year award for the second time in program history. 3"4%*ȷ"11"-7"2# Make a gift in support of our teams and student-athletes at stevens.edu/makeagift. SUMMER – FALL 2013 15 Wired & Stevens embarks upon series of leading≠edge IT upgrades to transform teaching, learning, research 16 THE THE STEVENS INDICATOR Inspired By Paul Karr, Special to The Indicator I t s well known on Castle Point that Stevens was the rst signi cant American edu cational institution, in 1982, to require its students to purchase and use personal computers in classrooms. The University then pioneered a second technological innovation during the early 1980s as well, constructing one of the nation s very rst intranets and connecting academic departments within Stevens in ways that had not previously been possible. During the three decades since, however, weather, other institutional priorities, and the rapid pace of technological advances have all taken a toll on these systems. “My eyes were opened,” says David Dodd, Vice President for Information Technology and Chief Information O cer, who joined Stevens in August 2012 as the University’s rst cabinet-level IT o cer and immediately began surveying the IT landscape across campus. “I found that our infrastructure and systems have begun aging out, and that this may place us at something of a competitive disadvantage.” Now that will change, and quickly. e University has begun an ambitious upgrading and enhancement of campus technology that, when completed, will transform the learning environment at Stevens. is effort will reinvent campus technology both on the ground and under the hood to expand and speed up wireless Internet capabilities, replace aging computer components, and update administrative so ware campus-wide — while also creating new, state-of-the-art ways of receiving Stevens email and phone messages and teaching and taking courses at the University. Partial support for the upgrades will come from the state of New Jersey, whose legislature agreed in June to award Stevens $7.25 million for IT upgrades from a newly released $1.3 billion pool of state higher education funds. Stevens had submitted proposals that were competitively reviewed, alongside proposals from nearly every other institution of higher learning in the state. “ is support will allow more in New Jersey to access a national quality STEM (science, technology, engineering and mathematics) education,” says Dodd. “ e Governor and the Legislature wanted explicitly to expand STEM education in New Jersey, and we presented them with two excellent ways to make that happen.” e state funding is speci cally earmarked for two major IT initiatives. One is the creation of a so-called ‘virtualized learning environment,’ which will e ect a revolution in course delivery at Stevens (making it far easier, for example, to access course material via personal computers, smartphones and other devices) without over-taxing existing computing systems. is enhanced access will mean fewer, or reduced need for, traditional computer labs scattered in pockets across campus, and fewer students toting heavy laptops to class. Stevens currently issues laptops to all incoming freshmen; that policy will likely be reviewed in light of these exciting new technologies. Web sites campus-wide will be redesigned in cleaner user interfaces, and migrated to a single shared platform speci cally selected for ease of use. And students will be enabled to tailor their learning environments individually for the rst time, using new so ware tools to access whichever mixtures of lecture videos, notes, team collaborations, library resources, virtual presentations and other materials work most e ectively for each. “ is personalized learning model is a new leading-edge approach to the way courses are delivered,” explains Dodd. “ e technology didn’t exist three years ago; now it is coming of age. And Stevens can be out in front.” Course material will become increasingly accessible to a national, and even global, cohort, building upon the success of Stevens’ award-winning WebCampus program. “ at’s what we’re known for,” adds Dodd. “Academic excellence and innovation. We will simply do more toward that end.” e second initiative will unify all campus communications systems — a process that requires the replacement of nearly all networking electronics and the installation of a new beroptic infrastructure (hubs, switches, routers and cables) on campus, as well as the replacement of the entire traditional phone system at Stevens. e existing, branch-type phone system will be replaced with VoIP (Voice over Internet Protocol) technology that will enable voice and video communications to travel over both the campus network and the Internet. During the process of unifying and updating these systems, wireless access will become much faster and more accessible across campus, and technology in classrooms and laboratories will similarly bene t from a boost in power and e ciency. “ ese projects are large and complicated, but they are also very exciting, and everyone at Stevens will ultimately bene t,” says Dodd. “A er completing this work, we will be at parity with or — in some cases — ahead of our peer schools and be a national leader in the use of educational and information technology. I think that is quite appropriate, given our name and our stated mission.” e technology upgrades are particularly SUMMER – FALL 2013 17 well timed, says Dodd, because they will support Stevens’ new Strategic Plan, e Future. Ours to Create. e Plan speci cally calls for “a campus IT and physical infrastructure be tting a worldclass technological university” and notes how “just as technology is our past, so will it differentiate us in the future, o ering a distinctive educational experience to our students, driving our research and scholarship, leading us to devise novel teaching and learning methods and enhancing our administrative, outreach and communication activities.” Many of the retro ts directly involve classroom instruction and course delivery. “ ese upgrades, once complete, will absolutely transform the learning environment at Stevens, positioning us among the nation’s leaders in educational technology,” noted Provost and University Vice President George Kor atis. “Coursework will be delivered in ways we have never before seen, not only conferring Dodd expects all major upgrades to be completed within four years — and many, including the new Internet telephony project, within 12 to 18 months. Disruption to campus activities, he notes, will be kept to a minimum: with work occurring during weekends, overnights, and academic breaks. An external security review of Stevens’ network and systems infrastructure is already under way, and a second assessment by an outside expert will also be performed one year later. Given the quantity of proprietary research taking place on campus on a daily basis, “we are required to worry about this constantly,” says Dodd. e university is studying creation of an additional data backup server site located outside the Northeast in order to provide greater operations redundancy in the event of natural disasters and other crises. Stevens will also study improvement of the University’s high-performance computing resources, a project that has not yet been funded but is high on Dodd’s wish list. at project is critical, he says, because as the analysis of “big data” blossoms at Stevens — including a new Financial Systems Center, the nation’s rst undergraduate major in quantitative nance, the nation’s rst master’s programs in business intelligence and analytics, and increasing emphasis on healthcare research and data analysis — increased computing power will be required. Stevens currently owns and operates two Cray supercomputers, but those resources are not currently evenly distributed or accessible across campus for researchers. “ ose resources need to be made more accessible to all researchers on campus,” Dodd said. “We are continuing to make progress on this, although it will be di cult to complete without external funding support.” Still, Dodd has plenty on his plate already. Taken together, the scheduled upgrades will add up to a better-educated and much betterconnected Stevens, delivering highly trained graduates to the region like never before. “When we are done,” he concludes, “more residents of New Jersey will be able to access a top-quality STEM education. e state has said, explicitly, that they wish to expand STEM education in New Jersey to help strengthen our workforce, our economy, our global competitiveness, and our standard of living. “With these technology upgrades, we are delivering a number of excellent new ways to help make that happen.” Updating software In addition to the two initiatives funded by state bond monies, Stevens will also launch a ra of additional overhauls of its own. e list of projects include these initiatives: New, state-of-the-art systems and so ware packages will be purchased and installed across campus, including a new student information system, new human resources systems, and new systems for student registration, alumni contact management, nancial aid administration, recruitment, and other key processes. “ The Governor and the Legislature wanted explicitly to expand STEM education in New Jersey, and we presented them with two excellent ways to make that happen.” –David Dodd some of the very best STEM educations in America upon our students but also preparing the state of New Jersey for a robust future as we supply its workforce with ever-better educated and trained STEM graduates, ready to grow and transform this state’s powerful economy. “ is transformation will not be possible without the constant collaboration of our wonderful and committed faculty, and I know they will lead the way as we deploy and integrate the technological changes being made.” As it upgrades, Stevens will also utilize the best available so ware solutions, modeling the University a er corporate leaders such as Boeing and Volvo, in order to maximize its computing capabilities. “We are doing many, many new things and they are graphic-intensive, compute-intensive,” says Dodd. “You need considerable computing power for this. One solution is to build a new server farm. But we do not plan to do that. e leading-edge so ware we deploy will supply us with the added capacity.” Most of the University’s systems so ware will be moved to a ‘cloud’ storage model within the next 12 months, a process that has already begun with the University’s nancial systems. Once updated systems are in place, Stevens will begin deploying more sophisticated business analytics and intelligence to analyze data in order to strategically direct the marketing, recruitment, fundraising, communications and other e orts that support the Strategic Plan. Systems will be assessed, maintained and upgraded more regularly. “You can’t just throw money at these challenges one time, without doing life-cycle replacement later,” explains Dodd. “Otherwise, in ve years, we will be back in the same place as we are now.” 18 THE STEVENS INDICATOR Call for Nominations ALL NOMINATIONS DUE BY OCTOBER 18, 2013 !"#$#%!&'(')*!&-./0 Nominations are now being accepted for the 2014 honorees. This continuing program helps to strengthen a rich tradition of excellence at Stevens while recognizing the successes and accomplishments of alumni and friends. Stevens Honor Award This award recognizes an individual for notable achievement in any field of endeavor. SUBMITTING YOUR NOMINATION IS FAST AND EASY! This award recognizes an alumnus/a for extraordinary successes and noted achievements through entrepreneurial or innovative endeavors. Charles V. Schaefer, Jr. ’36 Entrepreneur Award WEB Distinguished Alumni Awards stevens.edu/awards These awards recognize outstanding alumni/ae for their success in fields of engineering, science and technology, business and finance, arts and letters, academia and government, and extraordinary community or humanitarian service. Young Alumni Achievement Award This award recognizes an undergraduate alumnus/a from the last 15 years who has demonstrated outstanding professional achievement. alumni@stevens.edu E≠ MAIL Lifetime Service Award This award recognizes an alumnus/a for sustained, dedicated service to Stevens. MAIL Friend of Stevens Award This award recognizes a non-alumnus/a who has demonstrated significant commitment to and extraordinary support of Stevens. Stevens Alumni Association 1 Castle Point Terrace Hoboken, NJ 07030 Outstanding Contribution Award This award recognizes an alumnus/a for one or more significant, recognizable contributions to Stevens. FAX International Achievement Award 201≠ 216≠ 5374 This award recognizes an alumnus/a who has demonstrated significant international achievement and impact. To learn more about the award categories or cast your nomination, please visit: !"#$#%!& '(')*!& +',' APRIL 5, 2014 THE PLAZA HOTEL NEW YORK, NY SUMMER – FALL 2013 19 William Knowles, left, and Vincent Baldassari, both Class of ’63, sneak a peek at The Link before the Alumni Dinner Dance. 20 THE STEVENS INDICATOR H U N D R E DS R E T U R N TO C A S T L E P O I N T TO R E U N I T E A N D R E D I S COV E R S T E V E N S Alumni Weekend Breaking attendance records for the second year in a row, more than 800 people returned to campus for Alumni Weekend 2013 this past spring. Steve Bryk ’73 returned to campus to celebrate his 40th reunion, his rst time back on Castle Point since graduation. He marveled at some of the changes to the school and curriculum. “I’m glad to see that Stevens is not just standing still, that the school has kept up with the times,’’ he said, as he looked at the dozens of computers in the Hanlon Financial Systems Laboratory, which combines a state-of-the-art nancial systems research training facility with a so ware engineering lab for development and a cybersecurity testing facility. Bryk was just one of many alumni who shared this sentiment of appreciation and awe in the face of progress at his alma mater. George Tompkins ’57 had never stepped foot inside Davidson Laboratory, a premier research facility in the elds of naval architecture and marine engineering, until he took a student-led campus tour during the weekend’s events. He said he was most excited to see the two wave tanks inside of Davidson, calling them “amazing.’’ e weekend o ered something for everyone, from the youngest future Ducks in attendance, who scrambled through the Lollipop Run, to more recent alumni who enjoyed the Beer Tasting event, and older alumni who visited their fraternities to reminisce about their days of brotherhood on Castle Point during the Greek Open Houses, a new addition to the weekend schedule. “Alumni Weekend 2013 was a great success,’’ said Michael Smullen, the Stevens Alumni Association’s then-associate executive director. “We added a number of new events that were very well received, and listened to feedback from last year’s event attendees, who requested a return (to campus) of the Saturday night Alumni Dinner Dance.” Other new events this year included the Hoboken Sampler, featuring culinary specialties from local restaurants, and the Alumni Block Party. e golden anniversary class, the Class of 1963, enjoyed a 50th Reunion dinner cruise along the Hudson River on Friday, May 31. Several dozen classmates and their guests took in breathtaking views of the Manhattan skyline at night. e evening—and the mood—was warm, as classmates recalled good times of more than 50 years ago and life since then. “Everyone seems to be relaxed with each other,” said Joe Polyniak ’63, one of the reunion’s organizers. “It’s an instant picking up of where we le o . at’s the best part—just watching some of those interplays.” BY LISA TORBIC & BETH KISSINGER, EDITORS PHOTOS BY JEFF VOCK & AMY HAND HISTOR IC TU R NOU T FOR Several Greek houses opened their doors to alumni during the weekend, including Chi Psi. Class of ’63ers came as far as California, and the Paci c Northwest. Harold Shorr ’63 made the trip from Oregon. It felt just wonderful to be back, he said. He reminisced about the struggles he had as a student, and relished the rewards of his hard work. “I felt like I was in over my head,” he said. “It was so good to get that degree. It opened all kinds of doors for me,” said Shorr, whose career ranged from engineering with U.S. Steel to ying jets for U.S. Airways. SUMMER – FALL 2013 21 Graduates of the Last Decade (G.O.L.D.) members also cruised the Hudson as part of a new G.O.L.D. boat cruise, which highlighted the Classes of 2003 and 2008 this year. Cruisers enjoyed the photo opportunities as the boat sailed near the Statue of Liberty. Stevens President Nariman Farvardin gave his “State of Stevens’’ Address on Saturday morning, a er which the Harold R. Fee ’20 Alumni Achievement Awards for the most recent reunion classes were given out. Returning to campus this year by popular demand was the Alumni Cocktails and Dinner Dance, a highlight of the weekend held in the Howe Center’s Bissinger Room, where attendees enjoyed music and dinner against the backdrop of Manhattan’s impressive skyline. e Stevens Alumni Award, given to longtime Stevens supporter John J. Dalton ’60, was presented at the dinner dance. (See accompanying story.) Sunday, June 2, o ered an Alumni Farewell Breakfast and the Stevens Ducks 5K Run. e run, 3.1 miles along the Stevens campus, bene tted the University’s cross-country and track and eld programs. Martin Lobel ’38 made the weekend’s most memorable homecoming, traveling from Connecticut to mark his 75th reunion. Lobel, who walked up to Castle Point from the Hoboken train station with his wife and daughter, also celebrated his 96th birthday that day, June 1. He marveled at the changes all over Hoboken and, of course, on campus. “At that time, we probably got the most thorough education possible,” he said. From tough classes to sophomore gymnastics—where he had to master the horse and the parallel bars—Stevens nurtured a resilience that helped propel him through the Seabees during World War II to a successful engineering career, he said. “No matter what it was, if you really tried, you could do it,” Lobel said. “All you had to do was give it a good shot.” 1 2 Young alumni enjoy a cruise the first night of Alumni Weekend. 3 The Class of ’63 present their gift to Dr. Farvardin. 4 Racers show their stuff during the Lollipop Run on DeBaun Field. 22 THE STEVENS INDICATOR John Dalton ’60 receives Alumni Award At his freshman orientation, John Dalton ’60 heard Stevens President Jess Davis utter those oft-quoted words: “Look to your left. Look to your right. One of you won’t make it through Stevens.” To Dalton’s left was Ed Bielecki, valedictorian of his high school class. And to his right was his identical twin brother, Ed, another valedictorian with “an IQ higher than Einstein’s,” Dalton said. He had a moment of concern. “Clearly, I was the odd man out,” Dalton recalled with a smile, at the Alumni Weekend Dinner Dance this past June. It just proves that on occasion, he said, college presidents can be wrong. Dalton went on to become a successful executive in the healthcare and financial services fields. He is a vigorous champion for talented students who’d like to attend Stevens but don’t have the means. And this year, for his decades of enthusiastic service to Stevens, Dalton received the prestigious Stevens Alumni Award, presented by the Stevens Alumni Association during Alumni Weekend. A General Motors Scholarship winner himself, Dalton has long supported student scholarships and recently stepped up as chair of an important effort to increase alumni giving in this area. Dalton has also made preserving Stevens’ history a personal mission. One of his major projects is collecting photos, mementoes and alumni interviews about Castle Stevens, as he plans to make a video about the historic Stevens family home. A joyful Dalton—accompanied by wife, Ann; daughter, Susan St. Onge and son-in-law, Kevin St. Onge—was full of gratitude as he received his award. He thanked Ann for being a great wife and mom; praised his accomplished classmates; saluted the strong women in his family; and even recalled the wisdom of Sister Mary Magdalen, his high school English teacher at St. Michael’s High School in Jersey City, who instilled in him a philosophy to live by. Dalton said it comes down to two questions: Did you do the best you could with the talents that God gave you? And did you work to make the world better? Dalton also remembered his brother, Ed, who passed away in 2000. A fellow Alumni Award winner, Ed returned to his alma mater to receive the Stevens Honor Award about 20 years ago—his last visit to campus—as he battled multiple sclerosis. “He challenged the audience to dare to dream,” John said. So Dalton shared his own dream: that no worthy student accepted to Stevens will ever have to turn away because they can’t afford the tuition, because so many Stevens alumni will be there to help them. —Beth Kissinger SUMMER – FALL 2013 23 Achievement Awards. From left, David Young ’88; Evelyn Burbano Koehler ’98; Stevens President Nariman Farvardin; and Jon Matos ’08. Adam McKenna ’98 could not attend the event. 3 Dave Manhas ’88 laughs with SAA Executive Director Anita Lang during the Beer Tasting event. 4 John Dalton ’60, second from right, at the Dinner Dance. 1 Selma and Martin Lobel ’38 celebrated his 75th reunion and 96th birthday. 2 Four alumni were honored with Harold R. Fee ’20 Alumni EXECUTIVE DIRECTOR EMERITUS, STEVENS ALUMNI ASSOCIATION I feel so honored and humbled to have a scholarship dedicated in my name and in the name of Future generations of students alumni are there for them, to help them receive a top a ful lling life.î Anita Lang served the Stevens Alumni Association and Stevens for 47 years before retiring as SAA Executive Director in June 2013. To honor her service, support the Anita Lang Scholarship. We congratulate Anita on her service and honor her numerous contributions to the SAA, Stevens and all our alumni. In 2006, Stevens alumni, leadership, faculty and friends created a scholarship in Anita’s honor that has touched the lives of and supported educational opportunities for dozens of students since its founding seven years ago. Recently this scholarship has been renamed The Anita Lang and Stevens Alumni Association Endowed Legacy Scholarship. Please consider supporting this scholarship in honor of all of Anita’s hard work! the Stevens Alumni Association. will always know that Stevens education and aspire toward 24 To give to the ANITA LANG SCHOLARSHIP THE STEVENS INDICATOR Or to learn more about supporting the Stevens Scholarship Program, please contact: Gilian Brannan, Director of Stewardship gilian.brannan@stevens.edu or 201.216.5243 New vice provost for research has big plans for Stevens As a new inductee into the student chapter of the American Society of Mechanical Engineers (ASME) more than 30 years ago, Mo. Dehghani discovered that the society’s first organized meeting was held at Stevens Institute of Technology in 1880. When visiting his mother during a trip to New York City several months later, he crossed the river to visit the society’s birthplace. Dehghani learned of the Stevens family’s donation of land in 1868, which had provided for the founding of a school devoted to the education of mechanical engineers. Back on campus 30 years after that first trip to Castle Point, Dehghani stands at the helm of research activities as the new vice provost for research. BY LISA TORBIC ASSOCIATE EDITOR “ at was my rst and only time on the Ste- of national centers of excellence. We will create Mo. Dehghani is Stevens’ new vens campus, until I got the call a few months our future; we will win national level recognivice provost for research and comes ago about coming for an interview,’’ said De- tion awards through the innovation and crefrom Johns Hopkins University. hghani. “It was kind of a spiritual trip for me. ativity of our research enterprise.’’ I thought of Mr. Stevens and his foresight in During his 30-year career, Dehghani has establishing an innovative, entrepreneurial led and managed dozens of administrative, technical university. For 30 years, I have been technical and business services, as well as en- faculty members), I clearly sensed the energy a mechanical engineer. But I found my roots at gineers. Most recently, he served as the found- here to make a change. I was continuously told, Stevens that day.’’ ing director of the Johns Hopkins Systems ‘We are going places,’ and I knew I wanted to Dehghani, an accomplished expert in inno- Institute, head of engineering, and a member be a part of it,” he said, citing the university’s vation management and technology develop- of the executive council at Johns Hopkins Uni- strategic plan as “a clear roadmap to place Stement, began his new administrative position versity (JHU) Applied Physics Lab in Laurel, vens among the ranks of great innovative and on Aug. 1. His charge: to lead the continuing Md. Previously, he was the division leader for technologically advanced higher education indevelopment of the University’s research pro- the New Technologies Engineering Division of stitutions.’’ grams and implement the research and scholarship component of the Stevens Strategic “ The Strategic Plan calls for adding 100+ faculty. There are already Plan, e Future. Ours to Create. He earned his B.S., M.S. and Ph.D. in mechanical engineer- some great research faculty members at Stevens, but we need to ing, all at Louisiana State University, and held bring more into the Stevens family.’’ — Mo. Dehghani a postdoctoral NSF faculty internship at MIT. Recently, he discussed his plans for Stevens. Lawrence Livermore National Laboratory in Dehghani, who describes himself as a “his“ e Strategic Plan calls for adding 100+ Livermore, Calif. He has continuously taught tory bu ,’’ nds inspiration in the generous befaculty. ere are already some great research engineering, design and design optimization quest of the Stevens family. He thinks back to faculty members at Stevens, but we need to courses at the University of California and that day long ago, when he made his rst visit bring more into the Stevens family,’’ he said. “I Johns Hopkins University. to Stevens and learned of that far-reaching gi . want to help Stevens become the destination of Dehghani spent ve years at JHU before “Today, in my present capacity, I ask myself, choice for the type of faculty and scholars who joining Stevens. It was the positive buzz about ‘What can I do to make Mr. Stevens proud?’” can help us achieve our goal of making Stevens Stevens’ transformation that sparked his ina world-class research institute with a number terest in the position. “(While speaking with SUMMER – FALL 2013 25 LONGTIME EDUC ATOR PREPARES TO MARK HIS 50TH REUNION BY LISA TORBIC, ASSOCIATE EDITOR Peter Astor ’64 describes himself as a “math geek’’ back in high school, way before the word geek was in mainstream speech. “I loved math and hated history,’’ he said simply. “Too many facts and dates in history, not enough logic.’’ He enjoyed math so much that he eventually earned three degrees in mathematics from Stevens, receiving his M.S. in ’66 and Ph.D. in ’70. Astor, the current Class of ’64 vice president and current member of the Class of ’64 reunion committee, spent a career “in the eld,’’ he said, working for several engineering rms before starting his own successful consulting company, Environmental Partners. He also spent more than 20 years in academia – as a college professor and a high school teacher. His time in the classroom has brought him to several universities. He is now retired, except for tutoring math and coaching tennis. equals. “It took me the longest time to master my times tables,’’ he admitted, confessing that he had to fake it while giving a math lesson, eventually allowing the students to shout out the answer. “I guess you could call it ‘on-thejob-training,’ ’’ he laughed. During a career teaching students ranging from middle school to graduate school, Astor, a longtime volunteer with the Stevens Alumni Association, has seen some changes. Technology in the classroom has made the teaching experience more hands-on for students, allowing them to explore for themselves. As a student, he remembers entering a classroom, taking a seat, and then staring at the teacher for the next six hours. ese days, students can move around a classroom and work in groups, voice opinions and collaborate on a project during the day. “We all learn di erently from one another. Some respond to student-centered learning and some respond better to the old model. Teaching has taught me that not everyone is an analytical learner like me. And working together – isn’t that how we work in industry?’’ He valued his general engineering curriculum courses at Stevens and credits them with serving him well during his career. “I realized that other people had di erent realities. When I worked at a big engineering rm, I embraced the practical. My dealings with engineers and scientists were easier because I understood their technical skills and interests from my days at Stevens,’’ he said. Astor has remained active with Stevens since his graduation. Besides his vice presidency, he’s held several positions within the SAA, such as class secretary for 25 years, Council Representative and a Decade Representative for a few terms. While a student, he belonged to Pi Lambda Phi, e Stute, eta Alpha Phi, and Gear & Triangle. He also served as president of the Stevens Dramatic Society and played varsity tennis. He’ll be back on campus in May to celebrate the Golden Anniversary of the Class of ’64 and was seen taking notes during Alumni Weekend 2013. Being a part of a reunion planning committee is a lot of work, which can be made easier with help. “Our reunion committee is expanding and we need volunteers to recruit attendees from speci c geographic areas (such as southern California, Florida, Chicago, Boston area, Texas and the southwest). e fact is, if you know a group of Class of ‘64 alumni near you, you have the makings of a geographic rep. Please contact us via the alumni o ce,’’ he said. “As of July 2013, we have not set our goals for reunion attendance or donations, but we know it will be more than realized by the Class of 1963. We have been talking about a tour of Downtown New York City for Friday, May 30, especially for out-of-towners, which is to include the 9/11 Memorial. ose are tough tickets to get, so we need to make rm reservations. Let us know if you are interested in the tour. I hope to see you at Alumni Weekend,’’ Astor said. He recently recalled a favorite teacher at Stevens. “ ere is no doubt that Dr. Nicholas Rose (Class of ’45) had the greatest in uence on me. Low-key, fair, good-humored, and kind. (One time) before Dr. Rose walked into a class, a fellow freshman wrote that a student was trapped behind the blackboard using reverse script. Nick came in, read the note, erased the note, and began his lecture by writing the word “LIMITS’’ in reverse script. We all laughed together,’’ he said. Peter Astor, right, meets up with Joe Weber, both from the Class of ’64, at a recent Alumni Association meeting. But, as a student, not all math was created equal for him: he shares a humorous story of how multiplication was a struggle for him in grade school, and how, several years out of college, this man with a doctorate degree in applied mathematics, a man who has taught AP Calculus and di erential equations, still had trouble quickly remembering what 7 times 8 26 THE STEVENS INDICATOR DR. JOHN DEUTCH TO SPE AK AT PRESIDENT ´S DISTINGUISHED LECTURE SERIES BY LAURA BUBECK, STEVENS’ ASSISTANT DIRECTOR FOR NEWS & MEDIA RELATIONS Dr. John Deutch, a renowned scientist and academic and government leader, will headline the President’s Distinguished Lecture Series at Stevens on Oct. 30, 2013. His talk, “ e Challenges and Opportunities of Unconventional Oil and Gas Production,” will examine policy measures to minimize the environmental impacts of hydraulic fracture. e lecture will take place in DeBaun Auditorium at 4 p.m. Deutch, institute professor at the Massachusetts Institute of Technology, has been a member of the MIT faculty since 1970, and has served as provost, dean of science, and chairman of the department of chemistry. He has held signi cant Alumni authors sought for new book collection Stevens alumni have made important contributions in numerous fields, from engineering and business to science and education. A sizable number of alumni can also add “author” to their resume. So the university is looking to honor this accomplishment, with a new initiative known as the Stevens Authors Showcase. At the request of President Farvardin, the Stevens Alumni Association, together with the Williams Library and the Office of the Provost, has launched an initiative to exhibit technical, management, and literary books authored by alumni, faculty, and staff. “I am hopeful this effort will strengthen the sense of pride and accomplishment of the Stevens community in the significant contributions that alumni, faculty, and staff have made to many fields—technical and otherwise—through the books they have written,” said President Nariman Farvardin. “I will be proud to display these works in the President’s conference room on the 13th floor of the Howe Center for visitors and the community to peruse.” Authors are encouraged to donate a copy of their books to this initiative. Once the collection has accumulated a number of books, authors will be invited to attend a reception with President Farvardin. Books can be dropped at the Stevens Alumni Office, on the 9th floor of the Howe Center, weekdays from 9 a.m. to 5 p.m. Or they can be mailed to: Stevens Alumni Association, 1 Castle Point, Hoboken, NJ 07030. Please include a note that mentions the Stevens Authors Showcase. For more information, please contact the Stevens Alumni Office at 201-216-5163. Photo by Oleksii Sergieiev Dr. John Deutch, a renowned academic and government leader, will headline the President’s Distinguished Lecture Series at Stevens in October. PRESIDENT'S DISTINGUISHED LECTURE SERIES OCT. 30, 2013 AT 4 P.M., DEBAUN AUDITORIUM production, which has reduced U.S. dependency on imported oil, lowered prices for the consumer and created jobs. “While this is good news for the U.S. in geopolitical, economic and energy terms, hydraulic fracture – the modern technique to produce these unconventional resources – has generated signi cant public opposition given its adverse environmental impacts on air quality, water quality and seismicity,” said Deutch. Deutch’s lecture will o er insights into the role of public regulatory agencies, industry, and university research and education in creating and implementing policy measures to minimize the environmental impacts of hydraulic fracture, enabling the growth of this industry. e series, launched by Dr. Farvardin in October 2012, is free and open to all who register. For more information about the President’s Distinguished Lecture Series or to register, please visit. SUMMER – FALL 2013 27 Seeking solutions from the U.S. to Bangladesh Student researchers from Stevens and the University of Alabama in Huntsville and their advisers gathered in Washington, D.C., this spring to present their SERC research project to their Navy sponsor. A prototype of the team’s Humanitarian Assistance Disaster Relief kit sits at center. F SERC tackles complex problems at home, a broad loods, cyclones and other natural disasters continually plague Bangladesh, claiming many lives. So a team of student researchers, led by Stevens, has worked to create a ferry system that offers both By Beth Kissinger, Editor a safer ferry and a quicker disaster response to save lives. e U.S. will soon face a shortage of systems engineers, with baby boomers retiring and few seasoned engineers to replace them, the government reports. In response, a fouruniversity team, led by Stevens, has developed a so ware program to help systems engineers gain valuable experience in a fraction of the normal time. Both of these exciting research projects are sponsored by the Department of Defense (DoD) and are coming out of the SERC—the Systems Engineering Research Center. e DoD-funded center, designated as a University A liated Research Center, is based at and led by Stevens, which collaborates on projects with 28 THE STEVENS INDICATOR 23 universities across the country. As the SERC works to make valuable contributions to the eld of systems engineering and to society, these two current research projects illustrate a strong commitment to nurturing the next generation of systems engineers, according to the SERC’s executive director Dinesh Verma. “ e Department of Defense has established the Systems Engineering Research Center, with Stevens as the hub, with the intent of nurturing it as a national resource for enhancing the eld of systems engineering,” said Verma, who is also dean of Stevens’ School of Systems and Enterprises (SSE). “ ese two projects represent our collaborative e orts to that end.” A better ferry for Bangladesh e Bangladesh project was actually a Senior Design project in 2012-13 that included undergraduate engineering management and naval engineering students from Stevens, and aerospace and mechanical engineering students from the University of Alabama in Huntsville. e eight-student team, working through the SERC, chose the project from the DoD’s list of proposed multi-disciplinary projects for undergraduate students, known as the Capstone Marketplace. e Marketplace is part of a DoD e ort to increase the number of young people in systems engineering and DoD careers, says Mark Ardis, a Distinguished Service Professor at Stevens and principal investigator for the SERC project, “Capstone Research to Grow Systems Engineering Workforce Capacity.” e project, which was commissioned by the Navy, originated as two projects: one involving the creation of a Humanitarian Assistance Disaster Relief (HADR) kit and another focusing on a dual use ferry. e ferry project team was charged with designing a safe, a ordable ferry suitable for transportation in a developing This ferry design was created by naval engineering students at Stevens, who worked with engineering management students from Stevens and mechanical engineering and aerospace engineering students from the University of Alabama in Huntsville on a safer passenger ferry for Bangladesh that could also provide humanitarian relief. The image shows plans for the main passenger deck. country that could also be used by the DoD. e projects merged, as the team worked on a safer ferry system for Bangladesh as well as an HADR kit that could be e ectively transported by ferry and help improve disaster relief e orts. e projects came together because researchers saw potential synergies between them, said Stevens Lecturer Eirik Hole, the team’s adviser from SSE. A systems engineering component was added to “glue” the projects together and better understand the overall operational requirements on the ferries and the HADR kits in a disaster scenario, Hole said. ree regions of Bangladesh provided the case study, as this low-lying country experi- ferries, said Michael DeLorme, a research associate with Stevens’ Davidson Laboratory who worked with the students. Ferries can be overloaded, making them vulnerable to capsizing and leading to many casualties each year, he said. e project had three components. e naval engineering students at Stevens did an analysis of the ferry system and the country’s waterways, and created a new ferry design that would transport people more safely as well as transport equipment during disaster relief ef efforts. e aerospace and mechanical engineers from Alabama designed a water puri cation system that would be part of an HADR kit, ì The lessons learned from this project have also paved the way for more projects of this kind going forward.î ñ Eirik Hole ences approximately six natural disasters each year, according to the team. Its hundreds of rivers and location on the edge of the Bay of Bengal make the country especially vulnerable. Given this environment, response time to ooding disasters by the U.S. has been slow, the team reports. Use of Bangladesh’s governmentregulated ferries provides a resource for disaster response, the team reported, but the current ferry system is troubled. Old ferries o en not designed for Bangladesh’s shallow water environments have been repurposed as passenger capable of producing at least 1,000 gallons of drinking water per day if a disaster occurred. Stevens’ engineering management students – who made up half of the project team – handled project management; did research on Bangladesh and previous disasters; determined ferry routes during a disaster; handled logistics regarding how many ferries were needed; and evaluated the best way to transport the ferries and HADR kits to areas of need. Engineering management student Jillian Barrett ’13 said that one of the project’s biggest challenges was focusing on a country so far away and unfamiliar as Bangladesh. But mentors were helpful, she said, and included the Navy’s Norbert Doerry and Jonathan Kaskin (retired); Dr. Robert Weisbrod, a longtime maritime transport expert with the Worldwide Ferry Safety Association; and a ferry industry contact in Bangladesh. e students bridged the geographical gap between Hoboken and Alabama using Skype for their weekly meetings, and Barrett learned a great appreciation for what naval and mechanical engineers do, she said. And she acquired many job-related skills, from how to ef effectively communicate with people she’s never met to keeping everyone on schedule and meeting deadlines. “It got me prepared for work,” says Barrett, who now works as a tra c manager with Verizon in West Nyack, N.Y. e team presented their project this past spring at Stevens’ Innovation Expo and in Washington, D.C., to their sponsor, the Naval Sea Systems Command. e Alabama team members completed a prototype of the HADR kit. e Navy may expand upon the project and, in the future, hire such student teams, Ardis says. Having undergraduate students work on important government-sponsored projects, on multi-disciplinary teams from across the country, “is a big deal for Stevens,” Ardis says. “Stevens is unusual in making this happen,” SUMMER – FALL 2013 29 Stevens engineering management students who worked on the Bangladesh ferry project presented their work at the university’s Innovation Expo in April. From left: Jennifer Wojtys, Ben Choe, Jillian Barrett and Jake Piccoli. Ardis says. “ is is clearly the way industry works. People don’t work in silos.” Hole praised the student team. “ ey had to tackle the planning, coordination and especially communication challenges involved, in addition to the purely technical challenges,” he said. “ is has given them a head start on what they will experience every day in their careers. e lessons learned from this project have also paved the way for more projects of this kind going forward.” prototype exposes less seasoned systems engineers to work on simulated projects in which they have to complete tasks, face di erent challenges and make decisions as they would on an actual project. e goal is to accelerate their learning in a compressed period of time. “Much like a ight simulator is for pilots, the Experience Accelerator is not meant to replace experience; but rather supplement it with focused learning experiences,” Wade said. “ e Experience Accelerator allows systems engi- We want people to fail the first time they go through it. People learn far more by failure than success. Jon Wade Accelerating learning for systems engineers Another high pro le project coming out of the SERC also targets this country’s need for more systems engineers. e eld is facing a great challenge, as a large number of systems engineers are baby boomers nearing retirement without a new generation of engineers to take their place, says the SERC’s chief technology o cer Jon Wade. So Wade, who is also an associate dean of SSE, along with his multi-disciplinary team that includes researchers from Purdue, Georgia Tech and the University of Southern California, have developed what they call “ e Experience Accelerator (EA).” e so ware neers to compress time dramatically so that they can experience critical project events that they may take years to encounter in real life.” is program, Wade says, can reduce learning time. e EA is being developed for the Defense Acquisition University, a DoD organization that o ers professional development to the department’s military and civilian workforces. Designed for systems engineers working in logistics and acquisitions, the EA allows the employee to experience, through simulation, work on a complex systems project. For now, the project involves an unmanned aerial vehicle (UAV) system. Users work in sessions, and the skill level is adjusted, as they face numerous situations and problems that are based on the real-life experiences of DoD chief engineers. “We want people to fail the rst time they go through it,” Wade says. “People learn far more by failure than success.” e engineers experience the entire project and they either succeed, the project is cancelled or they are “ red,” and try again. “To mature as a systems engineer, you need to see the project from beginning to end,” Wade says. e EA e ectively compresses time by exposing its users to experiences that they would normally face over a ve-year period on a project, in a mere eight to 10 hours, he says. Stevens graduate and doctoral students are assisting on this three-year project, which began in 2010. e EA prototype will be available for Defense Acquisition University instructors to test in early fall 2013, with more testing at Stevens also in the fall. Brent Cox ’10, a computer science graduate student, is doing programming on the EA project. e project is providing him with a broad technical experience in areas he’s never worked in, he says. Working with team members from di erent academic backgrounds—and relying on each other’s strengths—has also been valuable, he says. Collaboration among the four universities working on the EA—with researchers from the elds of systems engineering, computer science, technology management and mechanical engineering—has been strong, Wade says. “ e research really comes rst,” he said. “ ere’s been a great team spirit of people working together that makes it rewarding for all involved.” The SERC is seeking mentors for its Capstone Marketplace projects. For more information, contact Mark Ardis at mark.ardis@stevens.edu. 30 THE STEVENS INDICATOR !"#$%&'$$ &($)(*+*',- $ !"#$ !"#$%&''() There are many ways you can support our alma mater by giving Stevens some of your time: Be an active member of our alumni community Get Involved Make an Impact Attend a regional club meeting or help to start a new one Volunteer to support the Telethon Attend Alumni Weekend and Homecoming, or join the committee that helps plan these annual events Support your class scholarship or help create a new scholarship program to support our students Attend Alumni Association meetings at Stevens, and join a committee to help make our Association even stronger alumni.volunteers@stevens.edu To get involved today, email us at: …and visit our web site: SPRING 2013 31 stevens.edu/volunteer STEVENS CLUBS FISHING CLUB BY DICK MAGEE ’63 T he Stevens Alumni Association Fishing Club held two striper trips out of Keyport, N.J. The first trip on April 20 had 10 anglers. Fifteen bass were boated, but only five were keepers. The second trip, on May 2, was more successful. Seventeen bass were keepers, boated by seven anglers. Gerry Ferrara ’76 won the fishing pool, with a bass measuring 38 inches. This was Gerry’s second pool-winning fish in two years. Several members also joined the Avaya annual summer family fishing trip, out of Belmar, N.J., in July organized by Emil Stefanacci ’85. (See photo.) The weather was great, and the keeper and short fluke action kept everyone busy. If you are interested in future fishing trips, contact Dick Magee ’63 at rsmagee@rcn.com. BEIJING, CHINA, ALUMNI CLUB CENTRAL NEW JERSEY & PHILADELPHIA CLUBS Stevens alumni in Central New Jersey and Philadelphia enjoyed a night of baseball and picnicking on Aug. 2, when the Trenton Thunder (New York Yankees AA affiliate) took on the Reading Fightin’ Phils (Philadelphia Phillies AA affiliate) at Arm & Hammer Park in Trenton, N.J. The Thunder roared past the Phils 6-2. Stevens alumni in Beijing, China, were invited to a reception in Beijing this past June, hosted by Dean Gregory Prastacos of the Howe School of Technology Management at Stevens. Prastacos, pictured third from right, spoke about new developments at both Stevens and the Howe School. Attendees also got a chance to meet other Stevens alumni in the area and help them form a new alumni chapter in Beijing. To read more about the visit, see news/content/stevens-institute-technology-dean-hosts-chinesealumni-beijing-reception GREATER HARTFORD/NORTHERN CONNECTICUT ALUMNI CLUB The Greater Hartford/Northern Connecticut Alumni Club met recently for a luncheon in Windsor, Conn. All attendees were associated with UTC Aerospace Systems, Windsor Locks, Conn., and several Stevens interns or co-op students also joined. UTC’s co-op/ internship program provides excellent work experience, and for summer 2013, seven students worked there, an all-time record. 32 THE STEVENS INDICATOR STEVENS CLUBS STEVENS METROPOLITAN CLUB BY DONALD E. DAUME ’67, SECRETARY T he Class of 2013 has graduated, eager to help the world and its economy. The Class of 2017 has matriculated, faces aglow with potential. Let’s pay attention to this class and revel in their accomplishments in academics, sports, community life and the fine reputation Stevens enjoys. The Metropolitan Club continues with faithful alumni supporting each other, the Stevens Alumni Association, and their alma mater. Incorporated in 1939, the group now meets for lunch, usually on the fourth Thursday of each month at various New Jersey locations, including Don Quixote in Fairview, Puerto Spain in Hoboken, and Marinero Grill in West New York—a moveable feast, indeed. Our June meeting saw the reelection of officers John Stevens ’72 as president; A. Joseph Schneider ’46 as treasurer; Ed Wittke ’45 as club representative to the SAA; and Don Daume ’67 as secretary. A check was presented to Anita Lang for the endowed scholarship bearing her name. So far, the club has donated more than $38,000 to Stevens scholarships, in addition to the individual donations club members make each year. The club looks forward to many more years meeting in fellowship in support of Stevens and the Alumni Association. Consider meeting with us as you will be most welcome. To attend a meeting, contact the Alumni Office at 201-216-5163. HOUSTON CLUB Frank Roberto ’76, second from right, hosted an informal dinner with three new ExxonMobil employees, all fellow alumni, and one Stevens co-op student, along with other Houston area alumni, on July 19 at Pronto Cucinino in Houston. Pictured, from left, are Abel Alvarez ’11, Gina Joyce ’06, Kelly McGuire ’06, Michelle Gallo ’13, Caitlyn LaBonte ’15, Shawn Flanders ’13, Frank Roberto ’76 and Cecilia (Osterman) Coldham ’13. Ryan Kerrigan ’07 also joined the group but is not pictured. STEVENS D.C., G.O.L.D. The Washington, D.C., G.O.L.D. (Graduates of the Last Decade) Alumni Club hosted an event in June at the Port City Brewing Company in Alexandria, Va. The Stevens young alumni group got a firsthand look at the brewing process during a tour and sampled the company’s handmade, locally crafted ales. For more information on Stevens alumni clubs, contact Priya Vin at priya.vin@stevens.edu SPRING 2013 33 SPORTS UPDATE HOMECOMING OFFERS LITTLE BIT OF EVERYTHING S tevens’ annual Homecoming celebration will take place Oct. 3-6 and will include four Ducks teams in action, theater performances and a free Community BBQ. e university will open Homecoming with a special event to celebrate Stevens being named the Jostens Institution of the Year for 2012-13 by the Eastern College Athletic Conference. An ECAC Award Celebration will be held on ursday, Oct. 3, from 6:30 to 8:30 p.m. in the Schaefer Athletic Center. e award recognizes the highest standards of collegiate academics and athletic performance across all Division I, II and III schools. (See story on p. 6.) Alumni, students, faculty and administration members are invited to this complimentary event. Women’s tennis will be the rst Ducks team in action as they host Houghton College on Friday, Oct. 4, at 3:30 p.m. at night, Stevens will welcome ve new inductees into its Athletics Hall of Fame: Waleed Farid ’08 (men’s basketball); Dawn Herring ’08 (women’s volleyball); Brandon MacWhinnie ’08 HOMECOMING SCHEDULE (wrestling); William MarThursday, Oct. 3 sillo ’94 (men’s soccer); and ECAC Award Celebration, 6:30-8:30 p.m., Tom Sobe ’02 (baseball). Schaefer Athletic Center e event starts at 6 p.m. Friday, Oct. 4 on Oct. 4 with a cocktail reWomen’s Tennis vs. Houghton College, 3:30 p.m. ception in Williams Library Athletic Hall of Fame Dinner, followed by dinner and the Ceremony, 6 p.m., induction ceremony in the “Smokey Joe’s Café,’’ Bissinger Room, Howe Cen8 p.m., De Baun Auditorium ter. Admission is $45. Saturday, Oct. 5 ose interested in a Women’s Lacrosse Alumni Game, 9 a.m., De Baun Athletic Complex non-athletic event on Friday Alumni Legacy Reception, 10 a.m., can head to the De Baun AuBabbio Center Atrium ditorium for a performance Sing-along with the Stevens Choir, of “Smokey Joe’s Café’’ at 8 10 a.m., Ondrick Music Room, 4th floor, p.m. e play, presented by Howe Center the alumni of eta Alpha Stevens Community BBQ, 11:30 a.m. to 1:30 p.m. & 2 to 4:30 p.m., Walker Lawn Phi, will feature songs such as “Jailhouse Rock,” “CharMen’s Soccer vs. Nazareth College, noon lie Brown,” “On Broadway,” Performing Arts Showcase, 2 p.m., De Baun Auditorium “Love Potion #9,” and “Stand Women’s Soccer vs. Hartwick College, 3 p.m. By Me.” Tickets are $5. Field Hockey vs. Utica College, 6 p.m. Saturday is a jam-packed day with events for everySunday, Oct. 6 Women’s Tennis vs. Ithaca College, 10 a.m. one. Women’s lacrosse will Women’s Tennis vs. Elmira College, 2:30 p.m. take on its alumnae team at 9 a.m. Excellent NCAA acVisit stevensducks.com/homecoming tion begins at noon when to register and for more information 34 THE STEVENS INDICATOR Four teams will play during Homecoming, including the field hockey team. the nationally-recognized men’s soccer team hosts Empire 8 rival Nazareth College at the De Baun Athletic Complex. Women’s soccer will follow at 3 p.m. at the same location against conference foe Hartwick College, and eld hockey will close the turf at 6 p.m., battling defending conference champ and archrival Utica College. An Alumni Legacy Reception, hosted by the Stevens Alumni Association, will take place on Saturday at 10 a.m. in the Babbio Center Atrium. e reception brings together generations of alumni families, as brothers, sisters, aunts, uncles, parents and grandchildren celebrate the history of Stevens. Saturday will also feature a free Community BBQ on the Walker Gymnasium Lawn near the De Baun Athletic Complex, featuring plenty of BBQ classics and more, from 11:30 a.m. to 1:30 p.m. and from 2 to 4:30 p.m. All are welcome. A sing-along with the Stevens Choir will be in the Ondrick Music Room, Howe Center, at 10 a.m. and a free Performing Arts Showcase will be held at 2 p.m. in the De Baun Auditorium. Women’s tennis will close the festivities with two conference matches on Sunday, Oct. 6. at the Ninth Street Courts. At 10 a.m., the Ducks will welcome Empire 8 power Ithaca College, and at 2:30 p.m., the team faces o against Elmira College. —By Robert Kulish, Stevens’ Director of Sports Information and Events For a complete schedule and for registration, visit E VERY SPRING BRINGS MEMORABLE EVENTS TO A STUDENT’S LIFE, WITH FINAL PAPERS COMING DUE AND PREPARATIONS FOR COMMENCEMENT OCCURRING. BUT THIS YEAR’S COMMENCE- MENT BROUGHT ONE STUDENT TO A MEMORABLE PLACE THAT VERY FEW PEOPLE WILL VISIT: A SIXTH COLLEGE DEGREE FROM ONE UNIVERSITY. is past May, Luis Ortega put the nal touches on his Ph. D. dissertation in Finance Engineering, which he presented days before commencement. It was his sixth degree from Stevens. In the midst of preparing for his dissertation, he was also prepping for a new job, which he started in June. And weeks before the end of the semester, his wife gave birth to twin daughters, Lucia and Amelia, making him a rst-time parent. “ ere are times when I feel like my head is going to explode because I’m so busy,’’ he said, just days before commencement. “But I will never forget this period in my life.’’ Indeed. is native of Ecuador, who came to the United States when he was ready to begin his undergraduate education, added his Ph.D. to an already impressive list of degrees, including a B.E. in electrical engineering; an M.S. in computer science; an M.S. in management planning; an M.S. in nancial engineering and an M.B.A. in technology management. His Ph.D. is in the eld of nancial engineering. His thesis is titled, “A Neuro-Wavelet Model for the Short-Term Forecasting of HighFrequency Financial Time Series of Stock Returns.’’ Six degrees from Stevens? “What can I say? I want to study and learn all I can,’’ he said this past May, while picking up his cap and gown on campus. He came to the United States a er nishing high school in Ecuador, wanting to attend Stevens, and experience the American culture, he 6 BY LISA TORBIC ASSOCIATE EDITOR DEGREES OF STEVENS Luis Ortega ’85, M.S. ’89, M.S. ’91, M.S. ’09, M.B.A. ’09, obtained his sixth degree from Stevens in May 2013, his Ph.D. working in the telecommunications eld, with Verizon Wireless and with e World Bank. “I was a good planner,’’ he said with a quick smile. But he credits good friends, and of course, his wife Veronica, with their support during this time. He laughs heartily when asked if he is independently wealthy. He started a new job with Goldman Sachs on Wall Street in New York in June, working as an associate in the model risk group. “I can’t say too much about it,’’ as he gestures with a nger to his lips, but he will be doing something with the stock market. Leaving Stevens will be hard, he admits, as “ THERE ARE TIMES WHEN I FEEL LIKE MY HEAD IS GOING TO EXPLODE BECAUSE I’M SO BUSY. BUT I WILL NEVER FORGET THIS PERIOD IN MY LIFE.” —LUIS ORTEGA said. He joined a fraternity (Delta Tau Delta), played varsity soccer and always studied. “Academics came rst,’’ he said of his undergraduate days, which he described as exciting and fun. But he considered the non-academics part of a student’s life as another chance to learn, as guring out how to schedule priorities is an important skill to master. As a student, “I wanted to experience the whole package: academics, sports, a fraternity.’’ And gaining knowledge was a constant in everything he did, even when not in the classroom. “My Delt brothers taught me how to t in, what to eat, life skills,’’ he said. Ortega saved money for his education while he’s spent a good chunk of his time on campus. And he cherishes the friends he’s made here, including some fraternity brothers he’s known almost 20 years. He pointed out several members of the Stevens community (Dean Charles Su el, former Vice President Joseph Moeller ’67 and Coach Nick Mykulak) who have made an impact on him while a student. Having the experience of being a Stevens undergraduate and then a Stevens graduate student has shown him some di erences between the two. “Of course, you have to read more as a graduate student,’’ he said. “My advice to graduate students is to get more involved, experience the Stevens culture, ’’ he said. SUMMER – FALL 2013 35 RECRUIT AND RETAIN TOP FACULTY BUILD A NEW TEACHING AND LEARNING ENVIRONMENT SUPPORT STUDENT SCHOLARSHIPS SUPPORT ATHLETICS AND ACTIVITIES DOUBLE THE IMPACT OF YOUR GIFT INSTANTLY Dr. Lawrence T. Babbio, Jr. ’66, former President of Verizon and Chairman Emeritus of the Stevens Board of Trustees, graciously issued a landmark challenge in 2012 to inspire giving among alumni who wish to reconnect and make a significant impact on their university. Dr. Babbio is personally matching up to $1 million in contributions by eligible alumni. As Stevens moves boldly forward in fulfillment of its new Strategic Plan, The Future. Ours to Create., rise to Chairman Babbio’s challenge today and instantly double the impact of your gift with a single click. GIFTS MAY BE ELIGIBLE FOR MATCHING IF YOU: Are an undergraduate alumna or alumnus who has never given to Stevens Are an undergraduate alumna or alumnus who has not given since July 1, 2010 Are a graduate alumna or alumnus Are a member of one of the last 10 graduating classes (G.O.L.D.) To learn more about the Chairman’s Challenge and your eligibility, visit: GRADUATE LOG Richard Steiner, P.E., M.Eng. ’97, recently joined VHB in New York City as director of Site/Civil Engineering Services, a er more than 20 years of engineering and design experience and leading interdisciplinary teams on industrial, retail and residential projects. He was also elected to a two-year term as Graduate School representative to the Stevens Alumni Association (SAA) on July 1, 2013. e veteran engineer, whose work has been recognized by the American Academy of Environmental Engineers and the American Council of Engineering Companies, re ects at this mid-career moment. What has been your favorite project so far in your career? I’ve been fortunate to have had several career-de ning projects. e rst was a project in Waterbury, Conn., and the second involved many individual projects during the time New Jersey was investigating potential sites for building new schools (roughly 2002 to 2007). e project in Waterbury took place at the abandoned Scovill Brass Manufacturing Facility. Occupying the 100-acre site were 70 abandoned buildings full of asbestos and other contaminants, as well as contaminated soil and groundwater. e purpose of the project was to clear the buildings, clean up the contamination and prepare the site for construction of a new 2.8-million-square-foot regional shopping center. I was the resident engineer in charge of overseeing the proper clean-up and preparation of the site. For the second project, the New Jersey Schools Development Authority was tasked with building new schools or additions to existing schools. For new schools, various potential sites were evaluated. A typical “site feasibility evaluation” involved performing 18 types of feasibility services, and each service was designed to determine whether the particular site was suitable. Examples of individual feasibility services included soil borings for purposes of geotechnical and environmental characterization, boundary survey to determine property lines, and utility capacity analysis to deterRick Steiner, M.Eng. ’97, is the new Graduate School representative to the Stevens Alumni Association. mine whether utility infrastructure existed to serve the new building. As program manager, I learned how to look at potential urban sites through the lens of what makes a site work and why. What has changed in engineering over the past 25 years and what keeps you inspired? Few people had computers at their desks and cell phones in their pockets when I nished my undergraduate degree in 1989. Now each is standard issue. A consequence of these machines is email and texts. Both are very e cient, e ective ways of communicating but only in appropriate circumstances. I’m concerned that email and texts are replacing face-to-face communication—especially with younger people—and basic written communication skills are declining. ere have been many other changes such as advances in computer-aided design so ware. Among the things I like best about what I do is applying my experience to new projects and passing along my experiences to younger people. Why did you choose Stevens? I moved to New Jersey from Arizona in 1991. I met my wife in Tucson at the University of Arizona; she was from New Jersey. I knew I wanted a graduate degree in engineering after earning a bachelor’s in civil engineering. I worked in Bergen County, N.J., and knew that Stevens or NJIT were my two realistic options. I chose Stevens on the basis of recommendations from friends and its reputation. Looking back, my experiences at Stevens were positive. e challenging work of graduate education requires stretching and reaching for new ways of thinking. Has a master’s degree in engineering helped you in your career? I believe a graduate degree helps advance a career. Hiring young engineers with advanced education o ers a rm access to the latest techniques, tools and thinking. I also believe that one’s thinking becomes more mature with the greater depth of understanding that an advanced degree provides. You have served as a volunteer math tutor and are now volunteering with the SAA. Why is this important? I’m at a point in my career—and life—at which I look for opportunities to give back. Education has been important for where I am today, and Stevens is a part of it. — Compiled by Beth Kissinger SUMMER – FALL 2013 37 ALUMNI BUSINESS DIRECTORY Since 1951 Store Hours: 7:00 am – 4:30 pm Store Hours: 7:30 am – 5:00 pm Store Hours: 7:30 am – 5:00 pm 38 THE STEVENS INDICATOR ALUMNI BUSINESS DIRECTORY SUMMER – FALL 2013 39 VITALS MARRIAGES Toby J. Doviak ’03 to Rachel Lee Johnson on July 27, 2013. Adrian W. Jachens ’04 to Samantha P. Herman on Oct. 15, 2011. Matthew C. Edwards ’11 to Regina K. Pynn ’11 on Oct. 6, 2012. Rebecca L. Dietrich ’12 to Daniel Sanchez on June 28, 2013. Juan C. Benitez, M.S. ’09, to Fatima I. on June 3, 2012. OBITUARIES H. Straus ’39............................. 1/10/13 + D. Okrent ’43 .......................... 12/14/12 J.H. Povolny ’43 .......................... 5/7/12 + W.H. Heiser ’44........................... 1/1/13 + C.O. Lindahl ’44 .......................... 6/1/13 + J.P. Runyon ’44......................... 2/16/13 + F.J. Cashin ’46 .......................... 5/22/13 + A.H. Everson, Jr. ’47 ................. 5/31/11 C.D. Martin ’47 ........................... 6/7/13 + A.A. Hein ’48 ............................ 4/20/11 J.L. Arata ’49 ............................ 2/11/13 A.H. Baker ’49 .......................... 6/18/13 A.C. Lawson ’49 ........................ 1/20/13 R. Cechanek ’50 ......................... 1/6/13 L.J. O’Brien ’50 ...................... Unknown G.B. Schaeffer, Jr. ’50 ............... 3/18/13 + D.P. Van Court ’50 ..................... 11/1/12 + E.S. Babich ’51 ......................... 5/19/12 R.T. Pearson ’51 ....................... 6/11/13 + N.A. DeBruyne ’53 .................. 10/24/12 D.H. Lueders ’53....................... 1/20/13 P.R. Rhinehart ’53..................... 1/12/13 H.R. Soederberg ’54 ................. 5/23/13 T.F. Pinelli ’56 ........................... 1/29/13 W.W. Pruss ’57 ............................ 2/1/13 J.P. Larmann ’59 ......................... 3/2/13 J.J. Bertini ’62........................... 5/16/13 D.A. Dragolic ’63 ......................... 6/7/13 C.E. Fauroat ’65 ........................ 6/10/13 + F.J. Vilece ’66.......................... 12/24/12 H.L. Treffinger ’67 ....................... 1/3/13 S.A. Saglibene, Jr. ’69 ............... 5/11/13 + E.J. Casey ’75 ......................... 10/20/12 GRADUATE SCHOOL J.F. Boyce, M.S. ’51 ............... Unknown L.J. Taub, M.S. ’53................. Unknown W.G. Hill, M.S. ’54..................... 2/12/13 D.D. Brill, M.S. ’56 ................... 12/2011 V.J. Logan, M.S. ’61 .................. 2/21/13 E.C. Uphoff, M.S. ’61 ............. Unknown T.L. Tanner, M.S. ’63 ................. 5/12/13 L. Leskowitz, M.S. ’64 ............ Unknown J.K. Steadman, Ph.D. ’81 .......... 6/12/13 F.R. Lautenberg, ......................... 6/3/13 Hon. D.Eng. ’99 G.E. De Baun............................ 3/25/13 Hon. B.E. ’04 + Obituary appears in the Class Logs section of the undergraduate edition. MAY 30, MAY 31, JUNE 1, 2014 MORE DETAILS TO FOLLOW. SAVE THE DATE ALUMNI WEEKEND 40 THE STEVENS INDICATOR THE IRA CHARITABLE ROLLOVER IS BACK! BUT ONLY THROUGH THE END OF 2013 is special opportunity to make a direct, tax-free gift to Stevens from your IRA remains available through December 31, 2013. !"#$%&'$("#()"*+,$-&$.)/#$$ )$0*-12")0)3$4"&.$%&'"$56!7 The IRA Charitable Rollover is back for another year! In 2013, direct gifts to Stevens from your IRA can: 1 2 3 Be an easy and convenient way to make a gift from one of your major assets Be excluded from your gross income: a tax-free rollover Count towards your required minimum distribution 8&"$%&'"$,*4-$-&$9-#:#+;$-&$<')3*4%$$ t You must be 70½ or older at the time of your gift t Your transfer must go directly from your IRA to Stevens t Your total IRA gift(s) to charity cannot exceed $100,000 t Your gift(s) must be outright – transfers to a donoradvised fund, a charitable trust or in exchange for a charitable gift annuity do not qualify In celebration of the 50th year since my graduation, I wanted to increase my annual gift to Stevens. Making an IRA Charitable Rollover gift to support the Class of í 63 Scholarship Fund certainly t the bill. Plus, an outright gift from my IRA just seemed very logical! If the distribution had been transferred to me, Ií d have to pay the taxes and lose a signi cant portion of it. N EV ILLE S ACHS í 63 Would you like to know more about this and other planned giving opportunities? Please contact Michael Governor, Director of Planned Giving at (201) 216-8967 or send him an email at michael.governor@stevens.edu. Please consult your attorney, accountant, or financial advisor to discuss applicability of this information to your personal circumstances. THE STEVENS INDICATOR STEVENS ALUMNI ASSOCIATION STEVENS INSTITUTE OF TECHNOLOGY 1 CASTLE POINT HOBOKEN, NJ 07030 Non-ProďŹ t Org. U.S. Postage Paid Stevens Institute of Technology CHANGE SERVICE REQUESTED Contract Manufacturing for Medical Devices and Assemblies s s s s s s s s s s $ESIGN FOR -ANUFACTURABILITY $EDICATED 0ROTOTYPE 2ESOURCES 2OLLED 3EAMLESS AND $RAWN 4UBES -ETAL )NJECTION -OLDING 0LASTIC )NJECTION AND )NSERT -OLDING 0RECISION -ETAL 3TAMPING 0RECISION 3HARPENING ,ASER #UTTING 7ELDING AND -ARKING %LECTROPOLISHING!NODIZING #USTOM !SSEMBLIES )3/ s )3/ s )3/ To learn how Micro’s team can fulďŹ ll your speciďŹ c needs, please contact us at: sales@micro-co.com. MICRO 140 Belmont Drive, Somerset, NJ 08873 USA 4EL s &AX s WWWMICRO COCOM MSC-3058-R4 THE STEVENS INDICATOR SUMMER – FALL 2013 | http://issuu.com/stevensalumniassociation/docs/stevens_indicator-summer-fall_2013_ | CC-MAIN-2015-14 | en | refinedweb |
IRC log of tagmem on 2005-12-13
Timestamps are in UTC.
18:00:07 [RRSAgent]
RRSAgent has joined #tagmem
18:00:07 [RRSAgent]
logging to
18:00:19 [Zakim]
TAG_Weekly()12:30PM has now started
18:00:26 [Zakim]
+Norm
18:01:06 [Zakim]
+[IBMCambridge]
18:01:06 [Zakim]
+DanC
18:01:14 [noah]
zakim, [IBMCambridge] is me
18:01:14 [Zakim]
+noah; got it
18:01:29 [Zakim]
+Vincent
18:01:41 [noah]
zakim, who is here?
18:01:41 [Zakim]
On the phone I see Norm, noah, DanC, Vincent
18:01:42 [Zakim]
On IRC I see RRSAgent, noah, Vincent, ht, Zakim, timbl, Norm, DanC
18:01:54 [DanC]
Zakim, take up item 1
18:01:54 [Zakim]
agendum 1. "Administrative: roll call, next teleconference, agenda review, review of recors" taken up [from DanC]
18:02:04 [ht]
zakim, please call ht-781
18:02:04 [Zakim]
ok, ht; the call is being made
18:02:05 [Zakim]
+Ht
18:02:27 [DanC]
Zakim, who's on the phone?
18:02:27 [Zakim]
On the phone I see Norm, noah, DanC, Vincent, Ht
18:04:55 [DanC]
agenda + namespaceDocument-8 (maybe)
18:05:53 [DanC]
Regrets: RF
18:06:12 [Zakim]
+TimBL
18:06:40 [DanC]
at risk: Ed (hardware foo)
18:06:51 [DanC]
Scribe: DanC
18:06:57 [DanC]
Chair: VQ
18:07:03 [DanC]
PROPOSED: to meet next 20 Dec
18:07:20 [DanC]
RESOLVED: to meet next 20 Dec; NDW to scribe
18:07:27 [noah]
The 20th is OK for me.
18:07:42 [noah]
I'm unavailable on the 27th
18:07:49 [DanC]
PROPOSED: to cancel 27 Dec
18:07:58 [DanC]
RESOLVED: to cancel 27 Dec 2005
18:08:33 [DanC]
considering... meet 3 Jan 2006?
18:08:53 [noah]
i think i'm OK on the 3rd.
18:09:06 [Zakim]
+DOrchard
18:09:15 [DanC]
3 Dec looks likely (to be confirmed 20 Dec)
18:09:28 [dorchard]
dorchard has joined #tagmem
18:09:43 [DanC]
3 Jan 2006 looks likely (to be confirmed 20 Dec)
18:10:05 [DanC]
agenda?
18:10:50 [DanC]
agenda -4
18:11:16 [DanC]
namespaceDocument-8 for next week
18:12:00 [DanC]
DC: remind me who has the ball on self-describing docs?
18:12:08 [DanC]
NDW: HT and I. I have started something
18:12:43 [DanC]
DC: note speech grammar spec has something relevant
18:14:04 [DanC]
VQ: re ftf minutes... Ed offered to edit day 1, before his laptop went kerflewey...
18:14:33 [DanC]
... NM did part of day 2?
18:14:50 [DanC]
NM: yes; I'd particularly like review of the web service example stuff, as I had to reconstruct it from memory
18:15:27 [DanC]
TBL: yes, I'd like help with the Tue AM stuff, NM, thanks
18:15:58 [DanC]
VQ: so can we approve next week?
18:16:06 [DanC]
HT: I think so; I'm in a position to help Ed
18:16:19 [DanC]
Zakim, next item
18:16:19 [Zakim]
agendum 2. "Escaping the # mark in XQuery 1.0 and XPath 2.0 Functions and Operators" taken up [from DanC]
18:16:45 [DanC]
VQ: see question from Ashok of XSL/XQuery and #...
18:16:50 [DanC]
->
FW: Escaping the # mark
18:17:26 [DanC]
NDW: yes, I agree with Dan: the # should be escaped in encode-for-uri()
18:18:10 [DanC]
NDW: I'm inclined to link dan's msg to the XQuery bug entry, which should move things along
18:18:40 [DanC]
TBL: no argument the other way? no dissent?
18:18:45 [DanC]
NDW: no, just a bug fix.
18:18:49 [DanC]
Zakim, next item
18:18:49 [Zakim]
agendum 3. "issue NamespaceState-48" taken up [from DanC]
18:19:44 [DanC]
NDW: I'm a little surprised that you approved before I finished my actions, HT, but I have since completed them.
18:19:57 [DanC]
HT: I was mostly approving the good practice
18:20:01 [DanC]
q+ to ask about nsuri
18:21:07 [DanC]
NDW: recent changes are... * [missed] * things-change is the norm * [missed]
18:21:35 [dorchard]
you can never enter the same river twice...
18:21:48 [Norm]
"As a general rule, resources on the web can and do change. In the absence of an explicit statement, one cannot infer that a namespace is immutable."
18:21:50 [DanC]
[[ In the absence of an explicit statement, one cannot infer that a namespace is immutable. ]]
18:22:51 [Vincent]
ack DanC
18:22:52 [Zakim]
DanC, you wanted to ask about nsuri
18:23:42 [ht]
Suggest to replace "in the namespace" with "in the namespace named"
18:23:58 [Norm]
Proposed: The proposed definition of a new local name “id” in the namespace identified identified by the namespace name “”
(the xml: namespace) raised a question about the identity of a namespace.
18:24:20 [Norm]
Umh: The proposed definition of a new local name “id” in the namespace identified by the namespace name “”
(the xml: namespace) raised a question about the identity of a namespace.
18:26:16 [Norm]
q+
18:26:58 [ht]
q+ to discuss the 'abc' example
18:26:59 [timbl]
xml:abc
18:27:08 [dorchard]
q+ to discuss the abc example.
18:27:25 [Vincent]
ack Norm
18:28:00 [DanC]
[[Another perspective was that the xml: namespace consisted of all possible local names and that only a finite (but flexible) number of them are defined at any given point in time. ]]
18:29:01 [DanC]
(scribe missed a bunch... sorry...)
18:29:18 [timbl]
q+
18:29:31 [Vincent]
ack ht
18:29:31 [Zakim]
ht, you wanted to discuss the 'abc' example
18:29:33 [DanC]
NM: I see 3 positions: (a) namespaces have finite numbers of name and are immutable (b) [missed] (c) [missed; darn]
18:29:39 [ht]
HST would have preferred for the crucial sentence "Adding a defintiion for the local name "id" in the xml: namespace demonstrates . . ."
18:30:11 [noah]
[missed] = there are a finite number now, but tomorrow I as NS owner may tell you that there are more
18:30:18 [DanC]
DC: if namespace contain all the strings, then "Adding the local name “id” to the xml: namespace" is incoherent
18:30:26 [Norm]
Much better, thank you ht
18:30:31 [DanC]
yes, "adding a definition" is better.
18:31:28 [dorchard]
q?
18:31:41 [ht]
ack timbl
18:31:42 [DanC]
TBL: doesn't appeal to me. People speak of adding things to namespaces, and let's not say otherwise
18:32:25 [noah]
I think I'm hearing Tim take my position (b); the members of the namespace are at a given time only those that have been defined, but the set can change over time
18:32:40 [DanC]
... let's say "N is in ns I iff the owner of I has given N a definition"
18:32:41 [timbl]
It isn
18:32:56 [noah]
Dan: that sounds right to me, or certainly very close
18:33:26 [Vincent]
ack dorchard
18:33:26 [Zakim]
dorchard, you wanted to discuss the abc example.
18:33:54 [DanC]
(I don't care a whole lot which terminology we pick, but please let's pick.)
18:34:41 [DanC]
DO: this seems pretty abstract. Software doesn't change when these changes happen. [disagree!]
18:35:02 [timbl]
A namespace is a set of terms and their definitions.
18:35:02 [DanC]
DO: I can see either way...
18:35:39 [DanC]
NDW: speaking of definitions seems best...
18:35:58 [noah]
q= to talk about definitions
18:36:04 [noah]
q= to talk about definitions
18:36:13 [noah]
q+ to talk about definitions
18:36:15 [dorchard]
DO: this seems pretty abstract. If we pick the "add a definition to namespace" versus "add a name + definition to namespace", no software changes because of which option we pick.
18:36:23 [DanC]
DC: how about a gloss? ala: "people speaking of adding a name to a namespace; we prefer to speak of adding definitions..."
18:36:44 [noah]
q?
18:37:10 [DanC]
TBL: that's pushing water up-hill. It seems to me that a namespace is like a python dictionary: it's a mapping of terms to meanings/definitions/values
18:37:14 [timbl]
for term in { "sdf": gfooo, "sdf": bar }
18:37:47 [timbl]
q?
18:37:51 [Vincent]
ack noah
18:37:52 [Zakim]
noah, you wanted to talk about definitions
18:38:00 [DanC]
NDW: I think I can find a middle-ground, offline
18:38:41 [DanC]
NM: umm... "define"... that's one thing that we do, but take the example of a C program...
18:39:17 [timbl]
Nooah is very right here ... you can define a namespace as an infinite set
18:39:24 [DanC]
NM: perhaps "license certain uses" is more general than define
18:39:34 [Vincent]
ack DanC
18:39:34 [Zakim]
DanC, you wanted to noodle about "encourage use"; yeah...
18:39:34 [timbl]
... can be a function rather than a dictionary in python terms.
18:39:40 [timbl]
+1
18:40:46 [DanC]
some examples: all the prime numbers, all the lat/longs, all the HTML terms with _ appended
18:41:14 [timbl]
the sort of namespace any self-respectig self-describing programmer would declare twice before breakfast.
18:41:29 [noah]
My C language example was: let's make sure we don't have to individually define the terms in a NS. e.g. I could say my NS has in it all possible identifiers in any C program you can write.
18:41:30 [DanC]
ACTION NDW: revise namespaceState.html w.r.t. "in a namespace" and "define"
18:41:41 [noah]
I believe Tim's functional approach is a more formal way of getting at the same thing.
18:42:01 [DanC]
Zakim, next item
18:42:01 [Zakim]
I do not see any non-closed agenda items, DanC
18:42:16 [DanC]
Topic: Update on some issues
18:42:23 [DanC]
VQ: we didn't get to this at the ftf...
18:43:21 [DanC]
Topic: IRIEverywhere-27 status check
18:44:25 [DanC]
DC: I don't want to change its priority; I don't mind if we make progress on it, but I don't want it to preempt self-describing documents, versioning, etc.
18:45:06 [DanC]
HT: meanwhile, Bjoern seems to have made some very detailed points. We'll need a "microscope" when we get to this
18:45:17 [noah]
zakim, who is here?
18:45:17 [Zakim]
On the phone I see Norm, noah, DanC, Vincent, Ht, TimBL, DOrchard
18:45:18 [Zakim]
On IRC I see dorchard, RRSAgent, noah, Vincent, ht, Zakim, timbl, Norm, DanC
18:45:20 [DanC]
Topic: metadataInURI-31 status check
18:45:38 [DanC]
VQ: from Sep, action was on Roy and Noah...
18:46:21 [DanC]
Noah: much of what I said in Sep was "most of this was before my time" but somehow I ended up with the action
18:47:05 [DanC]
NM: I'm more swapped in on principle-of-least-power
18:48:01 [DanC]
NM: I'd need help from Roy... VQ: he's only around for another month...
18:48:32 [ht]
q+
18:48:40 [DanC]
ack ht
18:48:43 [Vincent]
ack ht
18:48:59 [noah]
Noah feels he doesn't have the context on all the work that happened on this before he joined the TAG.
18:49:01 [DanC]
HT: this issue has come up in xml-dev recently, indirectly...
18:49:17 [noah]
Maybe or maybe not I'm the right person to carry this forward, by myself or with help.
18:49:33 [DanC]
... somebody asked: is foo/bar any different from ?x=foo;y=bar , and various people said yes/no/maybe...
18:49:36 [noah]
At the very least, I'd appreciate email reminding me of what the progress to date has been and what remains to be done.
18:49:55 [timbl]
q+
18:49:58 [DanC]
... meanwhile, we have the case of the guy who got arrested for typing ../../ into his browser... does the use of foo/bar imply something about ../../ ?
18:50:16 [DanC]
... seems to raise some questions about opacity
18:50:36 [DanC]
(Jim Gettys wrote some good stuff on this... on relative URI refs; I think it got stored in /DesignIssues/ )
18:50:46 [timbl]
1. The existence of something with URI /a/b/c/d does not give you licence to conclude ANYTHING.
18:51:00 [Vincent]
ack timbl
18:51:03 [DanC]
... and there's this stuff with checksums in URIs, which seems to be a counter-point to [?]
18:51:42 [DanC]
TBL: The existence of something with URI /a/b/c/d does not give you licence to conclude ANYTHING.
18:51:47 [DanC]
HT: ppl seem to believe otherwise
18:52:18 [timbl]
2. He didn't get arrested for making a valid URI, he got arrested for doing something like
18:52:23 [DanC]
TBL: he didn't get arrested for just ../../ , but for using too many ..'s; that make an illegal URI
18:52:33 [timbl]
GET /a/.../.../../..
18:52:44 [timbl]
GET /a/.../.../../../etc/passwd
18:53:10 [ht]
zakim, please call ht-781
18:53:10 [Zakim]
ok, ht; the call is being made
18:53:29 [Zakim]
-Ht
18:53:32 [ht]
zakim, disconnect ht
18:53:32 [Zakim]
sorry, ht, I do not see a party named 'ht'
18:53:42 [ht]
zakim, please call ht-781
18:53:42 [Zakim]
ok, ht; the call is being made
18:53:44 [Zakim]
+Ht
18:53:44 [DanC]
Topic: Issue RDFinXHTML-35 status check
18:54:06 [DanC]
VQ: I don't know anything about this one at all
18:54:37 [DanC]
->
Storing Data in Documents: The Design History and Rationale for GRDDL
18:55:07 [ht]
is an interesting and well-thought-out design for a class of URIs which include checksums in the URI. . .
18:55:33 [ht]
ref. metadataInURI-31
18:56:01 [DanC]
DC: remains in my someday pile
18:56:21 [DanC]
Topic: Issue siteData-36 status check
18:57:29 [DanC]
->;list=www-tag
google sitemaps and some history of sitemaps [siteData-36] Jun 2005
18:57:48 [DanC]
rather...
18:58:44 [timbl]
ls-LR
18:59:49 [noah]
Dan says: "wonder whether Google considered using RDF for site maps". Now that we have GRDDL, might it be better to make the goal be: whatever format you choose should yield truly useful RDF when GRDDL'd.
19:00:09 [timbl]
You can submit a Sitemap to Google in a number of formats:
19:00:19 [DanC]
TimBL: remember ls-LR? you put it at the top of your ftp site if you didn't want archie to crawl it, and it made things faster
19:00:28 [timbl]
19:01:07 [DanC]
VQ: so it remains in the someday pile...
19:01:32 [DanC]
Topic: Issue rdfURIMeaning-39 status check
19:01:43 [DanC]
VQ: anything new since Sep/EDI?
19:03:51 [timbl]
Link rel=icon in Mozilla
19:04:03 [timbl]
Possibel design Link rel=meta foo.rdf
19:04:05 [DanC]
(back to siteData for a bit)
19:04:33 [timbl]
Link rel=sitedata /data.rdf
19:05:14 [DanC]
TBL seems to lament that nobody's working on siteData; DC suggest TBL wish into a blog
19:05:49 [DanC]
Topic: Issue rdfURIMeaning-39 status check
19:07:25 [DanC]
DC: seems nearby to self-describing documents, and to abstractcompnentrefs; where is component designators, these days?
19:07:41 [DanC]
HT: component designators is not a top priority in the WG these days
19:07:48 [DanC]
"Last Call Ends 26 April 2005"
19:07:58 [DanC]
--
19:08:18 [DanC]
HT: yes, DC's comments are still outstanding
19:09:08 [DanC]
Topic: Issue URIGoodPractice-40 status check
19:09:41 [DanC]
VQ: any news since Feb?
19:10:22 [DanC]
NM: RF was going to contact DO a while ago... did that happen?
19:10:24 [DanC]
DO: no
19:12:15 [DanC]
DC: this came up in WSDL recently; I dissented to the WSDL design that implies that the SPARQL interface URI ends in )
19:13:32 [DanC]
TBL: yes, the WSDL WG saw the desire to use foo#bar as just an RDF thing...
19:14:02 [DanC]
... and without, e.g. a TAG decision, there isn't anything that says flat namespaces and foo#bar is a good thing
19:14:54 [DanC]
TBL: I get the impression that the WSDL WG didn't mind the long URIs because they don't really use the URIs; they identify things in context using other syntaxes
19:15:13 [DanC]
... maybe we should say "give things URIs, and use it!"
19:16:29 [DanC]
DO: we were asked to make URIs for all these things, and we followed all the constraints that are established
19:16:49 [DanC]
s/we/the WSDL WG/
19:17:50 [noah]
q+ to say that you can't always expect people to use URI's internally
19:18:41 [noah]
q?
19:18:50 [Vincent]
ack noah
19:18:50 [Zakim]
noah, you wanted to say that you can't always expect people to use URI's internally
19:18:58 [ht]
q+ to draw the XML Schema ||
19:19:13 [DanC]
(the topic is more like: URIGoodPractice-40 and WSDL )
19:21:18 [Vincent]
ack ht
19:21:18 [Zakim]
ht, you wanted to draw the XML Schema ||
19:21:47 [DanC]
DO: the flat namespace option was one of the options brought to the TAG ages ago, and the TAG said the () design is fine
19:21:52 [timbl]
q+
19:21:57 [DanC]
TBL: really? I guess we blew it
19:22:51 [DanC]
HT: in RDF, there's one big domain, so it's natural to have one flat namespace. In other domains, there's no basis for saying "you must use a flat namespace" because their space isn't flat
19:23:18 [noah]
Henry repeats my example of elements and attributes in XML, which in turn leads to symbol spaces in schema.
19:23:57 [noah]
I think that many programming languages have parallels: for example, in Java, we do not insist that class names and member names be distinct
19:24:03 [Vincent]
ack timbl
19:24:12 [DanC]
TBL: the RDF space isn't flat either; there's all sorts of structure to the classes in RDF, but RDF accepts the flat namespace constraint
19:24:21 [ht]
q+ to note that Python has a package system!
19:25:02 [DanC]
(XML and python are both in the web. and URIs have all sorts of hierarchy like python's package systems)
19:25:19 [ht]
q-
19:25:29 [ht]
q+ to say Noah and I said XML, not XML Schema!
19:25:35 [Vincent]
ack ht
19:25:35 [Zakim]
ht, you wanted to say Noah and I said XML, not XML Schema!
19:26:04 [DanC]
TBL: the multiple-symbols-space aspect of the XML Schema design is really sub-optimal
19:26:25 [DanC]
yes, that was a bug.
19:26:48 [ht]
The _only_ think we ever discussed was saying you couldn't name a type with the same name as an element
19:27:04 [ht]
We _never_ considered not allowing you to name elements and attributes with the same local name
19:27:54 [DanC]
right, but we discussed schema languages that just had one flat namespace per schema; if you wanted a element and attribute with the same name, only one of them would get a #foo name
19:27:57 [ht]
q+ to agree with Tim about the origin of all this
19:28:29 [Vincent]
ack ht
19:28:29 [Zakim]
ht, you wanted to agree with Tim about the origin of all this
19:29:05 [DanC]
HT: yes, it's the contextualized names/references that is the root of this stuff
19:30:09 [DanC]
VQ: lacking near-term actions...
19:30:54 [DanC]
HT: I'm very interested in this design space, and I intend to write, in some context, something on the value of multiple symbol spaces
19:31:26 [DanC]
(tim, I think the issues are only connected if you take the "flat namespaces are good" position)
19:31:33 [DanC]
(Which I do)
19:31:44 [Zakim]
-DOrchard
19:31:45 [DanC]
ADJOURN.
19:31:49 [Zakim]
-noah
19:31:51 [Zakim]
-Ht
19:31:51 [Zakim]
-Vincent
19:31:52 [Zakim]
-Norm
19:31:57 [Zakim]
-DanC
19:32:00 [Zakim]
-TimBL
19:32:01 [Zakim]
TAG_Weekly()12:30PM has ended
19:32:02 [Zakim]
Attendees were Norm, DanC, noah, Vincent, Ht, TimBL, DOrchard
20:58:44 [Norm]
Norm has joined #tagmem
21:33:46 [Zakim]
Zakim has left #tagmem | http://www.w3.org/2005/12/13-tagmem-irc | CC-MAIN-2015-14 | en | refinedweb |
argcomplete 0.8.0
Bash tab completion for argparse
Tab complete all the things!
Argcomplete provides easy, extensible command line tab completion of arguments for your Python script.
It makes two assumptions:
- You’re using bash or zsh).
Installation
pip install argcomplete activate-global-python-argcomplete
See Activating global completion below for details about the second step (or if it reports an error).
Refresh your bash environment (start a new shell or source /etc/profile).
Synopsis
Python code (e.g. my-awesome-script.py):
#!/usr/bin/env python # PYTHON_ARGCOMPLETE_OK import argcomplete, argparse parser = argparse.ArgumentParser() ... argcomplete.autocomplete(parser) args = parser.parse_args() ...
Shellcode (only necessary if global completion is not activated - see("{m}".format(m=args.member)).json())
./describe_github_user.py --organization heroku --member <TAB>
If you have a useful completer to add to the completer library, send a pull request!()
Printing warnings in completers
Normal stdout/stderr output is suspended when argcomplete runs. Sometimes, though, when the user presses <TAB>, it’s appropriate to print information about why completions generation failed. To do this, use warn:
from argcomplete import warn def AwesomeWebServiceCompleter(prefix, **kwargs): if login_failed: warn("Please log in to Awesome Web Service to use autocompletion") return completions
Using a custom completion validator.
Bash version compatibility.
Debugging
Inspired and informed by the optcomplete module by Martin Blais.
Links
License
Licensed under the terms of the Apache License, Version 2.0.
- Downloads (All Versions):
- 1262 downloads in the last day
- 9051 downloads in the last week
- 28151 downloads in the last month
- Author: Andrey Kislyuk
- License: Apache Software License
- Platform: MacOS X,Posix
- Categories
- Development Status :: 5 - Production/Stable
- Environment :: Console
-: argcomplete-0.8.0.xml | https://pypi.python.org/pypi/argcomplete/0.8.0 | CC-MAIN-2015-14 | en | refinedweb |
Microsoft Unveils Open Source Exploit Finder 310
Houston 2600 sends this excerpt from the Register about an open-source security assessment tool Microsoft presented at."
Open Source?! Wait for it... (Score:3, Funny)
'hellfrozeover' tag in 3... 2... 1...
Re: (Score:3, Insightful)
Definitely not.
Microsoft doesn't have anything about open source actually. They're perfectly fine with the BSD for instance, which they can incorporate in their products. They're also fine with their own "shared source" deal, which goes from "non commercial" to "you can only look at it".
What MS really despises is the GPL. They can't use it, and can't buy the source out in many cases. Of course they could technically use it, but they could apply the "embrace and extend" tactics, and would have to give out an
really? (Score:3, Informative)
Are you sure, Coward? [opensource.org]
Or you say it won't be released under ms-pl?
Re: (Score:2, Funny)
Are you sure, Coward?
Please, no need for the formality. You can call me Anonymous...
Re: (Score:2, Interesting)
Wrong? Maybe... Note that MS-PL is not compatible with GNU GPL. That may have been just a coincidence from other requirements they had, but it may also have been #1 requirement for all MS-* licenses.
As far as I can tell MS-PL is exactly like BSD license, except it has a clause that makes it GPL-incompatible. MS-RL is very much like GPL plus a clause that makes it GPL-incompatible. I notice a trend here and it fits parents comment quite well.
Note that I'm not saying everything needs to be GPL-incompatible, I
Re:Open Source?! Wait for it... (Score:4, Interesting)
So what? The viral GPL license is not the only one that makes your software free.
What you say is factually correct yet it misses the point entirely. I like benefit of doubt so I'll assume that you were not being deliberately obtuse. If Microsoft really wanted to release source in a way that is useful for the community, then they would be compatible with the GPL or would simply use the unmodified GPL. They know very well that the vast majority of Free Software, especially that which is available for Unix-like operating systems, is GPL.
So a developer who maintains GPL software has two choices regarding the code that Microsoft releases. The first choice is to ignore it and avoid using it, because I would certainly expect Microsoft to vigorously pursue anyone who violates their license. The second choice is to abandon the GPL and release the software under the Microsoft license so that Microsoft's code could be incorporated into the project. This has two benefits for Microsoft. At the very least, they can talk a good game about how "open" they are becoming while actually doing very little for the community. At the most, they can tempt people to stop using the GPL.
The GPL and Free Software in general is perhaps Microsoft's first experience with a potential competitor that they cannot buy out and cannot embrace-and-extend, so their huge resources and preferred tactics are rendered useless. Either they just give up or they realize that they cannot use the "direct approach". I would not expect them to just give up. The saying that comes to mind is "if you get into bed with Microsoft, you're going to get fucked." Anyone who really believes that Microsoft has had a change of heart and is now a trustworthy ally of Free Software is frankly rather naive. You're dealing with an entity that became so dominant in its industry by means of shrewd business decisions and Machiavellian strategy. I would expect a close-source software company with even half of their willingness and ability to dominate to see Free Software as an implacable enemy that requires new tactics. If anyone believes it could possibly be otherwise, the evidence against you is strong but I'd like to know why you feel that way.
Re:Open Source?! Wait for it... (Score:5, Insightful)
If Microsoft really wanted to release source in a way that is useful for the community, then they would be compatible with the GPL or would simply use the unmodified GPL.
Oh bullshit. Something doesn't have to be GPL to be useful for the community - take FreeBSD for instance. Demons, GPL zealots are as bad as Apple zealots!
Re: (Score:3, Insightful)
If you believe that recognizing the strategic aspects of Microsoft's business decisions makes one a zealot, then you are fortunate. You are fortunate because you have never seen a real zealot.
The same thing that would happen if a Free Software developer were found using Microsoft's non-GPL code in their GPL software: a legal problem. The incompatibility of the licenses is mutual
Re: (Score:3, Insightful)
demonizing me and calling me "zealot" and other names because I dared to make observations and support them with reason
Sorry, your long winded response isn't going to convince me otherwise. The article and summary simply stated that Microsoft had released open-source software, which they did. You're an evangelist of a particular open source license that has all sorts of religion behind it, preaching down other licenses that don't align themselves with your principles. To say that nobody will find this useful is ridiculous. Sure, your "community" might not have any use for it. What is it with your community and their sense of entitlement?
Eh let's make one observation that should be fairly obvious: if not for the success of Open Source software under the GNU Public License, of which the most prevalent expression is the GNU/Linux operating system and its associated applications, then Microsoft would not now show any interest in publically releasing any code of theirs. As much as they talk of innovation, and as many new things as they have genuinely innovated, Microsoft is just following someone else's lead on this one.
So, Microsoft sees
Re: (Score:3, Insightful)
That's quite trivial, though "holy" is your word, not mine. You just can't get over the fact that someone can appreciate freedom, including software freedom, without being a zealot and so you feel the need to insert words that I clearly never used. Feel free to perform a text search on this thread if you don't believe me; you won't find me calling it "holy" anywhere, nor
Re: (Score:3, Insightful)
So Microsoft can't use GPL code, and you're totally cool with that.
What an asinine assertion! Of course MS can use GPLed code, just like anyone else can. They just have to abide by the terms of the license... you know, just like anyone else.
Re:Libre? (Score:5, Informative)
It's released under the Ms-PL, which is OSI-approved.
Re: (Score:2)
The proper way to say it is "it's not open source compatible (gpl/others)", and even OSI knows that.
Just because its close in name, doesn't mean it's still not as proprietary as possible.
This is like putting an open source bumper sticker on a car and saying it's open source.
Re:Libre? (Score:5, Informative)
Re:Libre? (Score:4, Interesting)
The GPL maximises protection against software patents and forbids distribution as proprietary-only software. The Ms-PL minimizes protection against software patents and forbids distribution as libre-only software. The Ms-PL formally fulfills the requirements for an OSI approval but apart from that it is everything what you would expect a license from Microsoft to be. To understand the Ms-PL just imagine the Venn diagram for the following equation: MsPL = ( OSI - GPL ) & Microsoft
Re: (Score:2)
In other words, the GP is right.
Re: (Score:3, Insightful)
The definition of Open Source compatible is not: a license which can be used interchangeably with any other Open Source license.Some licenses are compatible with each other and others aren't. It is called freed
Re: (Score:3, Insightful)
The GPL license is just about protecting individuals who want to develop and use software in freedom. It's up to you to take advantage of this protection or not
The best protection is public domain. Retaining ownership to force an ideological end is silly. The GPL was born out of emacs getting "ripped off" by other people... but did that stop emacs at all? Nope, we're still stuck with it, even though everyone knows vi is better....
Re:Libre? (Score:4, Insightful)
You mean, "It's from Microsoft! It must not be labeled as open source, even if it is!"
If you aren't saying this, then maybe you can say in what aspect the license doesn't meet the Open Source Definition [opensource.org]
.
Re:Libre? (Score:5, Informative)
Is that the license OSI approved which got a lot of flak because it says the source can only be run on windows or did they remove that use clause from their OSI licenses?
No. Those are the MS-LPL and MS-LRL licenses. The MS-PL license is fairly innocuous excepting the patent clause which is debatable. It allows the distribution of the source under this license and distribution of binaries for commercial use with a different license.
Re: (Score:3, Informative)
Or is that a senseless question anyway since it runs under Windows?
SVN runs under Windows. GCC runs under Windows. Gimp runs under Windows. Apache runs under Windows. Hell, just about any project with a configure script will either compile for Windows as-is, or will after slight modifications. FOSS has nothing to do with whether it runs under Windows or not.
Re: (Score:3, Insightful)
Also, your suicide joke wasn't funny..
Re:This is M$ double speak for "Finding Free Sofwa (Score:4, Insightful)
To a passionate free software advocate, M$ is a concise, efficient and - IMO - accurate moniker.
It's also meaningless, since every business is out for dollars. You might as well say $un too, and same goes for any business with an "s" in its name.
Re: (Score:2)
While an argument shouldn't be cast aside just because someone uses M$, I don't agree that it is "a concise, efficient and - IMO - accurate moniker". It's really just an irrelevant and off-topic device unless the conversation is specifically about cost of software.
It would be like constantly referring to RMS as "The Great Unwashed Guru" in a discussion that had nothing to do with personal hygiene or delusions of Godhood.
Re: (Score:3, Insightful)
While an argument shouldn't be cast aside just because someone uses M$, I don't agree that it is "a concise, efficient and - IMO - accurate moniker".
You don't agree that text in bold is HIS opinion? I don't agree with your disagreement
:P.
I beg to differ. If you're so puerile to have the need to use "M$ Winbloze" or "open sores software" in a rational discussion, it seems as if you're trying to sidestep the issue with colorful language. Call things by their name and focus on arguments rather than taking trite potshots.
As for identifying corporations by their stock ticker symbols, it allows to easily differentiate between corporations who would have otherwise similar names(for example, an article talking about the Royal Bank could refer to both RY and RBS) and to look them up quickly and unambiguously.
Re: (Score:2)
I don't generally use "M$" but I wanted to tell you how I see it. I see it as a way to separate the petty members of the audience who cannot overlook a small and harmless "transgression" (even that word is too strong for it) from those who are less superficial. I prefer to directly deal with wrong responses so this does not tempt me, but this is something that I wish more people understood. If
Re:This is M$ double speak for "Finding Free Sofwa (Score:2)
Re:This is M$ double speak for "Finding Free Sofwa (Score:5, Insightful)
yeah, FOSS exploits are cuddlier
But strange that in the 20 years I've been using Microsoft OSes, I've never had a virus or trojan or malware. I must be doing something wrong.
Re: (Score:2)
Re: (Score:2)
If you don't connect your computer to the net it does not count
:)
Alternatively, it's a bit like a poker game, if you don't know who the idiot is, it's you. In other words, the chances are big that you were at some point virused, trojanned or malwared but you did not detect it.
When adaware first came out I ran it on the machines of some friends and it was quite surprising how much crap there was on these so-called clean machines.
Probably you install very little software on your machines, that alone would be
Re:This is M$ double speak for "Finding Free Sofwa (Score:4, Insightful)
Every time I hear anyone using any system say, "I've never had a virus or trojan or malware," I always think, "there is a guy who doesn't know how to detect malware on his machine." And it's usually true.
I'm not saying you don't know how, but you said a genuinely stupid thing right there. It's possible that right now you're computer has been rooted, covered up, and you don't even know it. Because Microsoft sure wasn't protecting you for the last 20 years.
auto-hack or brute force? (Score:5, Insightful)
Does this bombard all exposed functions with garbage data and look for overflows, or does it actually comb source code, look for off-by-one bugs and try to outwit the code by using boundary conditions? It's nice for Kaminsky to praise his pimps, but how does this tool really differ from any of the other leak-detectors and bug-finding tools that already exist?
Re:auto-hack or brute force? (Score:5, Informative)
Re: (Score:2, Informative)
The article mentions it does fuzz testing, so it'd be the former.
Actually, the article says it's used during fuzz testing, not that it does fuzz testing.
It sounds more like an automated crash dump analyzer used after a fuzzer has caused the program to crash.
AFAICT, Neither (Score:3, Informative)
Re: (Score:2)
Am I the only one that thinks it's ridiculous we still have programs crash? It's 2009, why are we still programming in C? It's certainly possible to have the same speed and low level expressiveness and include assurances against crashes and buffer overflows.
Re: (Score:2)
That's why I am very happy to completely steer around C/C++. I never liked its messy syntax anyway. ^^
I used Pascal, Java, and now Haskell. And in 20 years of experience, I never have seen such an impressive beast of a compiler as the GHC (Glasgow Haskell Compiler).
Sure, you can fuck up things in Haskell too. But you have fuck up explicitely. By doing something very stupid. Not by not doing tons of checks right and left.
I also found the tradeoff of slowness for stability in Java, a good thing. But Haskell s
Re: (Score:2)
Yeah, there isn't really an alternative to C for low level things, which is what bothers me. It seems like an alternative language is the obvious solution to huge classes of security problems.
ATS [ats-lang.org] looks interesting, they even have a paper on writing linux device drivers in ATS. Maybe the alternative will turn out to be ATS [ats-lang.org], or maybe BitC [bitc-lang.org], but it needs to hurry up and people need to start abandoning C/C++.
Re:auto-hack or brute force? (Score:5, Informative)
Sup Goth, this *is* Dan.
!exploitable isn't about finding bugs -- it's not a fuzzer, it's not a static analyzer, etc. It's about looking at a crash and saying, "Heh, this isn't just a Null Pointer Deref, you got EIP." Sure, that's obviously exploitable to you, but to some junior tester, that's not obvious at all.
That's why it's a game changer. The dev writing the buggy code can't just say, meh, prove it's exploitable. Now the tester can point out the output of !exploitable and say, prove Microsoft is wrong. Shifts the burden of proof in the exact direction you'd want.
I'm feeling quite dizzy... (Score:4, Funny)
Microsoft has released an open source product that detects security flaws in code... my irony detector just exploded.
:)
Re: (Score:2, Funny)
Things that make you go hmmm... (Score:5, Funny)
Could Microsoft be purposely trying to confuse people and associate the terms "open source" and exploits?
Direct link to explanation (Score:5, Informative)
There's a presentation that explains how it works: [microsoft.com]
Re: (Score:3, Insightful)
Naturally, that's an OOXML file that OpenOffice doesn't quite display properly. Outline view seems to be the best.
It's nice to see... (Score:3, Funny)
Microsoft releasing their internal tools finally. I myself am waiting for their '!MakePortedAppsSuck' and '!CrushAllResistance' apps with baited breath...
Re:It's nice to see... (Score:4, Funny)
with baited breath...
Speaking of Microsoft and security, I think you've picked up a worm.
Re: (Score:3, Funny)
pronounced 'bang exploitable crash analyzer' (Score:2, Funny)
interesting excerpt from bang source code (Score:5, Funny)
int assess_severity( struct* bug )
{
string vendor = get_application_vendor( bug );
if ((vendor == "Google") ||
(vendor == "Adobe") ||
(vendor == "Mozilla"))
return MAJOR_RISK_UNINSTALL_IMMEDIATELY;
else if (vendor == "Microsoft")
return TRIVIAL_SECURITY_RISK;
else
return MODERATE_SECURITY_RISK;
}
There's already proof that this can't work (Score:2)
Re:There's already proof that this can't work (Score:5, Informative)
Re: (Score:3, Insightful)
Re:There's already proof that this can't work (Score:5, Funny)
Exactly. That's why I'm also against railroad crossing gates, smoke detectors, and those silly "Bridge Out" warning signs.
Re: (Score:3, Insightful)
Has anybody every told you "'Perfect' is the enemy of 'good enough'."? Perhaps after listening to you explain why your project is behind schedule, then sighing and face-palming?
The halting problem says that there cannot be a GENERAL ALGORITHM that works in all cases, for any of the infinity of possible programs that can exist.
That proves ZERO about, say, whether I can write an algorithm that covers 99% of the common cases. The lack of a general solution doesn't imply that it can't be done often enough, in
Re: (Score:2)
What part of the word "common" are you unable to comprehend?
Re: (Score:2)
Re: (Score:2)
Since you're mentally challenged I'll spell this out for you. The set of common cases is not infinite, especially since the creator of the algorithm gets to define "common".
Did you flunk the part of automata where they explained that not all sets are infinite and there exist such a thing as a finite subset of an infinite set?
Re: (Score:2)
Re: (Score:2)
You incorrectly assume that "an infinite number of different programs" and "all possible programs" are the same set. They are not.
Turing's proof shows that no algorithm can solve the Halting Problem for *all possible* programs. But there ARE proven algorithms that solve the the Halting Problem for certain classes of programs, that is, subsets of "all possible" programs.
Many of those subsets (all the interesting ones, really) containg an infinite number of possible programs. Not *all* possible programs, m
Re: (Score:2)
The halting problem is solvable in the general case if you restrict the inputs to finite programs running on finite inputs (any such program can be represented by a DFA, and then you just have a graph colouring algorithm to find non-terminating states). Although this is a tiny subset of the infinite number of possible programs, it does include all programs that can run on computers that will fit inside the universe, which is a sufficiently large set for most uses (of course, in some cases, you will need a
Re: (Score:2)
No, all it states is that it cannot prove the program is bug free. It can, however, keep running and finding as many bugs as possible.
If you get to a stage where you don't find bugs after a long enough period of time, you've probably reached the limits of that particular testing method's ability to provide any useful data about the application. That or the bugs are now awkward to find and probably won't be found by the majority of user input either.
On the halting problem basis, users will never find every b
Re: (Score:2)
A bounty for first exploit of !exploit (Score:2)
!static code analyzer (Score:2)
I would be more impressed if they released a free and open static code analyzer to include for their compilers that may also compile to native code (e.g. Visual C++).
That said, I'll be nice and applaud this effort. But if anywhere possible, use managed code (scripting or a secure VM) instead of relying on this kind of analysis. With this rate, it will take centuries to get rid of all the buffer overflows and other rather inexcusable code out there. I would be very amazed if this tool would (help to) remove
Re: (Score:2)
It's not "free and open" but do you mean a source code analyzer like this one [microsoft.com] which is available in Visual Studio 2005?
Open Source Exploit Finder? (Score:2)
So...let me get this straight...they're open sourcing their Windows code base?
I'm here all week. The veal is amazing!
windbg needs PDB so app must compile in MSVS (Score:5, Informative)
Since.
bang exploitable or unexploitable? (Score:2)
Not that this is important, but was it really pronounced "bang exploitable" when it started its life? It sounds to me like some top brass (or a journalist) wanted to show off that they know how "!" was pronounced in old UNIX speak, but without a real understanding of what it meant. You know, as in, "I am one of you, but I have no idea what the hell I am talking about".
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
In other words, irrelevant bullshit but it's their stuff so they get to pick the name.
Wow. Awesome headline. (Score:2)
Did anyone else misread this (before reading the summary) as Microsoft is working on an automated program to find *security exploits in open-source projects*?
Man, I had to readjust my tinfoil hat for a second there.
--
Toro
A related interesting project (Score:2)
Here is the code (Score:3, Funny)
#include <stdlib.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
#ifdef WIN32
fprintf(stderr, "Your system is not secure\n");
#else
fprintf(stderr, "Your system is not popular enough to be targetted, therefore it is secure\n");
#endif
return 0;
}
Re:Bang exploitable (Score:5, Informative)
"bang" is ancient history. [wikipedia.org] [catb.org]
Re:Bang exploitable (Score:5, Funny)
Every time they see "!=" they interpret is as "bang equals". That sounds like definitely equals, doesn't it? Like, dude, those are so equal it's not even funny, equal.
No wonder they have all those buffer overflow exploits. Their logic checks that include the not modifier are all wrong.
Rules of Open Source club (Score:5, Funny)
1. Fork the project
2. Change the name
Re: (Score:2, Funny)
Bang Exploitable Crash Analyzer, programmed in C Pound Point Net.
Re: (Score:2)
Re: (Score:2)
Here's a better idea... Fix all the bugs and then you're sure you've fixed all the big bugs.
Well, that's a nice idea, but it takes a finite nonzero amount of time to do so.
You both make good points. MS's security culture is fairly awful in that when developers find bugs that are potential security issues, they have to fight the system to get them prioritized for fixes and most are considered "low risk" and ignored. Anything that helps prioritize bug fixes is good, provided it is not used a an automated way to ignore a huge number of bugs in an effort to produce a mediocre and "good enough" product in terms of security.
Re:THOUSANDS OF BUGS? (Score:5, Insightful)
How large of a programming team do you work with? And how big are the projects to which you contribute code? And what kind of development model do you use (waterfall, Agile, ad-hoc, etc.)?
Shipping a large project with 1,000 bugs might be a perfectly valid decision. Are any of those 1,000 bugs deal-breakers for your install base? If so, how many clients does it affect? Are these "real bugs", or just incomplete/unpolished functions, or documentation issues, or output typos, or what?
And what kind of software is this? Are you building a time & expense web application, or a filesystem driver? In the former case, most bugs will be interface glitches--ugly, annoying, and harmless. In the latter case, even one bug could easily cause silent data corruption..
Re: (Score:3, Interesting)
Shipping a large project with 1,000 bugs might be a perfectly valid decision
Why don't we just change that to Shipping a large project with 1,000 bugs might be a perfectly valid business decision
Re: (Score:2)
While I agree that people could do better, your overall attitude of EVERY BUG MUST GO BEFORE WE RELEASE is probably why you have to say "if I had a big project" rather than "the big project I'm on now..."
"Software should work out of the box. You shouldn't have to wait for an update or two for it to become stable enough to use."
Agreed, we're not talking about bugs that prevent use of the software here. Your inability to distinguish possibly hinders you professionally.
Re: (Score:3, Insightful)
Not all software is a product for sale, and in the real world there are deadlines and budgets. Users can deal with bugs, business owners can't deal with late, over-budget projects.
Re: (Score:3, Interesting)'s a fair number of vendors that play that game doesn't mean it's the rule.
There's a balance, there are also those people that think that perfect software can be created in some kind of bubble and you might be one of them, I think. In a large project I can assure, with 100% certainty, that between the start of the project and the final release the requirements have changed. A lot. It does not matter if you design up a perfect software development method, not that I think such a thing exists, because people are very poor at specifying in an abstract specification what it is they wa
Re: (Score:2)
You don't ship *anything*.
Also, !exploitable can check for bugs in beta software. And it can check for bugs in internal builds. You do *not* need to have released to get bug reports on major projects -- testers, fellow developers, and even yourself can run into bugs to investigate later.
Firefox 3.5 is supposed to have fixed over 1000 bugs so far in its release cycle, and that was supposed to be a short-cycle release -- and there are still bugs that are WONTFIX or even still active from years and years ago
Re: (Score:2).
I forgot to address this. Yes, early adopters and capturing your market are important. I can see where "version 1" could be considered beta for the purposes of getting your foot in the door. I don't think anyo
Re: (Score:2)
Who said we were talking about MS and Windows? You just brought that up, right now. I don't think it proves anything, one way or another, that one company has a crappy process.
Honestly, it seems like you just tried to "move the goalposts", redefining the terms of an argument you were losing so you can feel like you're winning.
That's lame, and I'm calling you out on it.
Re: (Score:2)
Thousands of bugs? They must have tested it against their office suite
:)
But seriously, Microsoft must have loads of legacy code lying around, so thousands of bugs are to be expected. Office just happens one of them (and the number of Word related crashes on my office computer is just about hopeless).
Re: (Score:3, Informative)
This is Dan.
OK, my DNS bug took two days to find, and six months to fix. I'm not sure what universe you're in; in mine, we have to actually test.
Re: (Score:3, Informative)
Why do you believe that Microsoft doesn't run it on their own code?
Remember that !exploitable is a debugger extension that is used on a crash dump to determine if it's possible that the crash was caused by an exploitable bug. It's not a source code analyzer - it's purely a post-mortem analysis tool.
From the paper I would expect that Microsoft routinely runs this tool over crashes, especially over the crashes that are found by its internal fuzzing tests (the paper says that they ran over 350 Million fuzzing
Re:Enough problems of their own (Score:5, Insightful)
So, why doesn't Microsoft produce these tools for Windows, so the mass populace can help identify, log steps to reproduce, and report the exploits? Why are they using their resources to create tools for testing open source software for exploits? It is so they can give windows fanbois tools to create yet more anti-Linux and anti-F/OSS FUD, pure and simple.
Are you retarded? This tool isn't a "find exploits in open source software tool." It's an open source "find exploits in software tool". So Microsoft has an internal tool that they've developed to search for exploits in their software like Windows and Office, but they decided to open source that tool and share it with everyone else. It has nothing to do with Windows versus Linux.
As far as your ridiculous rant regarding Windows and programs running as Administrator, if you actually looked at the most recent versions of Windows, the number of system services that run under NETWORK SERVICE and other less privileged accounts has been increased, and with UAC, running users as non-admin is actually feasible. I don't know if you'd ever tried running as non-admin under XP, but the idea of logging out and logging back in to make a change, or hoping to hell that runas will actually work, just makes no sense. In addition, their work on Protected Mode where IE runs in a sandbox is another example of MS working to implement the least privilege principle.
Microsoft has made *considerable* progress on the non-admin front, and continues to work on that.
Oh, and whoever modded you up for this nonsensical misinterpretation of the tool needs a meta-mod down.
Re: (Score:2)
I wish i could mod you up.. i'm not sure what high horse the OP was on, but i'd like some of what he is smoking!
Re: (Score:2)
MODS: how is this flamebait?
It can validly be considered flamebait because it starts with, "Are you retarded?" This is unfortunate because, it is factual and corrects the misconceptions of a highly modded post that is, well a little retarded. That's a harsh way to phrase it as well as offensive. In truth the original poster was not retarded, just uninformed and "ranty".
WinDbg (Score:2)
So, why doesn't Microsoft produce these tools for Windows
The tool in question is a debugger extension for WinDbg. I'm not sure how many people are debugging their Unix/Linux applications with WinDbg, but I'm guessing it's not a large number.
Mod down please (Score:2)
Could somebody please mod this clown down? He couldn't be more wrong.
Or, in short:
So, why doesn't Microsoft produce these tools for Windows, so the mass populace can help identify, log steps to reproduce, and report the exploits?
This tool is for Windows you dumbshit.
Re: (Score:2)
Microsoft Unveils Open Source Exploit Finder? (Score:3, Funny)
What! You mean they Open Sourced Windows!??! | http://it.slashdot.org/story/09/03/22/147202/microsoft-unveils-open-source-exploit-finder | CC-MAIN-2015-14 | en | refinedweb |
The idea is to encode a unit-length quaternion in a manner analogous to cube mapping, but with one extra dimension and taking advantage of the property that a sign flip doesn't change which rotation is being represented.
One can think of cube mapping as consisting of an encoding of points in a sphere by first indicating which coordinate has largest absolute value and what sign it has (i.e., which of the 6 faces of the axis-aligned cube the point projects to) and then the remaining coordinates divided by the largest one (i.e., what point of the face the point projects to).
In our case, we don't need to encode the sign of the largest component, so we only need to use 2 bits to encode what the largest component is, and we can use the remaining bits to encode the other three components.
I think 32 bits is probably good enough for animation data in a game, and it's convenient that 30 is a multiple of 3, so it's easy to encode the other components. Actually, even if we didn't have that convenience, it wouldn't be a big deal to use a resolution that is not a power of 2, but some integer divisions would be involved in the unpacking code.
Here's the code, together with a main program that generates random rotations and measures how bad the dot product between the original and the packed and unpacked gets (the dot product seems to be > 0.999993, although I haven't made a theorem out of it):
#include <iostream> #include <cstdlib> #include <boost/math/quaternion.hpp> typedef boost::math::quaternion<double> quaternion; int double_to_int(double x) { return static_cast<int>(std::floor(0.5 * (x + 1.0) * 1023.0 + 0.5)); } double int_to_double(int x) { return (x - 512) * (1.0 / 1023.0) * 2.0; } struct PackedQuaternion { // 2 bits to indicate which component was largest // 10 bits for each of the other components unsigned u; PackedQuaternion(quaternion q) { int largest_index = 0; double largest_component = q.R_component_1(); if (std::abs(q.R_component_2()) > std::abs(largest_component)) { largest_index = 1; largest_component = q.R_component_2(); } if (std::abs(q.R_component_3()) > std::abs(largest_component)) { largest_index = 2; largest_component = q.R_component_3(); } if (std::abs(q.R_component_4()) > std::abs(largest_component)) { largest_index = 3; largest_component = q.R_component_4(); } q *= 1.0 / largest_component; int a = double_to_int(q.R_component_1()); int b = double_to_int(q.R_component_2()); int c = double_to_int(q.R_component_3()); int d = double_to_int(q.R_component_4()); u = largest_index; if (largest_index != 0) u = (u << 10) + a; if (largest_index != 1) u = (u << 10) + b; if (largest_index != 2) u = (u << 10) + c; if (largest_index != 3) u = (u << 10) + d; } quaternion get() const { int largest_index = u >> 30; double x = int_to_double((u >> 20) & 1023); double y = int_to_double((u >> 10) & 1023); double z = int_to_double(u & 1023); quaternion result; switch (largest_index) { case 0: result = quaternion(1.0, x, y, z); break; case 1: result = quaternion(x, 1.0, y, z); break; case 2: result = quaternion(x, y, 1.0, z); break; case 3: result = quaternion(x, y, z, 1.0); break; } return result * (1.0 / abs(result)); } }; double rand_U_0_1() { return std::rand() / (RAND_MAX + 1.0); } quaternion random_rotation() { quaternion result; do { result = quaternion(rand_U_0_1()*2.0-1.0, rand_U_0_1()*2.0-1.0, rand_U_0_1()*2.0-1.0, rand_U_0_1()*2.0-1.0); } while (norm(result) > 1.0); return result*(1.0/abs(result)); } double dot_product(quaternion q, quaternion p) { return q.R_component_1() * p.R_component_1() + q.R_component_2() * p.R_component_2() + q.R_component_3() * p.R_component_3() + q.R_component_4() * p.R_component_4(); } int main() { double worst_dot_product = 1.0; for (int i=0; i<1000000000; ++i) { quaternion q = random_rotation(); PackedQuaternion pq(q); quaternion p = pq.get(); if (dot_product(p,q) < 0) p *= -1.0; if (dot_product(p,q) < worst_dot_product) { worst_dot_product = dot_product(p,q); std::cout << i << ' ' << q << ' ' << p << ' ' << worst_dot_product << '\n'; } } }
Any comments are welcome, and feel free to use the idea or the code if you find them useful.
Edited by alvaro, 05 July 2012 - 09:01 AM. | http://www.gamedev.net/topic/627484-packing-a-3d-rotation-into-32-bits/ | CC-MAIN-2015-14 | en | refinedweb |
Copyright ©2003 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
This document contains a list of requirements and desiderata for version 1.1 of XML Schema. its publication does not imply endorsement by the W3C membership..
The requirements included in this document were culled from public and member-only messages sent to the XML Schema Working Group. Each message was acknowledged. The requirements were then discussed by the Working Group and accepted requirements were classified into the three categories described in 1 Introduction.
It is the intention of the Working Group that this requirement gathering and classification should continue and that updated versions of this document should be published from time to time. This is not a Last-Call publication of this Working Draft.
This document has been produced by the W3C XML Schema Working Group as part of the W3C XML Activity. The authors of this document are the members of the XML Schema Working Group.
Patent disclosures relevant to this specification may be found on the XML Schema Working Group's patent disclosure page at.
Please report comments on this document to www-xml-schema-comments@w3.org (archive). A list of current W3C Recommendations and other technical documents can be found at.
1 Introduction
2 Structures Requirements and Desiderata
2.1 Complex Types
2.1.1 Requirements
2.1.2 Desiderata
2.1.3 Opportunistic Desiderata
2.2 Constraints and Keys
2.2.1 Requirements
2.2.2 Desiderata
2.2.3 Opportunistic Desiderata
2.3 PSVI
2.3.1 Requirements
2.3.2 Desiderata
2.4 Restrictions
2.4.1 Requirements
2.5 Structures (General)
2.5.1 Desiderata
2.5.2 Opportunistic Desiderata
2.6 Substitution Groups
2.6.1 Requirements
2.6.2 Desiderata
2.7 Wildcards
2.7.1 Requirements
2.7.2 Opportunistic Desiderata
3 Datatypes Requirements and Desiderata
3.1 Datatypes (General)
3.1.1 Requirements
3.1.2 Desiderata
3.1.3 Opportunistic Desiderata
3.2 Date and Time Types
3.2.1 Requirements
3.2.2 Desiderata
3.3 Numeric Types
3.3.1 Requirements
3.3.2 Desiderata
3.3.3 Opportunistic Desiderata
This document contains a list of requirements and desiderata for XML Schema 1.1.:
A requirement must be met in XML Schema 1.1.
A desideratum should be met in XML Schema 1.1.
An opportunistic desideratum may be met in XML Schema 1.1.
Clarify the expected processor behavior if an attribute has both use="prohibited", and a fixed value specified.
See (member-only link) .
We need to be clear where the XML Schema spec depends on component identity. We need a language to talk about identity of types, in general, and particularly with respect to anonymous types. Can an inherited type have an anonymous type? Are anonymous types that appear multiple types in a model group the same type?
See (member-only link) minutes of 10/24 telcon .
Clean up named model group syntax and component.
See .
Change the XML representation (and possibly the component structure) of local element declarations to at least allow, if not require, all particles to be references, with scope, i.e. put the local declarations directly under <complexType>
Proposal
Asir Vedamuthu (member-only link)
Provide a [schema normalized value] for all valid element infoitems, not just those of simple type, and address the question of typing the characters in mixed content.
Relax the constraint that a complex type may contain at most one attribute of type ID.
See .
Proposal
Henry Thompson. (member-only link)
The XML representation for field and selector allows an annotation, but there is no schema component to which this annotation can adhere. This inconsistency must be resolved.
See : R-46.
Resolve the issues associated with restricting types whose elements include identity constraints. Specifically, (1) the rule must changed to state that the restricted type must have a superset rather than a subset of identity constraints, (2) the term superset must be clearly defined, and (3) there must be a way to redefine identity constraints in the restricted type without causing duplicate name problems.
See : R-94.
Add the ability to define and enforce co-constraints on attribute values, or on attribute values and sub-elements. For example, if attribute a has value foo, the attribute b must have one of the values fuzz, duz, or buzz; but if attribute a has value bar, the attribute b must have one of the values car, far, or tar. Or: if attribute href occurs, the element must be empty; if it does not occur, then it must have type phrase-level-content.
See : LC-193 Response.
Key constraints to restrict which element types can be pointed to: Allow a schema author use key constaints to specify that a value (which otherwise behaves like an SGML or XML ID) is restricted to pointing at one (or more) particular element type(s)?
See (member-only link) : LC-151.
Revise the derivation of complex-type restriction so as to eliminate the problems with pointless occurrences. Currently, it eliminates some derivations that should otherwise be valid.
See : R-24.
Proposal
Microsoft proposals, item 1.2 (member-only link)
Revise the particle derivation rules so as to eliminate the problems with choice/choice rules.
See : R-42.
Remove the current rules on derivation by restriction; define legal restrictions in terms of their effect on the language, not in terms of a structural relationship between the base type and the derived type.
See (member-only link) ..
See .
Address localization concerns regarding Part 1: Structures.
See (member-only link) : LC-206.
Specify a manner in which schema documents can be included in-line in instances.
See (member-only link) : Issue 42.
Improve interaction between substitution group exclusions and disallowed substitutions in the element component.
See (member-only link) .
Allow an element declaration to be in more than one substitution group.
See (member-only link) .
Address problems with the interaction between wildcards and substitution groups. Specifically, resolve the bug where if complex type A has a wildcard, and B restricts A, then it can restrict the wildcard to a set of elements that match the wildcard. Not all elements in the substitution groups of those elements necessarily match the wildcard - so B is not a subset of A.
See (member-only link) .
The namespace constraints on wildcards must be more expressive in order to be able to express the union or intersection of any two wildcards. Specifically, it must be possible to express "any namespace except those in the following list."
See : CR-20.
Proposals
Microsoft proposals, item 1.3 (member-only link)
Asir Vedamuthu (member-only link)
Allow a wildcard to indicate that it will allow any element that conforms to a specified type.
See .
Proposal
Xan Gregg (member-only link)
Address localization concerns regarding Part 2: Datatypes.
See (member-only link) : LC-207.
Unit of length must be defined for the all primitive types, including anyURI, QName, and NOTATION.
See .
See (member-only link) .
Proposal
Dave Peterson and subsequent thread (member-only links)
Provide regular expressions or BNF productions to express (1) the valid lexical representations and (2) the canonical lexical representation of each primitive built-in type.
Proposal
Alexander Falk (member-only link).
Interaction between uniqueness and referential integrity constraints on legacy types and union types.
See : CR-50 (broken link).
XML Schema Part 1 (Structure) and XML Schema Part 2 (Datatypes) have different notions of "derived" for simple types, specifically with regard to list and union types.
See .
Proposal
Allow abstract simple types.
See : CR-47.
Proposals
Dave Peterson (member-only link)
Microsoft proposals, item 2.2 (member-only link)
Add a "URI" type that allows only a URI, not a URI reference. The current anyURI type allows a URI reference.
See .
Support for extensible enumerations such as allowed in Java.
See .
There must be a canonical representation of duration, and a process for calculating the canonical representation from any other lexical representation. Currently, a period of one day and a period of 24 hours are considered two different values in the value space. They should be considered two different lexical representations of the same value.
See (member-only link) . See also : R-170.
Proposals
Microsoft proposals, item 1.1 (member-only link)
Michael Kay (member-only link)
Address localization concerns regarding the date and time types.
See (member-only link) : LC-221.
Resolve the issue that relates to timezone normalization resulting in a time crossing over the date line.
See .
The definition of the dateTime value space does not reference a part of ISO 8601 that defines dateTime values, only lexical representations. The reference should be corrected, and the recommendation should explain or fix the fuzziness and/or gaps in the definitions referenced.
See (member-only link) .
Proposal
Dave Peterson erratum for R-120 (member-only link)
The year 0000 should be allowed in the types date, dateTime, gYear and gYearMonth.
See (member-only link) .
Proposal
Dave Peterson erratum for R-120 (member-only link)
Provide totally ordered duration types, specifically one that is expressed only in years and months, and one that is expressed only in days, hours, minutes, and seconds (ignoring leap seconds.) Possibly define other totally ordered duration types such as day/hour/minute and hour/minute/second duration.
Proposal
yearMonthDuration and dayTimeDuration as defined in XQuery and XPath Function and Operators
The canonical representation of float and double must be refined because it currently maps several lexical representations into a single legal value. Specifically, the description of the canonical representation must address (1) signed exponents, and (2) trailing zeroes in the mantissa.
See (member-only link) .
Proposal
Ashok Malhotra and subsequent thread (member-only links)
Provide a datatype which retains trailing zeroes in the lexical representation of decimal numbers.
See : CR-42.
Proposals
Dave Peterson and Mike Cowlishaw (member-only link)
Latest from Dave Peterson (member-only link)
Allow scientific notation for decimals.
See : CR-23.
Allow negative values for the fractionDigits facet.
See : CR-22. | http://www.w3.org/TR/xmlschema-11-req/ | CC-MAIN-2015-14 | en | refinedweb |
On 2005-07-18T14:15:53, David Teigland <teigland redhat com>. > Currently, the dlm uses an ioctl on a misc device and ocfs2 uses a > separate kernel module called "ocfs2_nodemanager" that's based on > configfs. > >. Hi Dave, I finally found time to read through this. Yes, I most definetely like where this is going! > +/* TODO: > + - generic addresses (IPV4/6) > + - multiple addresses per node The nodeid, I thought, was relative to a given DLM namespace, no? This concept seems to be missing here, or are you suggesting the nodeid to be global across namespaces? Also, eventually we obviously need to have state for the nodes - up/down et cetera. I think the node manager also ought to track this. How would kernel components use this and be notified about changes to the configuration / membership state? Sincerely, Lars Marowsky-Brée <lmb suse de> -- High Availability & Clustering SUSE Labs, Research and Development SUSE LINUX Products GmbH - A Novell Business -- Charles Darwin "Ignorance more frequently begets confidence than does knowledge" | https://www.redhat.com/archives/linux-cluster/2005-July/msg00175.html | CC-MAIN-2015-14 | en | refinedweb |
GREPPER
SEARCH
SNIPPETS
USAGE DOCS
INSTALL GREPPER
All Languages
>>
R
>>
r dataframe filter endswith
“r dataframe filter endswith” Code Answer
r dataframe filter endswith
r by
Successful Salmon
on Oct 28 2021
Comment
1
df %>% filter(grepl(".cpp|.h", File_name)) File_name Folder 1 ord.cpp 1 2 ppol.h 2 3 lko.cpp 3 4 t_po..lo.cpp 4
Add a Grepper Answer
R answers related to “r dataframe filter endswith”
filter only NA column in R
r remove all string before : in r data frame
r stack data frames
reverse row order dataframe R
filter date in r
R queries related to “r dataframe filter endswith”
r dataframe filter endswith
More “Kinda” Related R Answers
View All R Answers »
how to count the number of NA in r
r define nested empty list
add a vertical line in ggplot
vertical line in ggplot2
residual plot in r
remove elements from character vector in r
write csv in r
r clean environment
r convert accented characters
convert latin accents to ascii R
how to add random numbers randomly in a dataframe in r
random integer in r
R random number
r ggplot regression line
linetype ggplot in r
R string ascii accents
normalization in r
r how to import tsv file
loop through list in r
drop columns by index r
check type of column in r
r count number of na
r - remove scientific notations
r last value of vector
r load packages
paste no space r
remove null element from list r
R factors as numeric
empty plot in r
scale between 0 and 1 r
find data type of vector r
automatically wrap r text label ggplot
how to transform days in years R
r convert string to list of characters
r type of all columns
dplyr colnames r
r remove na from dataset
ggplot increase label font size
remove item from a list r
replace accented characters in r
suppress error r
R p value star
use packages in r
r convert date as timestamp
name elements in vector r
variable assignment in r
if not i startswith R
dplyr replace na
how to count the true values in r
need R code
create folder in r
r convert matrix to list of column vectors
how to extract weekday from date in r
why is correlation na in r
select all columns except one by name in r
r import table with readr
r dataframe filter endswith
how to match two time series in r
r create new calculated column
find data types in list r
r calculate for bias
glyph in r
how to re-arrange weekdays in r
how to read number of excel sheet in r
R regress one variable on all the other variables
how to rename columns in r using index
rmarkdown section title as variable ##
r optim
make a sequence of timeseries per day in r
r double and triple colons
"R" concatinate rows
r extract everything before character
r range with step
r find elements in common between vectors
knnImputation in r
view table/dataframe in r
r return index of rows that have NA in dataframe
size of ticks labels in r ggplot
types of vectors in r
how to get the mode in r
how to change column names in r
r stack data frames
R squared regression in r with ggplot
list all installed packages in r
r convert list to comma separated string
options kable NA
r dataframe column factor
read csv online r
r strsplit in mutate
how to import csv file in r
R remove directory
create file in r
how to calculate variance in r
list to vector r
r print concatenate
r test normality
read excel in r
use summarize multiple columns r
cex label in r
expression in r
superscript in r
how to use recursion in r
convert datetime from string r
number of rows by group in r
how to select certain rows containing a word in r
setwd in r
r split string column by delimiter
remove na from vector r
descending order a list in r
find length of a list or vector in r
create dataframe or table in r
how to get number of columns in matrix R
plot time in r
R rename singl edf column
remove line with na r
how to remove nas in r
r remove row dataframe
r change row names of a dataframe
rename column in r
rename columns based on a variable in r
to numeric in r
rename variables in r
r rename columns
add column in r
r pipe
remove rows in r based on row number using dplyr
how to combine all columns into one column in r
nls in r
How to Convert a Factor in R
how to do logistic regression in r
try catch in r
r - change column name
list to dataframe in r
ggplot - blank title of axis
r lists vs vectors
reverse row order dataframe R
create a table from dataframe in r
change from matrix to a dataframe in r
r combine strings
vars() in R
r mean by group
principal component analysis in r
how to read csv file in r
r merge inner join
rep in r
r ddply
r merge columns
comment in r
r language comment
r box plots
create a dataframe with column names in r
rename columns in table r
sort dataframe dplyr
r heatmap
ggplot2 multiple lines geom_line
r dataframe append row
combine columns in r
how to build random forest in r
r change a single value in a dataframe
drop na in R
r standard deviation
plot3d in r
character to integer in r
export csv file in r
how to set the first column as row names in r
create vector in r
how to do linear regression in r
diff days R lubridate
remove rownumbers r
r remove row names
multiple comment code in R
how to convert matrix to numeric in r
r - transform as factor
get quantile value in r
replace na with 0 in R
show factor levels in r
remove row from matrix r
r merge by two columns
convert a matrix to a vector in r
how to split a column in r
how to change the index of a dataframe in r
how to change the numbering of rows in r
how to fill na values in r
correlation matrix in r
what is a vector in r
r append to vector
hackerrank input r
r replace na with 0
delete first three lines dataframe R
how to remove all attributes from a variables in r
r - remove na
how to itterate through a character in r
grid.arrange
create list in r
rmarkdown put date
count word in a string r
text in ggplot2
R make column of rownames
if not na in r
rstudio could not find function ggplot
set row names in r
convert a row to a column in r
how to return the date with only the day in it in r
convert a datetime to date r
r - extracting specific columns from a data frame
split strings by space in r
imputation in r
r: rename a column
r na if
Extract number from string field R
R get specific character from string
R df space in column name
r sapply syntax
remove all empty strings from R
round multiple columns in r
import excel into R
copy a dataframe in r
r remove all string before : in r data frame
percent of missing data in df r
R darekn color
repeat each value in a vector in r
libpath r
how to change the font of the xlab in plot in r
na by column r
convert first row to header in r
remove first and last character from string R
lubridate sequence of dates
r - reorder columns in data frame
r number of blanks in the data
r make directory
r first row by group
write text r
sequence r
R store data to text file
how to read multiple csv files from a directory in r
turn row names into column in r
get matrix row name r
error installing devtools r
r value statistics
conditional mean statement r
sort R
r extract top values from data frame
median of a row dataframe in r
mean of a row dataframe in r
quartile in r
delete all rows that contain a string in R
r - remove NA from a coulm
paste in r
r dot product
ggplot abline thickness
r change column based on condition
R find index where
r set dataframe column names
remove column from matrix r
how to summarise data but keep columns R
for in r
ggplot: boxplot with trendline
for R
r remove inf values
color code in r
strtrim in r
r prepend to a list
repeat sample in r
remove_all_labels(x)
r delete rowif number higher than
attr(* label )= chr in r
R vector all but last
how to wait for a key press in R
convert na to 0 in r
select columns without na in r
print in r
rnorm in r
r select rows
how to randomly select R
stat_poly_eq position
disable the y axis in plot r
replace character with na r
r count distinct dplyr
how to create dates in a range in R
remove all trailing whitspaces R
update r from rstudio
truncate string in r
ggplot_regression_line
replace any NA in a data frame in r
r summary
R add row to dataframe
r rename column based on variable
Rename a variable in R
How to convert numeric to date in R
how many pairwise combinations
predict in r stack
r seq
get list of words that are in two lists using set
how to substring in R from position
convert dataframe row to vector r
insert character into string r
insert character into string
how to format a number in r
combine ro columns in r
decision tree r
r tibble select rows containing string
angular material number picker
rstudio working directory
how to iterate through a list in r
filter only NA column in R
remove name of a column
calculated defualt values in R function parameters
r remove spaces in column names
how to convert categorical data to numerical data in r
read xlsx in r
if string in r
scatter plot
factor in r
r produce 10 alphabet
del row matrix r
figure caption in r markdown
summary metrics of confusion matrix
r create intervals cut
make the first row as header in r
two string in one string r
diff division R
how to load html file to r studio
geom_abline vertical line
add another 'color' condition to jitter with ggboxplot
%in% in R
print data type R
R grid all possibilites between two vectors
how to bootstrap in r
R for loop append to vector
Remove , in numbers in r
read delim in r with lapply read.delim sep
how to set a dataframe as a value in a list in r
logistic distribution CDF in r
base R change axis line width
cbind vectors of different lengths r
get plot title over two lines R
str_extract all using mutate and toString
how to get the r2 value in r
r predict type
no redundant combination expand grid
change the y ticks in r plot
pdlyr mutate string extract
R concaat string and number
r replace blank string with na
extract r squared from lm in r
How to remove duplicates based on the combinations of two columns r
count certain number of alphabets in a string r
logistic inverse CDF in r
how to make the minutes zero in r
open('data.txt', 'r')
r2 in rstudio
r predict type = prob
How to use group_by inside a function?
Getting rid of row names in R
list objects R
seperate words in column r
convert all numeric columns to percentages R
filter na in r
extract rse from lm in r
bioFabric r
how to filter a vector by location in r
delete variable from environment r
outlier tagging boxplot r
L in r
vector with real numbers R
r select column names starting with
log likelihood in r
save data frames in a loop r
r reverse vector
tidytext extract url r
add padding to number r
Non-redundant version of expand.grid
how to source all fies from a directory in r
link excel to r
unset par mar
ggplot - subset top 10 in a stack bar plot
how to exclude inf in r
r last day of previous month
3d scatter plot in r
decompose function in r
required in r
r change column value conditionally
order barplot ggplot2 by value
square root calculation r
find q1, q3 and iqr in r
read.table tab separator
change to posixct in r
R check object dimensions
r code mutate
change labels in legend R
r ggplot hide one legend group from multiple legends
ggplot glm
connect excel to r
R new column t test p-value
open xlsx with r
skewness in r
count r
st_combine by variables R
moving average in r
reduce ggtitle size
r glm select all variables
utils vs readr and fread
linear regression r text label coefficient ggplot
apply on a vector in r
interquartile in r
how to count the number of non NA values in R
r most likely outcome
XLConnect
norm,s,inv in r
null count in r
replace_na
geom_point transparency
why is only mean and max infinity in r
detect rank deficient in r
rstudio github upload
how to count unique values in r
significance codes
reorder columns in r
read csv in r
R tutorial
r - create a new empty variable in a dataset
excecute a for loop line by line in r
predict in r
rstudio refactor hotkey
how to get the number of individual numbers in a vector in r
dotted y intercept line in ggplot
turn a numeric dataframe to binary in r
next element in a loop if error in r
bar plot R
geom_jitter transparency
ggplot2 color gradient
get string without ending in R
add column value based on row index r
r: network randomization test igprah stakoverflow
histogram r add line
Derive end of the week date in r
ggplot2 barplot order of bars
timestamp conversion from excel R
change the font of the title in a plot in r
correlation matrix using factors r
number of days in a data set in r
r confusion matrix
r - split the data in queal ranges
r count list
R find n largest
generate all possible combinations of a set of characters r
Plot two variables
Pass argument to group_by (2)
filter date in r
ggplot2 black and white theme
animated bar chart race in r
modulo in r
store list in data.frame R
index in r
identify multiple spellings in R
how to tell if a variable is discrete or continuous in r
ggplot2 font times new roman
extract hyperlinks in r
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:
save link tweet in new column in R
infinite in r
how to add columns to a flextable in r
Edit axis labels R
replace na in a column with values from another df
r yardstick confusion matrix
r library tidyverse
rmarkdown how to load workspace
end the program in r
r suppress package loading messages
not displaying prints and on.exit in r
str_detect multiple patterns
logistic regression in r
r remove insignificant coefficient in output
linear model remove variables in R
select last child with class in r
R difference | and ||
kable thousand mark
Print the names of all worksheets in r
grep string that ends with R
return the name of the dataset in r
slope by row r
r read.csv tab delimited
save large nested list to text R
"R" bind rows
stacked bar plot r with age groups
r performance matrix for confusion matrix
calculating RMSE, MAE, MSE, Rsquared manually in R
dplyr average columns
Join matching values from x to y
R: foreach multiple argument
collapse text by group in dataframe r
insert a png in R
remove the colour name from ggplot
autoplot confusion matrix
R construct a named list
Extract the text of all list elements in r from html
gsub special characters r
extract first element before a character stringr
R type of object
r function to get class of all columns
find row with na r
meaning of %>% R
calculating RMSE, Rsquared with caret in R
read file in r EOF within quoted string
how to throw an error in R
Join matching values from y to x in DPLYR
r language
R drop columns
pivot table in r dplyr
ggplot2 graph in r
load multiple packages in r
how to remove null values in r
how to round all numeric column types in r
legend in R
mutual information in r
how to get quantile summary statistics in r summarise
r alluvial chart with NA
combine scripts into a pipeline
R total line text file
emf from r plot
ggplot box plot without outliers poins
regex last word in string r
vlookup in r dplyr
layar names in R worldclim
combine row for every element of vector r
ts object to data frame
What percentage of the row will the following column occupy on desktop devices: <div class="col-xs-3 col-sm-4 col-md-6 col-lg-12"></div>
r - if value in a df is between two number then add 1
select all columns except one by name in
. | https://www.codegrepper.com/code-examples/r/r+dataframe+filter+endswith | CC-MAIN-2022-05 | en | refinedweb |
In our last article, we have introduced the usage of Jest to test our JavaScript code Start Testing Your JavaScript Code with Jest. We are going to further extend the topic to introduce how do we use React Testing Library and Jest to test our React.
create-react-app uses Jest as its test runner. Jest will look for the test files name with the following naming conventions (according to official site):
- Files with .js suffix in __tests__ folders.
- Files with .test.js suffix.
- Files with .spec.js suffix.
Today we are going to explore how render our components to be tested, finding the right element in the component, and performing snapshot testing. Let's get started with creating a new
create-react-app project:
npx create-react-app testing-react-demo
After the creation, change directory into the app that you created and open the directory in your desired code editor.
You should already see an App.test.js in the src folder.
import { render, screen } from '@testing-library/react'; import App from './App'; test('renders learn react link', () => { render(<App />); const linkElement = screen.getByText(/learn react/i); expect(linkElement).toBeInTheDocument(); });
You may remove this file, or leave it. I will remove it for this demonstration and therefore you will not see it being included in the test suites.
What I will normally do next is that I will create a components folder and include the files (such as css and test files) that belong to this component inside this folder. After created components folder, create two more folders called SubscribeForm and PokeSearch. These are the two components that we want to write some tests on today.
Let's create our first simple component in SubscribeForm folder:
SubscribeForm.js
import React, { useState } from 'react'; import "./SubscribeForm.css"; const SubscribeForm = () => { const [isDisabled, setIsDisabled] = useState(true); const [email, setEmail] = useState(""); function handleChange(e){ setEmail(e.target.value); setIsDisabled(e.target.value === ""); } return ( <div className="container"> <h1>Subscribe To Our Newsletter</h1> <form className="form"> <label htmlFor="email">Email Address</label> <input onChange={handleChange} <label htmlFor="agreement_checkbox">I agree to disagree whatever the terms and conditions are.</label> <button name="subscribe-button" type="submit" className="button" disabled={isDisabled} >Subscribe</button> </form> </div> ); }; export default SubscribeForm;
This is a simple component, where we have an input field for email address, and a button to hit "subscribe". The button is first disabled and prevent clicking before any text is entered to the input field. This button seems to be one of the perfect test cases that we can create.
Button is disabled before text input
Button is enabled after text input
Followed by this, we are going to create another component called PokeSearch (I am not a Pokemon fan, but Poke API is good for demonstration). As another simple enough example, we have a component that has a useEffect hook to fetch information from an API, and display it (Pokemon name) to the screen. Before the result is fetched, we display a "...Loading..." text to users.
PokeSearch.js
import React, { useEffect, useState } from 'react'; const PokeSearch = () => { const [pokemon, setPokemon] = useState({}); const [isLoading, setIsLoading] = useState(true); useEffect(() => { fetch(``) .then((res) => res.json()) .then((result) => { setPokemon(result); setIsLoading(false); }) .catch((err) => console.log(err)); }, []) return ( <div> {isLoading ? <h3>...Loading...</h3> : <p>{pokemon.name}</p> } </div> ); } export default PokeSearch;
Let's jump into testing these two components. For our first component, SubscribeForm component, we create a new file called SubscribeForm.test.js. We followed the naming convention, so that it could be recognized by our test runner. In order to create tests, we will need
render and
screen from testing-library/react and the user events from testing-library/user-event. Besides, remember to import the component that we want to test.
import React from 'react'; import { render, screen } from '@testing-library/react'; import userEvent from '@testing-library/user-event'; import SubscribeForm from './SubscribeForm';
We can first create a test to ensure that our button is disabled when the page first loaded, since there is no input in the email address field.
it("The subscribe button is disabled before typing anything in the input text box", () => { render(<SubscribeForm />); expect(screen.getByRole("button", {name: /subscribe/i})).toBeDisabled(); });
From the last article, we know that we will give a name to our test, and provide a callback function, which includes the assertions.
First, we use render method to render the component to be tested in a container which is appended to document.body (on a side note, Jest 26 and before is using jsdom as default environment). After rendering out the component, we need to have a way to find the right element (which is the button) to test. We can use query methods from RTL to do so. The elements in DOM can be found by their accessibility roles and names (more on this later), or by text, or by test id that we give to the elements. There is a priority given by official statements. They recommend to query by role or text (where everyone is accessible), by semantic HTML (alt text such as img, area, etc), and by test id (user cannot see or hear this, therefore if you could not make sense of using any of previous methods, use this).
<div data-
screen.getByTestId('test-element')
You can find more information about the priority here:
About Queries of React Testing Library
You can do this to find out the accessible roles within your component: You can just write
screen.getByRole("") in the test for that component, it will fail but give you the accessibility information and the name of those elements.
Here are the accessible roles: heading: Name "Subscribe To Our Newsletter": <h1 /> -------------------------------------------------- textbox: Name "Email Address": <input id="email" name="email" placeholder="Email Address" type="email" value="" /> -------------------------------------------------- checkbox: Name "I agree to disagree whatever the terms and conditions are.": <input id="agreement_checkbox" name="agreement_checkbox" type="checkbox" /> -------------------------------------------------- button: Name "Subscribe": <button class="button" disabled="" name="subscribe-button" type="submit" /> --------------------------------------------------
From here we know that we have different accessibility roles such as button, textbox, checkbox and heading. In order to target our subscribe button, we need to target role "button". After targeting the role, we want specifically the button with the accessible name "Subscribe", as stated in the accessibility information provided ('Name "Subscribe"'). This value of "Name" can be derived from visible or invisible property of an element, the text in the button is one of them. In order to search for its name, we usually put insensitive case for regex on the name, into the second object argument of getByRole (
{name: /subscribe/i}). After getting that button, we want to check if this button is disabled (it should be disabled).
Then we have the second test. In this test, we simulate the user event to type something in the text box, and make the button enabled.
it("The subscribe button becomes enabled when we start typing in the input text box", () => { render(<SubscribeForm />); userEvent.type(screen.getByRole("textbox", {name: /email/i}), "abc@email.com"); expect(screen.getByRole("button", {name: /subscribe/i})).toBeEnabled(); });
We use the same step to render the SubscribeForm to the document, and use user event of "type", to type some text on the element that we want, in this case, it is the textbox that we can select by accessible role and name (refer back to the accessibility information that we took just now). The second argument of
userEvent.type() is the text that you want to input. After the text has been typed, we can now expect the button to be enabled.
Finally, we are doing a snapshot testing for our React componenet. We need to use react-test-renderer to render a pure JavaScript object (does not depend on DOM) for the snapshot.
npm install react-test-renderer
After installing and importing, we can use the renderer to create SubscribeForm component in JavaScript Object. Finally, we use a toMatchSnapshot() function from Jest, to kickstart the snapshot testing.
it("Test to match snapshot of component", () => { const subscribeFormTree = renderer.create(<SubscribeForm />).toJSON(); expect(subscribeFormTree).toMatchSnapshot(); })
When you run this test for the first time, it will create a new folder (automatically after you run the test), called __snapshots__ within your directory, in this case is the SubscribeForm folder.
PASS src/components/PokeSearch/PokeSearch.test.js PASS src/components/SubscribeForm/SubscribeForm.test.js › 1 snapshot written. Snapshot Summary › 1 snapshot written from 1 test suite. Test Suites: 2 passed, 2 total Tests: 5 passed, 5 total Snapshots: 1 written, 1 total Time: 2.519 s Ran all test suites. Watch Usage: Press w to show more.
You can find a snap document in it.
SubscribeForm.test.js.snap
// Jest Snapshot v1, exports[`Test to match snapshot of component 1`] = ` <div className="container" > <h1> Subscribe To Our Newsletter </h1> <form className="form" > <label htmlFor="email" > Email Address </label> <input id="email" name="email" onChange={[Function]} <input id="agreement_checkbox" name="agreement_checkbox" type="checkbox" /> <label htmlFor="agreement_checkbox" > I agree to disagree whatever the terms and conditions are. </label> <button className="button" disabled={true} Subscribe </button> </form> </div> `;
Now that the test suite take note of your previous snapshot of the component. If you run the test again, it will take another snapshot of the compoenent, and compare to the one in __snapshots__ folder. If they are different, the test is going to fail. This is useful to make sure that our UI components did not get changed unexpectedly. Let's try to make a change to our SubscribeForm component and run the test again. We are going to change "Subscribe to Our Newsletter" to "Subscribe to Their Newsletter".
<h1>Subscribe To Their Newsletter</h1>
Then we run the test again.
PASS src/components/PokeSearch/PokeSearch.test.js FAIL src/components/SubscribeForm/SubscribeForm.test.js ● Test to match snapshot of component expect(received).toMatchSnapshot() Snapshot name: `Test to match snapshot of component 1` - Snapshot - 1 + Received + 1 @@ -1,10 +1,10 @@ <div className="container" > <h1> - Subscribe To Our Newsletter + Subscribe To Their Newsletter </h1> <form className="form" > <label 22 | it("Test to match snapshot of component", () => { 23 | const subscribeFormTree = renderer.create(<SubscribeForm />).toJSON(); > 24 | expect(subscribeFormTree).toMatchSnapshot(); | ^ 25 | }) at Object.<anonymous> (src/components/SubscribeForm/SubscribeForm.test.js:24:31) › 1 snapshot failed. Snapshot Summary › 1 snapshot failed from 1 test suite. Inspect your code changes or press `u` to update them. Test Suites: 1 failed, 1 passed, 2 total Tests: 1 failed, 4 passed, 5 total Snapshots: 1 failed, 1 total Time: 3.817 s Ran all test suites. Watch Usage: Press w to show more.
...and the test failed. If this is an intended change, we can update our snapshot to the latest by pressing "u". By doing that, the snap file in our __snapshots__ folder will get updated, all the tests are re-run and they pass this time. This is pretty similar to what we used (Enzyme library) last time.
PASS src/components/PokeSearch/PokeSearch.test.js PASS src/components/SubscribeForm/SubscribeForm.test.js › 1 snapshot updated. Snapshot Summary › 1 snapshot updated from 1 test suite. Test Suites: 2 passed, 2 total Tests: 5 passed, 5 total Snapshots: 1 updated, 1 total Time: 2.504 s Ran all test suites. Watch Usage: Press w to show more.
Therefore, this is the complete script to test our SubscribeForm component.
import React from 'react'; import { render, screen } from '@testing-library/react'; import userEvent from '@testing-library/user-event'; import renderer from 'react-test-renderer'; import SubscribeForm from './SubscribeForm'; it("The subscribe button is disabled before typing anything in the input text box", () => { render(<SubscribeForm />); expect(screen.getByRole("button", {name: /subscribe/i})).toBeDisabled(); }); it("The subscribe button becomes enabled when we start typing in the input text box", () => { render(<SubscribeForm />); userEvent.type(screen.getByRole("textbox", {name: /email/i}), "abc@email.com"); expect(screen.getByRole("button", {name: /subscribe/i})).toBeEnabled(); }); it("Test to match snapshot of component", () => { const subscribeFormTree = renderer.create(<SubscribeForm />).toJSON(); expect(subscribeFormTree).toMatchSnapshot(); })
Note to mention: There is a clean up process (
afterEach(cleanup)) done to prevent memory leak automatically by Jest (injected globally) after each test.
Finally, we would also like to test our component asynchronously (PokeSearch).
import React from 'react'; import { render,screen,waitForElementToBeRemoved } from '@testing-library/react'; import PokeSearch from './PokeSearch'; it("Loading is shown until the Pokemon is fetched", async () => { render(<PokeSearch />); expect(screen.getByText('...Loading...')).toBeInTheDocument(); await waitForElementToBeRemoved(screen.queryByText('...Loading...')); });
First we can test if the "...Loading..." text is rendered correctly to the screen. We need to query on the correct element that contains "...Loading...", and use assertion method to check if it is in the DOM. Then we can use an asynchronous function provided by RTL to be resolved by the loading text element to disappear after the result is fetched. Besides, it is also recommended by official site to use
queryBy... to query for the element to disappear from DOM.
After testing the Loading text, we can then test the case after fetched. In this test case, we do not want to test with the real API (we are just ensuring that our component is working fine), there we can just mock the fetch function. We fix the returned data by the fetch function when the promise is resolved. After that, we will render the PokeSearch, and the fetch call is made to fetch our fake data. After the data is back, we will try to use
findBy... (use
findBy... for asynchronous case) to find for the element that has text "bulbasaur", and check if the element is in the DOM.
it("The Pokemon name is displayed correctly after it has been fetched", async () => { // Mock the browser fetch function window.fetch = jest.fn(() => { const pokemon = { name: 'bulbasaur', weight: 69, height: 7 }; return Promise.resolve({ json: () => Promise.resolve(pokemon), }); }); render(<PokeSearch />); const pokemonName = await screen.findByText('bulbasaur'); expect(pokemonName).toBeInTheDocument(); });
Hope this gives you an insight on how do get started on testing React components.
Do follow me for more future articles on web design, programming and self-improvement 😊
Discussion (3)
How we get earn money from dev.to
Hey Deepak, thanks for your comment :) I am not really sure how could we get paid here, but I do know there are platforms that pay writers for articles.
Which platform | https://dev.to/ohdylan/react-component-testing-54ie | CC-MAIN-2022-05 | en | refinedweb |
Hi,
Can't seem to find this on GOOGLE for webforms.
I have a web form and I need to search a specific directory for all files containing a specific word and show the name of the file and what comes after that word.
So, if i need to search for word Firstname: and i find a file with Firstname: i need to display Firstname: John Smith.
Any help will be appreciated.
Thanks
This is not a web forms function. File APIs are in the System.IO namespace. | https://social.msdn.microsoft.com/Forums/en-US/381ba6e4-d0d7-4ee7-9e61-a6bcef321d80/find-string-in-files-in-a-directory?forum=aspwebforms | CC-MAIN-2022-05 | en | refinedweb |
twisted.logger.Logger(object)class documentation
twisted.loggerView Source (View In Hierarchy)
A
Logger emits log messages to an observer. You should instantiate it as a class or module attribute, as documented in
this module's documentation.
Derive a namespace from the module containing the caller's caller.
When used as a descriptor, i.e.:
# File: athing.py class Something(object): log = Logger() def hello(self): self.log.info("Hello")
a
Logger's namespace will be set to the name of the class it is declared on. In the above example, the namespace would be
athing.Something.
Additionally, its source will be set to the actual object referring to the
Logger. In the above example,
Something.log.source would be
Something, and
Something().log.source would be an instance of
Something.
Emit a log event to all log observers at the given level.
Log a failure and emit a traceback.
For example:
try: frob(knob) except Exception: log.failure("While frobbing {knob}", knob=knob)
or:
d = deferredFrob(knob) d.addErrback(lambda f: log.failure("While frobbing {knob}", f, knob=knob))
This method is generally meant to capture unexpected exceptions in code; an exception that is caught and handled somehow should be logged, if appropriate, via
Logger.error instead. If some unknown exception occurs and your code doesn't know how to handle it, as in the above example, then this method provides a means to describe the failure in nerd-speak. This is done at
LogLevel.critical by default, since no corrective guidance can be offered to an user/administrator, and the impact of the condition is unknown.
Emit a log event at log level
LogLevel.debug.
Emit a log event at log level
LogLevel.info.
Emit a log event at log level
LogLevel.warn.
Emit a log event at log level
LogLevel.error.
Emit a log event at log level
LogLevel.critical. | https://twistedmatrix.com/documents/20.3.0/api/twisted.logger.Logger.html | CC-MAIN-2022-05 | en | refinedweb |
Django Multiple Page Forms
Recently updated on
I recently started working on another new project here at Imaginary Landscape, and this one looked rather enticing as it threw some stuff my way that I haven't had a chance to play with much recently. First on that chopping block was a multi-page registration form application. Immediately I remembered reading about Django's Form Wizard module and thought it'd be a great way to handle this small application. At least I thought so until I finished reading the specifications for the project and realized that the user must be able to move both backwards and forwards through the form. Oh, and as a bonus, I was dealing with sensitive information, so I had to be careful about what I stashed in sessions.
So unfortunately Form Wizard wasn't going to be able to handle my needs, as it doesn't really support going backwards in the forms. Also, it's built for use with django.forms.Form, not django.forms.ModelForm like I wanted to use since I have to store all this information in a database anyway. In the end I went with storing object IDs in the user's session. This would allow me to grab the data that was submitted based off the ID, avoid storing sensitive information in sessions, and provide me with all the functionality that I was looking to achieve. This actually ended up being more straightforward than I had initially expected. First you need some forms:
class UserInfoForm(forms.ModelForm): class Meta: model = models.UserInfo class AddressForm(forms.ModelForm): class Meta: model = models.Address
Let's just pretend these are connected to very basic models that fit their namesake. Now we'll need to create some views:
def user_information(request): if request.method == "POST": form = forms.UserInfoForm(request.POST) if form.is_valid(): user_info = form.save() request.session['user_info_id'] = user_info.id return HttpResponseRedirect( reverse("address_form")) else: if 'user_info_id' in request.session: try: user_info_obj = models.UserInfo.objects.get( id=request.session['user_info_id'] ) form = forms.UserInfoForm(instance=user_info_obj) except ObjectDoesNotExist: del request.session['user_info_id'] form = forms.UserInfoForm() else: form = forms.UserInfoForm() return render_to_response( "app/user_info.html", locals(), context_instance=RequestContext(request))
As you can see this is fairly straightforward. It's just how you'd handle a normal form, but after we save our form we're injecting the "user_info.id" into the session. We'll be needing this if we decide to go backwards from our next form. If we're not posting, we check to see if that ID is in the session. If it is, we load that object, and initialize the form it. If that object doesn't exist, we clear the session item and create a blank form for the user to fill out. Now we have our second form:
def address(request): if 'user_info_id' not in request.session: return HttpResponseRedirect( reverse("user_info_form")) if request.method == "POST": user_info_obj = models.UserInfo.objects.get( id=request.session['user_info_id'] ) new_address = models.Address(patient=user_info_obj) form = forms.AddressForm(request.POST, instance=new_address) if form.is_valid(): address_info = form.save() request.session['address_info_id'] = address_info.id ## Send them to a thank you page else: if 'address_info_id' in request.session: try: user_info_obj = models.UserInfo.objects.get( id=request.session['user_info_id'] ) except ObjectDoesNotExist: return HttpResponseRedirect( reverse("user_info_form")) try: address_obj = models.Address.objects.get( id=request.session['address_info_id'] ) form = forms.AddressForm(instance=address_obj) except ObjectDoesNotExist: user_info_obj = models.UserInfo.objects.get( id=request.session['user_info_id'] ) form = forms.AddressForm( initial={'user': user_info_obj.id}) else: user_info_obj = models.UserInfo.objects.get( id=request.session['user_info_id'] ) form = forms.AddressForm( initial={'user': user_info_obj.id}) return render_to_response( "app/address.html", locals(), context_instance=RequestContext(request))
This follows the same basic principle as the first form. As an added treat, it also uses some data from the first form. If we don't have that data, that means the user didn't fill out the first form. In that case, we send them back to the first form to properly fill it out. We also stash the ID of the address object in the sessions as well, which if this is the last form (which it is in this example) isn't totally necessary. You can choose to clear all the items from the session that you put in there if you prefer. However, if the user uses their browser's back button to go back to the first form and submit it, then the second form will still be initialized with the data they previously put in there.
Surprisingly, that pretty much sums up creating a multi-page form in Django. Naturally there are many ways to accomplish a multi-page form. If you're using simple forms and don't need to enable users to browse backwards, then I highly recommend FormWizard for ease of use. It's really easy to use, and handles most use cases. If you need something a bit more complex, the code above should work out for you. Though I must warn you, this code was adapted to this blog post from an existing project. As such, there might be some minor issues I missed in adaptation. Please let me know in the comments if you see any such issues. | https://imagescape.com/blog/django-multiple-page-forms/ | CC-MAIN-2022-05 | en | refinedweb |
The QtNetwork module provides classes that allow you to write TCP/IP and UDP.
To import the module use, for example, the following statement:
from PyQt4 import QtNetwork
Topics:..) | https://doc.bccnsoft.com/docs/PyQt4/qtnetwork.html | CC-MAIN-2022-05 | en | refinedweb |
Let's just get to the point .
Environment building
- Babel-- At present, browsers are very useful for ES6 The syntax parsing support of is not high , So transcoding is used in compiling , So it's using ES6 Before you install Babel, I encountered some problems during the previous installation, but I didn't record all of them , Now we have to try to reproduce it .
Babel6 The version does not support the use of npm install babel -g Installed , It's divided into several parts
- babel cli, For the command line
- babel-core, contain node api
npm install babel-cli -g
npm install babel-core --save-dev
babel You need to install the plug-in manually
npm install babel-preset-es2015
Then type on the command line vim .babelrc Create a folder called
.babelrc The file of , And write the following code :
{
"presets": ["es2015"]
}
Then save to exit .
2. To configure webpack
You can refer to
First record webpack Use
1、 Create a new folder for our projects
npm init establish package.json Configuration file for
2、 hold webpack Save to local dependency
npm install webpack --save-dev
3、 Single file packaging
newly build js file entry.js It says js Code , And then use
webpack entry.js bundle.js
You can generate a packaged bundle.js, And then put bundle.js Introduced into the page
4、webpack Default packaging js file , If we need to pack css The file needs to be loaded with the corresponding loader
for example :css Need to load loader Yes css-loader and style-loader
npm install css-loader style-loader --save-dev
Then in the entry file entry.js Add
require('style!css! ./style.css'); // hold css Load in as a module
5、 It's too cumbersome to write too many documents , We can use webpack.config.js File defines webpack Configuration of ,
module.exports ={
entry:'./entry.js', // Defined entry file
output:{
path: __dirname, // The path to the packaged file
filename: 'bundle.js' // Packed file name
},
devtool:'source-map', // Generate source-map You can use it in your browser sourcemap See the files we wrote before packing , Easy to debug ( It can be added where the files before packaging need to be debugged debugger; It works like a breakpoint )
module:{ // When used in packaged files loader When we need to write like this
loaders:[
{test:/\.js$/, exclude:/node_modules /, loader: 'react-hot!babel'}, //exclude It means to exclude those files from being packaged
{test:/\.css$/, loader:'style!css'} ]
}
}
6、 stay webpack Use in babel
First you have to install it locally babel Dependency needed
npm install babel-loader babel-core babel-preset-es2015 --save-dev
Then add... To the root of the project .babelrc Enter... In the file
{
presets:["es2015"]
}
Always use webpack+babel+react
First, install the dependency... Locally in the project
npm install babel-core babel-preset-es2015 babel-preset-react webpack webpack-dev-server babel-loader react-hot-loader --save-dev
Then install react Of
npm install react react-dom --save
Then configure... In the project .babelrc
{
"presets":["es2015","react"]
}
Create a new one name.js
" use strict";
import React from "react"; class Name extends React.Component{
render(){
return(
<div>
hello~~ yang <input />
</div>
);
}
}
export {Name as default};
And then in entry.js Configuration in the entry file
"use strict"; import React from "react";
import ReactDOM from "react-dom"; import Name from './name'; ReactDOM.render(
<Name />,
document.getElementById('app')
);
webpack.config.js To configure
module.exports ={
entry:'./entry.js',
output:{
path: __dirname,
filename: 'bundle.js'
},
devtool:'source-map',
module:{
loaders:[
{test:/\.js$/, exclude:/node_modules /, loader: 'react-hot!babel'},
{test:/\.css$/, loader:'style!css'} ]
}
}
And then in package.json Add a sentence in
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"watch": "webpack-dev-server --inline --hot"
},
Then run it on the console
npm run watch
You can access it in a browser
Use Babel + React + Webpack build Web More related articles on Application
- react+webpack Set up the project
One . Environmental preparation ①node ②npm Two . Begin to build ① Use npm install create-react-app Tools , stay cmd Enter... On the command line : npm install -g create-react-app ② Use life ...
- webpack+babel+ES6+react Environment building
webpack+babel+ES6+react Environment building step : 1 Create project structure Be careful : First create a project directory react The name is custom , And then go down to this directory mkdir app // establish app ...
- react+webpack+babel Environment building
[react+webpack+babel Environment building ] 1.react Official documents recommend the use of babel-preset-react.babel-preset-es2015 Two perset. Babel Official documents ...
- Detailed explanation Webpack+Babel+React Build the development environment
1. know Webpack Let's take a look at Webpack, Webpack It's a module packaging tool , Be able to put all kinds of documents ( for example :ReactJS.Babel.Coffeescript.Less/Sass etc. ) ...
- The smallest white webpack+react Environment building
This article is also published in my official account “ My sky ” Starting from scratch , With minimal configuration . Minimum code . The least dependence to build the simplest webpack+react Environmental Science . Recently playing webpack+react+ Mobile , So the first step is to build ...
- Use webpack build react project webpack-react-project
webpack-react-project Use webpack build react project webpack build react project github Source code Please refer to package.json and webpack.conf ...
- react Project construction and management webpack To configure
1, To configure webpack npm install -g webpack webpack Of cli Environmental Science npm install -g webpack-dev-se ...
- Use webpack build React project
Those remarks : I believe many developers are getting started react I use it all the time create-react-app or react-slingshot These scaffolding to quickly create applications , When there are special needs , Need modification eject Coming out we ...
- webpack build React( Manual build )
Preface I've been really learning in the dark recently , See what you don't quite understand , They all like to knock by hand 1 To 3 All over ( At the end of the night ), look ,React be based on webpack build ,react There's an official set of tripod tools , I built it myself. It's really ...
Random recommendation
- Linux Next browser-sync Can't start Chrome Solutions for
The author's environment : OS: Ubuntu Linux Browser: Chrome, Firefox Every time you want to start chrome browser , The system will report an error : browser-sync start -s --dir ...
- java Use this Keyword to call this class overload constructor
Other overloaded constructors of this class can be called in the constructor , You cannot call another constructor with a constructor name , It should use Java specific this(-.) To call . this(-.) The method must appear on the first line of the constructor , Used to call other overload constructors ...
- opengl Open local bmp pictures drawing
Be careful bmp The format of the picture ,32 position ARGB perhaps 24 position RGB. You must pay attention to how many bits you use . Otherwise, it will display the wrong picture or other mistakes . The code is as follows 32 Bit version #include < ...
- C#.NET Split How to use it
The first method : string s = "abcdeabcdeabcde"; string[] sArray = s.Split('c'); foreach (string i in ...
- delphi Realization vip126 email
This example is TSimpleThread , TSimpleList, IdhttpEx Web simulation (POST) Comprehensive use of . Demo Just write the send , But you can also charge , See the source code for details . ( This source code is written in 2 Years ago , It was not well written at that time , please ...
- C Language library function exploration
1.strlen() Find the string length // Simulation Implementation strlen function #include<stdio.h> #include<stdlib.h> #include<string. ...
- VTK Show .vtk Format file
void ReadandShowVTKFile () { vtkSmartPointer<vtkRenderer > aRenderer = vtkSmartPointer<vtkR ...
- I wrote a script to json Switch to md
use python The script will protocol.json Medium json according to templete.md Template generation , It turns out that protocol.md in Python: #!/usr/bin/python # -*- codi ...
- [ daily ] Go Language Bible -Slice Slicing exercises
1.Slice( section ) Represents a variable length sequence , Each element in the sequence has the same type , One slice Type general writing []T, among T representative slice The type of element in :slice The syntax of is very similar to array , It's just that there's no fixed length ,slice The bottom of ...
- Interrupt flag bit IRQF_ONESHOT
one shot Only once in itself , Combined with the interruption scenario , The interrupt is triggered once , Cannot nest . about primary handler, Of course, it's not nested , But for threaded interrupt han ... | https://chenhaoxiang.cn/2021/06/20210604191902074U.html | CC-MAIN-2022-05 | en | refinedweb |
CHFLAGS(2) NetBSD System Calls Manual CHFLAGS(2)Powered by man-cgi (2021-06-01). Maintained for NetBSD by Kimmo Suominen. Based on man-cgi by Panagiotis Christias.
NAME
chflags, lchflags, fchflags -- set file flags
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <sys/stat.h> #include <unistd.h> int chflags(const char *path, u_long flags); int lchflags(const char *path,, or the effective user ID is not the super-user and one or more of the super-user-only flags for the named file would be changed. [EOPNOTSUPP] The named file resides on a file system that does not support, 9.99 August 6, 2011 NetBSD 9.99 | https://man.netbsd.org/macppc/chflags.2 | CC-MAIN-2022-05 | en | refinedweb |
public class LocalKMeans extends Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public LocalKMeans()
public static org.apache.spark.mllib.clustering.VectorWithNorm[] kMeansPlusPlus(int seed, org.apache.spark.mllib.clustering.VectorWithNorm[] points, double[] weights, int k, int maxIterations)
points. This first does the K-means++ initialization procedure and then rounds of Lloyd's algorithm.
seed- (undocumented)
points- (undocumented)
weights- (undocumented)
k- (undocumented)
maxIterations- (undocumented) | https://spark.apache.org/docs/2.3.1/api/java/org/apache/spark/mllib/clustering/LocalKMeans.html | CC-MAIN-2022-05 | en | refinedweb |
Dynatrace Managed release notes version 1.224
These release notes relate to Dynatrace Managed specific changes. To learn about general Dynatrace changes included in this Dynatrace Managed release, see:
- Dynatrace SaaS release notes version 1.224 and Dynatrace SaaS release notes version 1.223
- Dynatrace API changelog version 1.224 and Dynatrace API changelog version 1.223
- Dynatrace OneAgent release notes version 1.223
New features and enhancements
General
You can now filter the Release inventory table (go to Releases) by the build version.
You can now filter the Release inventory table (go to Releases) by the technology type and version of the corresponding process group instances.
We have introduced a metric for Davis event ingestion.
Added support for filtering Data explorer tiles on dashboards.
We improved security when loading custom chart tiles on a shared dashboard by moving query execution completely to the server to further reduce potential threat vectors.
The waterfall shows more hints if Core Web Vitals are not available.
We streamlined the dashboard tile editing process a little. When you are finished editing a tile, you have three choices:
- Select Done just once to exit edit mode and display the finished dashboard. (Formerly, you had to select Done twice to finish editing the tile and then finish editing the dashboard.)
- Select another tile to edit the selected tile.
- Select in any empty area on the dashboard to display the tile catalog and dashboard settings.
We releases problems v2 API, see Dynatrace API changelog version 1.224.
It is now possible to set the stage of a release with the Kubernetes pod label 'dynatrace-release-stage'.
Kubernetes pages for namespace, workload and pod are extended with a breadcrumb navigation bar that shows the Kubernetes hierarchy from the cluster down to the pod level.
Service-level objectives (SLOs) now consist of a metric expression instead of either a metric rate or two metrics for good/total value. This introduces some changes in the web UI and SLO API.
We redesigned the "Define the SLO target" ("Set your target") step in the "Add new SLO" wizard to make it easier to define the service-level objective target and warning values.
We have updated the list of devices usable for browser monitors to include more current device models, and we have removed some outdated devices.
Dynatrace Managed is now also supported on Suse Linux Enterprise Server 15 SP3.
Resolved issues
- General Availability (Build 1.224.74)
- Update 80 (Build 1.224.80)
- Update 84 (Build 1.224.84)
- Update 96 (Build 1.224.96)
- Update 99 (Build 1.224.99)
General Availability (Build 1.224.74)
The 1.224 GA release contains 37 resolved issues (including 1 vulnerability resulution).
Cluster
- Corrected the link to documentation from user details (go to "Session segmentation" and select a user, and then select the "Learn more" link for the "Extended users..." switch). (APM-313905)
- Improved database service detection where no port is detected: if the candidate process group suggested by the topology has the technology of the database, it is associated with the database call; otherwise, it is associated only with the host. (APM-314790)
- Web service name simplified to no longer contain host or process group information. (APM-315412)
- Fixed an issue with mobile symbolication file storage to allow "unlimited" quota setting in Dynatrace Managed environments. (APM-312151)
- On the user details page, the "Extended users..." switch is never disabled now if there is at least one extended user for the specified timeframe. (APM-314148)
- Fixed issue with long request time for `Host settings` > `Detected processes`. (APM-315254)
- The "User sessions" > "Conversions vs bounces" chart now displays a tooltip on hover. (APM-312164)
- Users in a management zone no longer get a 403 HTTP error when opening Kubernetes pages. (APM-324758)
- Fixed an issue with read masking for pages. (APM-307851)
- The "User sessions" page, when "Analyze more data" is selected, no longer shows "0 results" when fetching intermediate data for user sessions after some data has been already fetched. (APM-296188)
- Fixed an issue in which the displayed number of open problems differed between mobile app and web UI. (APM-314219)
- When license is expired, we provide now correct email in top bar. (APM-315761)
- Fixed migration to `Data explorer` for custom charts containing legacy plugin metrics. (APM-316472)
- Fixed display names of legacy plugin metrics in custom charting. (APM-316469)
- URLs that are used only as additional information for Synthetic actions are no longer shortened. (APM-314301)
- Fixed an issue with browser monitor breadcrumbs. (APM-313343)
- The number of affected Kubernetes nodes of the vulnerability details card now matches the number in the vulnerable components card. (CASP-10339)
- The "Dashboards" page now displays the admin toggle only for admins and devops users. (APM-311677)
- Honeycomb dashboard tiles (Host health, Service health, etc.) with filters and remote environment configured now have working drilldowns to the corresponding view on the remote tenant. (APM-313297)
- On "User sessions" page, when a category from the side panel is applied, the "Return to sessions" link now correctly preserves global timeframe and filters. (APM-312363)
- Fixed issue with missing timings for Synthetic user actions. (APM-313751)
- Darwin is now recognised as an Apple OS. (APM-311621)
- Resolved issue in which the Visually Complete 2.0 algorithm was not activated automatically in some circumstances. (APM-312152)
- The Session Replay tab on session details page is now also displayed for Synthetic users. (APM-307056)
- Fixed an availability alerting bug related to evaluation of the host group and host settings when "Use global anomaly detection settings" was selected after editing host or host group settings. (APM-315121)
- Calculated Service metrics API - passing null/empty as a value for the `metricDefinition` key parameter now returns an appropriate constraint violation message. (APM-313239)
- Session list filters are no longer lost when changing filters on "Session details" page. (APM-313278)
- GWT error page analytics data is now sent when the error page is displayed. (APM-313545)
- Fixed a bug related to custom error rules in the RUM - Web application configuration API (null fields were not allowed in PUT request). (APM-311723)
- Resolved issue in which navigation to problem details page sometimes resulted in 404 not found error. (APM-312975)
- Added support for `UserAction` and `Session` properties of type LONG_STRING to Configuration API. (APM-315513)
- Added information on the self-enablement page about OneAgent release 1.217 requirement for Log Monitoring v2. (APM-319208)
- Fixed issue that caused unauthorized requests to the Synthetic Monitoring API to return unexpected error response. (APM-317625)
- On the "User sessions" page, if a filter button would result in 0 matching items, the button is disabled. (APM-306087)
Cluster Management Console
- Vulnerability: Implemented hardening against server-side request forgery (SSRF) attack. (APM-307227)
- Resolved issue with slow FedRAMP SSO IdP-initiated sign-in and error upon sign-out. (APM-310965)
- Resolved issue that caused infinite dashboard page loading in certain circumstances when redirected from SSO sign-in for Managed clusters. (APM-316656)
Update 80 (Build 1.224.80)
This cumulative update contains 3 resolved issues and all previously released updates for the 1.224 release.
Cluster
- Resolved issue in which HTTP monitors using credentials did not receive a credential update when they were running on public locations, so that they used the old value for public location executions unless the monitors were updated specifically. (APM-320713)
- To avoid high Cassandra Database loads, REST API calls are no longer audited. (APM-321322)
- Dashboard filters such as Kubernetes cluster, Kubernetes namespace, and Kubernetes workload are now properly applied to `Data explorer` tiles with built-in metrics. (APM-319031)
Update 84 (Build 1.224.84)
This cumulative update contains 2 resolved issues and all previously released updates for the 1.224 release.
Cluster
- Resolved issue in which, when a cluster node hit memory hard cleanup and stopped processing incoming user sessions data, it failed to start processing incoming user sessions data again when the cluster node returned to normal memory conditions. (APM-322680)
- Resolved an issue in which, for some schemaless metrics sent via MINT API, data was lost. Affected: MINT metrics where the metric key was modified by Dynatrace on the cluster, including (1) count metrics where the metric key doesn't end with count, and (2) gauge metrics where the metric key does end with count. (APM-321104)
Update 96 (Build 1.224.96)
This cumulative update contains 5 resolved issues (including 1 vulnerability resulution) and all previously released updates for the 1.224 release.
Cluster
- Vulnerability: Enhanced access control in token creation. (APM-330727)
- Fixed an issue in which, in certain circumstances, the number of ingestible events was reduced. (APM-322923)
- Fixed issue with access permission to Kubernetes entity page for some users with management zone permissions only. (APM-322118)
- Users in a management zone no longer get a 403 HTTP error when opening Kubernetes pages. (APM-324758)
- Disabled Chromium auto-update. (APM-328535)
Update 99 (Build 1.224.99)
This cumulative update contains 2 vulnerability resolutions and all previously released updates for the 1.224 release.
Cluster
- Vulnerability: Enhanced access control in token creation. (APM-330727)
- Vulnerability: In response to CVE-2021-44228 (Log4j vulnerability), JVM parameters have been extended for Dynatrace Server and Elasticsearch. (APM-341605) | https://www.dynatrace.com/support/help/whats-new/release-notes/managed/sprint-224 | CC-MAIN-2022-05 | en | refinedweb |
.
If you're not using Visual Studio .NET (any edition) to run the examples in the book you'll need to include Imports statements within your code. See page 261 for details about the Imports statement. At the very least, you'll need to import the System namespace for every example:
System.Windows.Forms.MessageBox.Show("Hello World", _
"A First Look at VB.NET", _
System.Windows.Forms.MessageBoxButtons.OK, _
System.Windows.Forms.MessageBoxIcon.Information)
The version number of an assembly takes the form: Major.Minor.Build.Revision, not Major.Minor.Revision.Build.
The example programs on pages 64 - 66 use Debug.WriteLine to print statements instead of Console.WriteLine. In each case the call to Debug.WriteLine should be replaced with a call to Console.WriteLine, in order for the statements to be printed to the command line.
The final sentence of the first paragraph should read, "Consider the following code, which uses the System.Text.StringBuilder class:"
The final paragraph on this page should refer to Singles, not Shorts - as reflected in the code.
The first line of code on this page should check if lngLong is less than Short.MaxValue rather than greater than:
The final paragraph on this page refers to "Object Strict". It should refer to "Option Strict".
The 4th bullet in the summary should read, "Beware that parameters are passed ByValue by default so changes are not returned."
The final two lines of code on this page contain parantheses that break the code. The code should read:
The final paragraph of the Method Signatures
section states that the ByRef and ByVal modifiers can be used to create
distinct method signatures. This is incorrect. ByVal and ByRef cannot
be used as a differentiating feature when overloading a property or
procedure.
mdtBirthDate should be declared as a Date, not a String:
The code that checks if a key is present in the HashTable should read:
The 2nd paragraph on this page should read, "..., we also need to comment out the lines that refer to the OfficeNumber property, ..."
The MyBase keyword cannot be used to invoke a Friend element from the parent class if the parent class is in a different assembly.
The code that checks if a key exists in the HashTable should read:
The final paragraph on this page should read, "We can then move on to implement the other two elements defined..."
The third paragraph of the Reusing Common Implementation section should read, "..., our Value method is linked to IPrintableObject.Value using this clause:"
The first paragraph on this page should read, "... Do this by clicking on the project name in the Solution Explorer window and..."
The
method declaration for PrintPage() should include an underscore
character at the end of the second line top indicate that the
declaration continues onto the next line:
The fourth paragraph should read, "... Selecting the Microsoft Visual Basic.NET Compatability Runtime component..."
In order to use the example code on page 289 you will need to import System.IO, by adding a Imports System.IO
statement to your code. In addition, so that the code will run even if
the C:\mytext.txt file does not exist the first statement in the LoggingExample2() sub should read:
Please not that "prescription" is misspelt as "presciption" several times in the example code in this chapter.
The first line of code on this page should read:
The final sentence of the first paragraph on this page should read, "An example code snippet that uses the GetXml method is as follows:".
The code is presented as a snippet only and so is not included in the code download for the book.
The XML shown on this page should read:
Please note that the code on this page is extracted from the TraverseDataReader method on the following page. You'll need to wait until page 354 to find the complete code necessary to run the example demonstrating the Command object.
In response to reader
feedback some clarification about stored procedures is required. The
book states stored procedures are compiled prior to execution. In fact,
a stored procedure in SQL Server 7 and 2000 is compiled at execution
time, just like any other T-SQL statement. The execution plan of the
SQL statement is cached and SQL Server uses algorithms to reuse these
execution plans efficiently.
[You can find more information about stored procedures in SQL Server in the Stored Procedures and Execution Plans section of SQL Server Books Online.]
The code that checks if the password is blank should read:
If passwordTextbox.Text = "" Then
MessageBox.Show("Password cannot be blank")
End If
Using a <DefaultValue> attribute will allow designers (such as Visual Studio .NET) to set the property to a default value but please note that your property still needs to be set to an intitial value in code so that it can be used outside of such designers.
DefaultMaxSelectedItems()
Private Function DefaultMaxSelectedItems() As Integer
Dim attributes As AttributeCollection = _
TypeDescriptor.GetProperties(Me) ("MaxItemsSelected").Attributes
Dim myAttribute As DefaultValueAttribute = _
CType(attributes(GetType(DefaultValueAttribute)), _
DefaultValueAttribute)
Return CInt(myAttribute.Value)
End Function
The first paragraph refers to CheckedListBox1; it should refer to LimitedCheckedListBox1.
The first line of the OnPaint() method code should read:
The
first line on this page should read, "Client-side events are
automatically processed in the client browser without making a round
trip to the server. So, for..."
This page incorrectly states that you cannot use a @OutputCache directive in a Web User Control, when in fact you can.
Note that you will need to alter the code for the table element of the navigation bar in order to use the BackColor custom property
The menuSaveChanges_Click method is missing a Handles statement. It should read:
The thirds stored procedure on this page has a spelling mistake (sp_exectsql instead of sp_executesql) and doesn't declare the third parameter. It should read:
exec sp_executesql N'UPDATE authors SET city=@p1, state=@p2
WHERE au_id=@p3, N'@p1 varchar(10), @p2 char(2), @p3 varchar(11), @p1 =
'Scottsdale', @p2 = 'AZ', @p3 = '238-95-7766'
The code for the ColumnChanged() event handler should be:
The comments in the code on this page include some dots where there should be double-quote marks. The correct comment is:
The first line of the final paragraph on this page should read, "To test this out, take the HTTP URL and add ?WSDL on to the end."
The screenshot and instructions on page 836 for adding a strong name
are incorrect. In the final release of Visual Studio .NET 2002 the
Strong Name tab in the Common Properties section of the Property Pages
dialog has been removed.
Instead, you should use the AssemblyKeyFile attribute in an Assembly Information File, as described here.
The SendKeys class has been moved to the System.Windows.Forms namespace not the System.IO namespace. | http://www.wrox.com/WileyCDA/WroxTitle/Professional-VB-NET-2nd-Edition.productCd-0764544004,descCd-ERRATA.html | CC-MAIN-2020-05 | en | refinedweb |
Two days ago, Linus Torvalds, the principal developer of the Linux kernel announced the release of Linux 5.2 in his usual humorous way, describing it as a ‘Bobtail Squid’. The release has new additions like the inclusion of the Sound Open Firmware (SOF) project, improved pressure stall information, new mount API, significant performance improvements in the BFQ I/O scheduler, new GPU drivers, optional support for case-insensitive names in ext4 and more. The earlier version, Linux 5.1 was released exactly two months ago.
Torvalds says, “there really doesn’t seem to be any reason for another rc, since it’s been very quiet. Yes, I had a few pull requests since rc7, but they were all small, and I had many more that are for the upcoming merge window. So despite a fairly late core revert, I don’t see any real reason for another week of rc, and so we have a v5.2 with the normal release timing.”
Linux 5.2 also kicks off the Linux 5.3 merge window.
What’s new in Linux 5.2?
Inclusion of Sound Open Firmware (SOF) project
Linux 5.2 includes Sound Open Firmware (SOF) project, which has been created to reduce firmware issues by providing an open source platform to create open source firmware for audio DSPs. The SOF project is backed by Intel and Google. This will enable users to have open source firmware, personalize it, and also use the power of the DSP processors in their sound cards in imaginative ways.
Improved Pressure Stall information
With this release, users can configure sensitive thresholds and use poll() and friends to be notified, whenever a certain pressure threshold is breached within the user-defined time window. This allows Android to monitor and prevent mounting memory shortages, before they cause problems for the user.
New mount API
With Linux 5.2, Linux developers have redesigned the entire mount API, thus resulting in addition of six new syscalls: fsopen(2), fsconfig(2), fsmount(2), move_mount(2), fspick(2), and open_tree(2).
The previous mount(2) interface was not easy for applications and users to understand the returned errors, was not suitable for the specification of multiple sources such as overlayfs need and it was not possible to mount a file system into another mount namespace.
Significant performance improvements to that proportional weight.
In Linux 5.2, there have been performance tweaks to the BFQ I/O scheduler such that the application start-up time has increased under load by up to 80%. This drastically increases the performance and decreases the execution time of the BFQ I/O scheduler.
New GPU drivers for ARM Mali devices
In the past, the Linux community had to create open source drivers for the Mali GPUs, as ARM has never been open source friendly with the GPU drivers.
Linux 5.2 has two new community drivers for ARM Mali accelerators, such that lima covers the older t4xx and panfrost the newer 6xx/7xx series. This is expected to help the ARM Mali accelerators.
More CPU bug protection, and “mitigations” boot option
Linux 5.2 release has more bug infrastructure added to deal with the Microarchitectural Data Sampling (MDS) hardware vulnerability, thus allowing access to data available in various CPU internal buffers.
Also, in order to help users to deal with the ever increasing amount of CPU bugs across different architectures, the kernel boot option mitigations= has been added. It’s a set of curated, arch-independent options to enable/disable protections regardless irrespective of the system they are running in.
clone(2) to return pidfds
Due to the design of Unix, sending signals to processes or gathering /proc information is not always safe due to the possibility of PID reuse.
With clone(2) returning to pidfds, it will allow users to get pids at process creation time, which are usable with the pidfd_send_signal(2) syscall. pidfds helps Linux to avoid this problem, and the new clone(2) flag will make it even easier to get pidfs, thus providing an easy way to signal and process PID metadata safely.
Optional support for case-insensitive names in ext4
This release implements support for case-insensitive file name lookups in ext4, based on the feature bit and the encoding stored in the superblock. This will enable users to configure directories with chattr +F (EXT4_CASEFOLD_FL) attribute.
This attribute is only enabled on empty directories for filesystems that support the encoding feature, thus preventing collision of file names that differ by case.
Freezer controller for cgroups v2 added
A freezer controller provides an ability to stop the workload in a cgroup and temporarily free up some resources (cpu, io, network bandwidth and, potentially, memory) for some other tasks. Cgroup v2 lacked this functionality, until this release. This functionality is always available and is represented by cgroup.freeze and cgroup.events cgroup control files.
Device mapper dust target added
Linux 5.2 adds a device mapper ‘dust’ target to simulate a device that has failing sectors and/or read failures. It also adds the ability to enable the emulation of the read failures at an arbitrary time. The ‘dust’ target aims to help storage developers and sysadmins that want to test their storage stack.
Users are quite happy with the Linux 5.2 release.
Great, upgrade now
— 张三斤 (@ejizhan) July 8, 2019
😮 Oooh nice. 😁
Was waiting on those fpu opts, I'll probably dl and compile tonight.
— NEOAethyr (@konigssohne) July 7, 2019
wow
— ゆずソフト萌え (@YuzuSoftMoe) July 9, 2019
Linux 5.2 has many other performance improvements introduced in the file systems, memory management, block layer and more.
Visit the kernelnewbies page, for more details.
Read Next
“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
OpenWrt 18.06.4 released with updated Linux kernel, security fixes Curl and the Linux kernel and much more! | https://hub.packtpub.com/linux-5-2-releases-with-inclusion-of-sound-open-firmware-project-new-mount-api-improved-pressure-stall-information-and-more/ | CC-MAIN-2020-05 | en | refinedweb |
Digital certificates and encryption in Exchange Server external clients (computers and mobile devices), and external messaging servers.
Note.
This topic describes the different types of certificates that are available, the default configuration for certificates in Exchange, and recommendations for additional certificates that you'll need to use with Exchange.
For the procedures that are required for certificates in Exchange Server, see Certificate procedures in Exchange Server.
Digital certificates overview
Digital certificates are electronic files that work like an online password to verify the identity of a user or a computer. They're used to create the encrypted channel that's used for client communications. A certificate is a digital statement that's issued by a certification authority (CA) that vouches for the identity of the certificate holder and enables the parties to communicate in a secure manner by using encryption.
Digital certificates provide the following services:
Encryption: They help protect the data that's exchanged from theft or tampering.
Authentication: They verify that their holders (people, web sites, and even network devices such as routers) are truly who or what they claim to be. Typically, the authentication is one-way, where the source verifies the identity of the target, but mutual TLS authentication is also possible.
Certificates can be issued for several uses. For example: web user authentication, web server authentication, Secure/Multipurpose Internet Mail Extensions (S/MIME), Internet Protocol security (IPsec), and code signing.
A certificate contains a public key and attaches that public key to the identity of a person, computer, or service that holds the corresponding private key. The public and private keys are used by the client and the server to encrypt data before it's transmitted. For Windows users, computers, and services, trust in the CA is established when the root certificate is defined in the trusted root certificate store, and the certificate contains a valid certification path. A certificate is considered valid if it hasn't been revoked (it isn't in the CA's certificate revocation list or CRL), or hasn't expired.
The three primary types of digital certificates are described in the following table.
To prove that a certificate holder is who they claim to be, the certificate must accurately identify the certificate holder to other clients, devices, or servers. The three basic methods to do this are described in the following table.
Certificates in Exchange
When you install Exchange 2016 or Exchange 2019 on a server, two self-signed certificates are created and installed by Exchange. A third self-signed certificate is created and installed by Microsoft Windows for the Web Management service in Internet Information Services (IIS). These three certificates are visible in the Exchange admin center (EAC) and the Exchange Management Shell, and are described in the following table:
The properties of these self-signed certificates are described in the Properties of the default self-signed certificates section.
These are the key issues that you need to consider when it comes to certificates in Exchange:
You don't need to replace the Microsoft Exchange self-signed certificate to encrypt network traffic between Exchange servers and services in your organization.
You need additional certificates to encrypt connections to Exchange servers by internal and external clients.
You need additional certificates to force the encryption of SMTP connections between Exchange servers and external messaging servers.
The following elements of planning and deployment for Exchange Server are important drivers for your certificate requirements:
Load balancing: Do you plan to terminate the encrypted channel at load balancer or reverse proxy server, use Layer 4 or Layer 7 load balancers, and use session affinity or no session affinity? For more information, see Load Balancing in Exchange 2016.
Namespace planning: What versions of Exchange are present, are you using the bound or unbound namespace model, and are you using split-brain DNS (configuring different IP addresses for the same host based on internal vs. external access)? For more information, see Namespace Planning in Exchange 2016.
Client connectivity: What services will your clients use (web-based services, POP, IMAP, etc.) and what versions of Exchange are involved? For more information, see the following topics:
Certificate requirements for Exchange services
The Exchange services that certificates can be assigned to are described in the following table.
* Kerberos authentication and Kerberos encryption are used for remote PowerShell access, from both the Exchange admin center and the Exchange Management Shell. Therefore, you don't need to configure your certificates for use with remote PowerShell, as long as you connect directly to an Exchange server (not to a load balanced namespace). To use remote PowerShell to connect to an Exchange server from a computer that isn't a member of the domain, or to connect from the Internet, you need to configure your certificates for use with remote PowerShell.
Best practices for Exchange certificates
Although the configuration of your organization's digital certificates will vary based on its specific needs, information about best practices has been included to help you choose the digital certificate configuration that's right for you.
Use as few certificates as possible: Very likely, this means using SAN certificates or wildcard certificates. In terms of interoperability with Exchange, both are functionally equivalent. The decision on whether to use a SAN certificate vs a wildcard certificate is more about the key capabilities or limitations (real or perceived) for each type of certificate as described in the Digital certificates overview..
Use certificates from a commercial CA for client and external server connections: Although you can configure most clients to trust any certificate or certificate issuer, it's much easier to use a certificate from a commercial CA for client connections to your Exchange servers. No configuration is required on the client to trust a certificate that's issued by a commercial CA. Many commercial CAs offer certificates that are configured specifically for Exchange. You can use the EAC or the Exchange Management Shell to generate certificate requests that work with most commercial CAs.
Choose the right commercial CA: Compare certificate prices and features between CAs. For example:
Verify that the CA is trusted by the clients (operating systems, browsers, and mobile devices) that connect to your Exchange servers.
Verify that the CA supports the kind of certificate that you need. For example, not all CAs support SAN certificates, the CA might limit the number of common names that you can use in a SAN certificate, or the CA may charge extra based on the number of common names in a SAN certificate.
See if the CA offers a grace period during which you can add additional common names to SAN certificates after they're issued without being charged.
Verify that the certificate's license allows you to use the certificate on the required number of servers. Some CAs only allow you to use the certificate on one server.
Use the Exchange certificate wizard: A common error when you create certificates is to forget one or more common names that are required for the services that you want to use. The certificate wizard in the Exchange admin center helps you include the correct list of common names in the certificate request. The wizard lets you specify the services that will use the certificate, and includes the common names that you need to have in the certificate for those services. Run the certificate wizard when you've deployed your initial set of Exchange 2016 or Exchange 2019 servers and determined which host names to use for the different services for your deployment.
Use as few host names as possible: Minimizing the number of host names in SAN certificates reduces the complexity that's involved in certificate management. Don't feel obligated to include the host names of individual Exchange servers in SAN certificates if the intended use for the certificate doesn't require it. Typically, you only need to include the DNS names that are presented to the internal clients, external clients, or external servers that use the certificate to connect to Exchange.
For a simple Exchange Server organization named Contoso, this is a hypothetical example of the minimum host names that would be required:
mail.contoso.com: This host name covers most connections to Exchange, including Outlook, Outlook on the web, OAB distribution, Exchange Web Services, Exchange admin center, and Exchange ActiveSync.
autodiscover.contoso.com: This specific host name is required by clients that support Autodiscover, including Outlook, Exchange ActiveSync, and Exchange Web Services clients. For more information, see Autodiscover service.
Properties of the default self-signed certificates
Some of the more interesting properties of the default self-signed certificates that are visible in the Exchange admin center and/or the Exchange Management Shell on an Exchange server are described in the following table.
*These properties aren't visible in the standard view in the Exchange Management Shell. To see them, you need to specify the property name (exact name or wildcard match) with the Format-Table or Format-List cmdlets. For example:
Get-ExchangeCertificate -Thumbprint <Thumbprint> | Format-List *
Get-ExchangeCertificate -Thumbprint <Thumbprint> | Format-Table -Auto FriendlyName,*PrivateKey*
For more information, see Get-ExchangeCertificate.
Further details about the default self-signed certificates that are visible in Windows Certificate Manger are described in the following table.
Typically, you don't use Windows Certificate Manger to manage Exchange certificates (use the Exchange admin center or the Exchange Management Shell). Note that the WMSVC certificate isn't an Exchange certificate.
Feedback | https://docs.microsoft.com/en-us/Exchange/architecture/client-access/certificates?redirectedfrom=MSDN&view=exchserver-2019 | CC-MAIN-2020-05 | en | refinedweb |
The dataflow analysis tracks local definitions, undefinitions and references to variables on different paths on the data flow.
From those informations there can be found various problems.
public class Foo {
public void foo() {
int buz = 5;
buz = 6; // redefinition of buz -> dd-anomaly
foo(buz);
buz = 2;
} // buz is undefined when leaving scope -> du-anomaly
} | https://tics.tiobe.com/viewerJava/index.php?ID=63&CSTD=Rule | CC-MAIN-2020-05 | en | refinedweb |
import "github.com/luci/luci-go/config/validation"
Package validation provides helpers for performing and setting up handlers for config validation related requests from luci-config.
Package validation provides a helper for performing config validations.
doc.go handler.go net_util.go rules.go validation.go
InstallHandlers installs the metadata and validation handlers that use the given validation rules.
It does not implement any authentication checks, thus the passed in router.MiddlewareChain should implement any necessary authentication checks.
ValidateHostname returns an error if the given string is not a valid RFC1123 hostname.
ConfigPattern is a pair of pattern.Pattern of configSets and paths that the importing service is responsible for validating..
Enter descends into a sub-element when validating a nested structure.
Useful for defining context. A current path of elements shows up in validation messages.
The reverse is Exit.
Error records the given error as a validation error.
Errorf records the given format string and args as a validation error.
Exit pops the current element we are visiting from the stack.
This is the reverse of Enter. Each Enter must have corresponding Exit. Use functions and defers to ensure this, if it's otherwise hard to track.
Finalize returns *Error if some validation errors were recorded.
Returns nil otherwise.
SetFile records that what follows is errors for this particular file.
Changing the file resets the current element (see Enter/Exit).
type Error struct { // Errors is a list of individual validation errors. // // Each one is annotated with "file" string and a logical path pointing to // the element that contains the error. It is provided as a slice of strings // in "element" annotation. Errors errors.MultiError }
Error is an error with details of failed validation.
Returned by Context.Finalize().
Error makes *Error implement 'error' interface.
Func performs the actual config validation and stores the associated results in the validation.Context.
Returns an error if the validation process itself fails due to causes unrelated to the data being validated. This will result in HTTP Internal Server Error reply, instructing the config service to retry.
RuleSet is a helper for building Validator from a set of rules: each rule specifies a pattern for config set.
Rules is the default validation rule set used by the process.
Individual packages may register vars and rules there during init() time.
Add registers a validation function for given configSet and path patterns.
The pattern may contain placeholders (e.g. "${appid}") that will be resolved before the actual validation starts. All such placeholder variables must be registered prior to adding rules that reference them (or 'Add' will panic).
'Add' will also try to validate the patterns by substituting all placeholders in them with empty strings and trying to render the resulting pattern. It will panic if the pattern is invalid.
ConfigPatterns renders all registered config patterns and returns them.
Used by the metadata handler to notify the config service about config files we understand.
RegisterVar registers a placeholder that can be used in patterns as ${name}.
Such placeholder is rendered into an actual value via the given callback before the validation starts. The value of the placeholder is injected into the pattern string as is. So for example if the pattern is 'regex:...', the placeholder value can be a chunk of regexp.
The primary use case for this mechanism is too allow to register rule patterns that depend on not-yet known values during init() time.
Panics if such variable is already registered.
ValidateConfig picks all rules matching the given file and executes their validation callbacks.
If there's no rule matching the file, the validation is skipped. If there are multiple rules that match the file, they all are used (in order of their) registration.
Package validation imports 13 packages (graph). Updated 2019-07-19. Refresh now. Tools for package owners. | https://godoc.org/github.com/luci/luci-go/config/validation | CC-MAIN-2020-05 | en | refinedweb |
#include <TIL_UVEnlarger.h>
Definition at line 22 of file TIL_UVEnlarger.h.
Definition at line 25 of file TIL_UVEnlarger.h.
If craster is NULL, then pixels in raster that has zero value will be assumed to have zero alpha.
How many pixels to pad each UV island with, when using the flood filling scheme for enlarging.
Definition at line 54 of file TIL_UVEnlarger.h.
Sets the scheme to use when enlarging.
Definition at line 58 of file TIL_UVEnlarger.h. | http://www.sidefx.com/docs/hdk/class_t_i_l___u_v_enlarger.html | CC-MAIN-2018-30 | en | refinedweb |
Hey,
I’m new to aerospike and consider using it for our project. We collect several hundred billion records in a years time.
If I understood correctly from the documents, an aerospike server is limited to a record count that is determined by the RAM.
Is that per namespace or for all namespace? Lets say my server has 64GB ram. Will it only be able to contain roughly 1Billion records?
I’m wondering because the hash of the primary key was said to be unique per namespace. So I’m wondering if the limit is per namespace or global?
Best regards, | https://discuss.aerospike.com/t/max-limits-of-records-in-a-server/1941 | CC-MAIN-2018-30 | en | refinedweb |
A feature of Scala that I hadn’t used before was Stream.cons. This allows you to create a stream (essentially, a lazily evaluated list) from a function. So, for doing some work on files based on their position in the directory hierarchy, we can create a list of their parents:
def fileParents(file: File) : Stream[File] = { val parent = file.getParentFile if (parent == null) Stream.empty else Stream.cons(parent, fileParents(parent)) }
and then use standard Scala functionality for filtering and finding things in lists, rather than having to write code that iterates through the parent files manually. | http://67bricks.com/blog/?paged=5 | CC-MAIN-2018-30 | en | refinedweb |
Digital Blasphemy is a great site where the digital artist Ryan Bliss posts a wide variety of wallpapers for download.
While a selection of the pieces are available for free, subscribing to the site provides access to all of the images at a variety of resolutions. Given how much I like Ryan's art, I signed up for both the lifetime supporter subscription and for his Patreon to help ensure I'll have more artwork to download in the future :)
I've long used a random selection of the Digital Blasphemy artwork as the desktop background on my personal laptop, but for a long time updating the available images was a matter of downloading the complete zip archives at the relevant resolutions, unzipping them to the appropriate location, and then going through them to delete the few that I know I don't like (or don't mind myself, but wouldn't be happy to have on-screen at a professional conference).
Eventually, I decided to solve the problem in a more sensible way, by figuring out a way to automate the process of checking for images I didn't have (in the resolutions I care about) and downloading them to the right location.
Packaging that up properly as a command line application would be a lot of work that wouldn't really help me, but by using an IPython notebook, I was able to convert my experimental code to see how I could retrieve the relevant data from the site directly into something that actually solved my original problem :)
If the name Digital Blasphemy sounds vaguely familiar, it may be due to this image (or one of its earlier incarnations):
NOTE: To grab an initial set of member-only images, I recommend using the zip archives Ryan publishes. This notebook is designed to handle collecting new images every few months, without adding back in any images you decided you didn't want, not for a bulk download of the entire gallery.
requests)
LOCAL_MIRRORglobal below to the destination directory
RESOLUTIONSglobal for the image resolutions you want to download
access.cfgfile in your local mirror directory with your Digital Blasphemy login credentials (DB just uses HTTP Basic Auth to control access, and some authenticated pages are currently only available over HTTP, so assume any password you use here can be compromised in transit)
BLOCKEDglobal to nominate particular images you don't want to download
import os.path import configparser import requests import re DRY_RUN = False LOCAL_MIRROR = os.path.expanduser("~/Pictures/Digital Blasphemy/") REMOTE_HOME_URL = '' REMOTE_CONTENT_URL = '' RESOLUTIONS = ["1440p"] LOCAL_RES_DIRS = {res:os.path.join(LOCAL_MIRROR, res) for res in RESOLUTIONS} # Slightly hacky to use os.path.join on URLs, but it works well enough in this case REMOTE_RES_URLS = {res:os.path.join(REMOTE_CONTENT_URL, res) for res in RESOLUTIONS} # Basic config file for Digital Blasphemy login credentials CONFIG_FILE = os.path.join(LOCAL_MIRROR, "access.cfg") config = configparser.RawConfigParser() config.read(CONFIG_FILE) db_username = config.get("login", "username") db_passwd = config.get("login", "password") # Page retrieval helper def get_page(db_url): """Retrieve a Digital Blasphemy page using the configured credentials""" return requests.get(db_url, auth=(db_username, db_passwd))
# To mangle a quote from a fine show: # "They say never parse HTML with regular expressions, # but it is, on occasion, an expedient hack" :) def iter_published_images(): content = get_page(REMOTE_HOME_URL).text for m in re.finditer('href="/preview.shtml\?i=(.*?)"', content): yield m.group(1) PUBLISHED = list(iter_published_images())
# I use my laptop for conference presentations # If I either don't really like a wallpaper or I'm # not happy displaying it at a professional # conference, I ensure I don't mirror it BLOCKED = ("chamelea", "emblem") # I also want to filter any images that are from the pickle jar # (experimental versions that aren't included in the main image index) ACCEPTABLE = set(name for name in PUBLISHED if not name.startswith(BLOCKED))
def iter_remote_file_list(res): content = get_page(REMOTE_RES_URLS[res]).text # Complete hack to get the file list from the server index page for m in re.finditer(r'<a href="(.*?)(%s\.jpg)">' % res, content): candidate = m.group(1) if candidate in ACCEPTABLE: yield candidate + m.group(2) def get_remote_files(res): return set(iter_remote_file_list(res)) def get_local_files(res): files = os.listdir(LOCAL_RES_DIRS[res]) return set(os.path.basename(f) for f in files)
import time def get_images_to_download(res): remote = get_remote_files(res) local = get_local_files(res) return remote - local def download_image(source_url, dest_file, dryrun=True): print(" Downloading {} -> {}".format(source_url, dest_file)) if dryrun: print(" Dry run only, skipping download") return data = get_page(source_url).content with open(dest_file, 'wb') as f: f.write(data) return len(data) # This assumes the local destination directory already exists def download_missing_images_for_res(res, dryrun=True): source_url = REMOTE_RES_URLS[res] dest_dir = LOCAL_RES_DIRS[res] delay = 0.05 if dryrun else 0.5 images = get_images_to_download(res) total = len(images) if not total: print("No {} images to download".format(res)) return print("{} {} images to be downloaded".format(total, res)) downloaded_images = [] for i, image in enumerate(images, start=1): print("Downloading {} image {}/{}".format(res, i, total)) source = os.path.join(source_url, image) dest = os.path.join(dest_dir, image) download_image(source, dest, dryrun) downloaded_images.append(dest) time.sleep(delay) # Be nice to the server return downloaded_images def download_missing_images(dryrun=True): updated_resolutions = {} for res in RESOLUTIONS: images = download_missing_images_for_res(res, dryrun) if images: updated_resolutions[res] = images return updated_resolutions
from IPython.display import display, Image def show_images(filenames): for filename in filenames: display(Image(filename=filename))
downloaded = download_missing_images(dryrun=DRY_RUN)
3 1440p images to be downloaded Downloading 1440p image 1/3 Downloading -> /home/ncoghlan/Pictures/Digital Blasphemy/1440p/acumen11440p.jpg Downloading 1440p image 2/3 Downloading -> /home/ncoghlan/Pictures/Digital Blasphemy/1440p/shiftingsandsnight11440p.jpg Downloading 1440p image 3/3 Downloading -> /home/ncoghlan/Pictures/Digital Blasphemy/1440p/shiftingsands11440p.jpg
if downloaded and not DRY_RUN: show_images(downloaded[RESOLUTIONS[0]]) | https://nbviewer.jupyter.org/urls/bitbucket.org/ncoghlan/misc/raw/default/notebooks/Digital%20Blasphemy.ipynb | CC-MAIN-2018-30 | en | refinedweb |
My code looks like this
ifstream myfile;
string fileName;
cin >> filename;
myfile.open(fileName);
I'm getting a rather lengthy error message and can't figure out how to pipe it to a file(I tried a couple things and think I might not have permissions, since I'm on my school's remote terminal), so I took a screenshot.
Sorry if that's taboo, but it was the quickest thing I could think of. The line it is referencing(main.cpp 22) is the line
myfile.open(fileName);
Again, compiles and runs perfectly in Visual Studio 2013, but not in this linux shell. Honestly, if anyone could help me understand the error message that would be a start.
Thanks in advance for any help
You need to use the compiler flag
-std=c++11. The following program compiles and builds fine on my machine using:
g++ -Wall -std=c++11 test-484.cc -o test-484
#include <iostream> #include <fstream> #include <string> using namespace std; int main() { ifstream myfile; string fileName; cin >> fileName; myfile.open(fileName); return 0; }
BTW, in your posted code, you have
string filename; ^^ lower case n not upper case N
If you are not using C++11, you can also do the following:
#include <iostream> #include <fstream> #include <string> int main () { std::ifstream myfile; std::string fileName; std::cin >> fileName; myfile.open(fileName.c_str()); }
But accessing the C string directly this way is not a good thing, if possible, follow R Sahu's suggestion to use C++11. | http://m.dlxedu.com/m/askdetail/3/25e82dc90bb97f3050c8804ff927702b.html | CC-MAIN-2018-30 | en | refinedweb |
Sessions are an important part of any web application. Session is more of a concept than a Class or Object or Keyword. ASP.Net’s Page class allows you to store and retrieve variables from current user session in following way:
Session[“MyVariable”] = 2012;
Session is unique to a user’s browser session. When user opens your application in a browser, IIS assigns a Session ID (a guid like unique id) to the browser session. If user opens another browser (Firefox, Chrome, IE) from the same machine, IIS will assign a different Session Id to the newly opened browser session. Browser receives this Session Id and passes back to the server via Cookie for each request it makes. If your application is configured to maintain the cookie-less session then session id is passed as part of the URL for each request. When user opens your application from a different tab of the same browser then browser will use the same session Id.
Within .Net the session is actually available from: HttpContext.Current.Session object. So, even when you use Session[“MyVariable”] in your code behind, you are actually accessing HttpContext.Current.Session[“MyVariable”] object.
The default practice is to use Session[“MyVariable”] type of syntax. But there are disadvantages to using this syntax:
- The return value is object so you will have to cast it to the type you need.
- You don’t know if the variable is set already or not, so you will have to check for null.
- If there is typo in the string “MyVariable” – there is no compilation error and you may receive run time error.
- There is no intellisense to find out what session variables are already available to you.
Hence, better practice should be to make use of Singleton pattern and create your own session class. This practice defies all the disadvantages listed above.
Create a class SiteSession like following. You can give it any name that you like:
[public class SiteSession { protected SiteSession() { } public static SiteSession Current { get { if (null == HttpContext.Current.Session) return null; if (null == HttpContext.Current.Session["SiteSession"]) { HttpContext.Current.Session["SiteSession"] = new SiteSession(); } return (SiteSession)HttpContext.Current.Session["SiteSession"]; } }//Current public void Destroy() { HttpContext.Current.Session.Clear(); } //now declare all session variables Public string CurrentState { get;set; } Public string UserName { get; set; } Public CustomObject CurrentObject { get; set; } }]
The class needs to be declared [Serializable] to store it in Session. And this class is stored in Session using its ‘Current’ property. It follows singleton pattern so you cannot create the instance of it and its only instance is stored in the HttpContext.Current.Session object.
There are three session variables that are declared in above example – CurrentState, UserName, CurrentObject. In your code you can use them anywhere as SiteSession.Current.UserName or SiteSession.Current.CurrentObject. IntelliSense will show you the names as well as you do not have to do type-casting.
Please note that Session[“CurrentState”] or Session[“UserName”] is not equivalent to SiteSession.Current.CurrentState and SiteSession.Current.UserName. You should no more use Session[“MyVariable”] syntax.
If you see any disadvantage using this method, please give your feedback. | http://www.aisoftwarellc.com/Blog/Post/ASP.Net-Tip-Session-Management/6 | CC-MAIN-2018-30 | en | refinedweb |
Random Walks and the Arcsine Law
Random Walks and the Arcsine.
Suppose you stand at 0 and flip a fair coin. If the coin comes up heads, you take a step to the right. Otherwise you take a step to the left. How much of the time will you spend to the right of where you started?
As the number of steps N goes to infinity, the probability that the proportion of your time in positive territory is less than x approaches 2 arcsin(√x)/π. The arcsine term gives this rule its name, the arcsine law.
Here’s a little Python script to illustrate the arcsine law.
import random from numpy import arcsin, pi, sqrt def step(): u = random.random() return 1 if u < 0.5 else -1 M = 1000 # outer loop N = 1000 # inner loop x = 0.3 # Use any 0 < x < 1 you'd like. outer_count = 0 for _ in range(M): n = 0 position= 0 inner_count = 0 for __ in range(N): position += step() if position > 0: inner_count += 1 if inner_count/N < x: outer_count += 1 print (outer_count/M) print (2*arcsin(sqrt(x))/pi) }} | https://dzone.com/articles/random-walks-and-arcsine-law | CC-MAIN-2018-30 | en | refinedweb |
Friends:
I am a professor doing research with undergraduates. I am turning to this forum because it seems MUCH more collegial, kind, helpful and thoughtful than any other forums which I have read. Here is the issue:
I am testing the Time-of-Flight board for a project and am unclear about inserting a delay value into the code that is based on distance measurements. In summary, at long distances an ERM is activated in one pattern, and at short distances it is activated in another pattern. I need a delay between activation patterns that is proportional to distance but have been unsuccessful. Sample code is inserted with comments. The problem is that the delay must be very short if the distance drops suddenly. I have been unsuccessful in doing so. Suggestions about how to code that DELAY?
Ideally, I’d like to compare a stored previous distance (“int inches”) with a new distance reading (“int new_inches”) but I have been unsuccessful. I would really appreciate specific suggestions of code syntax and placement.
Thanks in advance.
The portion of the sketch with the problem is noted by this line “//HERE IS THE PROBLEM PORTION**”
#include <Wire.h> #include "Adafruit_VL53L0X.h" Adafruit_VL53L0X lox = Adafruit_VL53L0X(); int pin5 = 5;// to debug with LED int ERM = 6; int y = 10; int inches; void setup() { pinMode (pin5, OUTPUT);//to debug with LED Serial.begin(115200); // wait until serial port opens for native USB devices while (! Serial) { delay(1); } Serial.println(F("Adafruit VL53L0X test")); if (!lox.begin()) { Serial.println(F("Failed to boot VL53L0X")); while (1); } // power Serial.println(F("VL53L0X test\n\n")); } void loop() { VL53L0X_RangingMeasurementData_t measure; //Serial.print(F("Reading a measurement... ")); lox.rangingTest(&measure, false); // pass in 'true' to get debug data printout! if (measure.RangeStatus != 4) { // phase failures have incorrect data } //This is the IF condition that activates the ERM at long distances // 24 to 60 inches away. Inches = mm*0.039. if (measure.RangeMilliMeter >= 600 && measure.RangeMilliMeter <= 1524) { int inches = (measure.RangeMilliMeter * 0.04); //convert mm to inches int PWM = ((-27 * log(inches)) + 250); for (int x = 0 ; x <= PWM; x += 4) { analogWrite( 6 , x ); delay (y); analogWrite( 6 , 0 ); } for (int x = PWM; x >= 0; x -= 4) { analogWrite( 6 , x ); delay (y); analogWrite( 6 , 0 ); } Serial.print(F("long inches= ")); Serial.println (inches); Serial.println (); } //At this point, the sketch needs to reassess the distance measurement and //DELAY between stimuli (the ISI or inter-stimulus delay) based on that //new distance (i.e., long for far away but very short if distance //has suddenly dropped. However I'm not sure how best to do this. //If the distance has dropped suddenly after the last assessment //here, I need to read a new distance measurement from the incoming data stream. //I tried creating a new variable to compare to previous inches but have been unsuccessful. //The IF conditional below is not working as I'd like. I need the delay to be short //immediately if the distance drops suddenly but stay long if the //distance stays long. Ideas on how to code this? if (measure.RangeMilliMeter > 0 && measure.RangeMilliMeter < 600) { delay (50); } else if (measure.RangeMilliMeter > 600) { int inches = (measure.RangeMilliMeter * 0.04); // convert mm to inches int ISI01 = (151 * (exp(0.0732 * inches))); // calculate inter-stimulus interval #1 delay (ISI01); //inter-stimulus #1 } //This is the IF condition that activates the ERM at short distances // 0 to 24 inches away. Inches = mm*0.039. if (measure.RangeMilliMeter >= 0 && measure.RangeMilliMeter <= 600) { int inches = (measure.RangeMilliMeter * 0.04); // mm to inches int ISI02 = (151 * (exp(0.0732 * inches))); //e = 2.72; calculate ISI #2 int PWM = ((-27 * log(inches)) + 250); Serial.print(F("short inches= ")); Serial.println (inches); Serial.println (); analogWrite( 6 , PWM ); delay (70); analogWrite( 6 , 0 ); delay (ISI02) } // close if (measure.RangeMilliMeter 0-600) else { //Serial.print(F("waiting")); //Serial.println(); } } //closes Void loop | https://forum.pololu.com/t/vl53l0x-compare-values/15178/1 | CC-MAIN-2018-30 | en | refinedweb |
Flush not working everytime in editor
Hi,
I'm using the editor framework and getting strange behaviour while running in devmode. When I call flush() on the driver it works sometimes and sometimes not. Here is my code:
Code:
public class TrailerTypeEditor implements IsWidget, Editor<TrailerType> { private final MaintenanceServiceAsync maintenanceService = GWT.create(MaintenanceService.class); private Dialog dialog; private AsyncCallback<TrailerType> callback; interface TrailerTypeDriver extends SimpleBeanEditorDriver<TrailerType, TrailerTypeEditor> { } TrailerType vehicleType; // editor fields TextField description; private TrailerTypeDriver driver = GWT.create(TrailerTypeDriver.class); public TrailerTypeEditor() { } public TrailerTypeEditor(TrailerType lType, AsyncCallback<TrailerType> callback) { this.vehicleType = lType; this.callback = callback; boolean edit = vehicleType.getPk() != null && vehicleType.getPk() > 0; dialog = new Dialog(); dialog.setHeadingText((edit ? "Edit " : "Add ") + "Trailer Type"); dialog.add(asWidget()); driver.initialize(this); driver.edit(this.vehicleType); configureButtons(); dialog.show(); } public Widget asWidget() { VerticalLayoutContainer c = new VerticalLayoutContainer(); description = new TextField(); description.setAllowBlank(false); description.setName("description"); c.add(new FieldLabel(description, "Description"), new VerticalLayoutContainer.VerticalLayoutData(1, -1)); return c; } private void configureButtons() { // dialog.getButtonBar().clear(); dialog.getButtonBar().setPack(BoxLayoutContainer.BoxLayoutPack.CENTER); CellButtonBase okButton = dialog.getButtonById(Dialog.PredefinedButton.OK.toString()); okButton.setHTML("Ok"); okButton.setIcon(RMS.icons.ok()); dialog.getButtonBar().add(okButton); okButton.addSelectHandler(new SelectEvent.SelectHandler() { @Override public void onSelect(SelectEvent event) { vehicleType = driver.flush(); maintenanceService.updateEntityBase(vehicleType, new AsyncCallbackImp<TrailerType>() { @Override public void onSuccess(TrailerType result) { System.out.println(); callback.onSuccess(result); } }); dialog.hide(); } }); CellButtonBase cancelButton = new CellButtonBase(); cancelButton.setHTML("Cancel"); cancelButton.setIcon(RMS.icons.cancel()); dialog.getButtonBar().add(cancelButton); cancelButton.addSelectHandler(new SelectEvent.SelectHandler() { @Override public void onSelect(SelectEvent event) { dialog.hide(); } }); } }
Hi I manage to figure out what the problem is. This only happens when you edit a textField for example and hit the OK button when the flush happens. The value in the Widget is not updated. It is still in edit mode. I tried to call the finishEditing() method for all the Widgets that is edited before calling flush() but it is still not working. How can i force all edit widgets to update there edited value before calling flush()
okButton.addSelectHandler(new SelectEvent.SelectHandler() {
@Override
public void onSelect(SelectEvent event) {
description.finishEditing();
if (driver.isDirty()) {
vehicleType = driver.flush();
maintenanceService.updateEntityBase(vehicleType, new AsyncCallbackImp<TrailerType>() {
@Override
public void onSuccess(TrailerType result) {
System.out.println();
callback.onSuccess(result);
}
});
}
dialog.hide();
}
});
Hi!
I am having the exact same problem. Any work-around found?
There was an issue related to this in pre-release versions of GXT 3, but has been resolved to my knowledge in 3.0.0. Take a look at the example at - try to edit the field, and notice what happens if you don't leave the field and click on the 'save' button.
The code used there, nearly identical to the example provided by the original poster:
Code:
save.addSelectHandler(new SelectHandler() { @Override public void onSelect(SelectEvent event) { driver.flush(); stock = driver.flush(); if (driver.hasErrors()) { new MessageBox("Please correct the errors before saving.").show(); return; } updateDisplay(); stockStore.update(stock); } });
FinishEditing
I created a method which I call before I do the flush.
Code:
public static boolean validateFormPanel(FormPanel formPanel) { for (int i = 0; i < formPanel.getFields().size(); i++) { IsField<?> isField = formPanel.getFields().get(i); if (isField instanceof Field ) { try { ((Field) isField).finishEditing(); } catch (NullPointerException e) { } } isField.validate(false); } return formPanel.isValid(); }
hi,
@colin
I am doing exactly as you wrote (same result with or without the second flush operation in your sample code). The last edited property does not have the entered value, if I press a button to retrieve the value of the edited object via a flush operation from the driver. If I first press TAB and then the button, the value is contained in the object.
Unfortunately I could not reproduce this error in a smaller sample. In the sample, everything worked as it should.
@christie
Seems, that finishediting was indeed not called. After adding a call to finishediting, it worked like a charm. Perhaps there is a timing problem or a required event is not raised in this particular case. Thanks for the pointer!
Thanks for the feedback - the samples we've had haven't experienced this issue, so we would welcome a working sample that can demonstrate the bug.
In the meantime, while @christiedavel's fix works for a FormPanel, we don't encourage using a FormPanel unless you actually want an html <form> tag. Instead, consider the FormPanelHelper's static methods to start at a given container and find all Fields declared within it.
Thanks for the pointer, I will try this. as soon as I will find some time I will try to reproduce this behavior.
One issue that has been brought to my attention is when triggering driver.flush() (actually field.getValue(), but flush() calls getValue()) on certain dom events like MouseDownEvent. One example of this is the ListView or Grid selection models, which pay attention to this pre-click event to handle multiple selection.
This is a problem because MouseDownEvent on the newly focused element doesn't actually focus the element just yet, so the TextField still has focus, and so it hasn't parsed/saved its current value, and the old value is still used.
In cases like this, rather than calling finishEditing on every field, I'd offer two suggestions:
* Use ClickEvent instead - there isn't a need to get the data _that_ early, give the user a chance to actually lift the mouse button!
* Defer the flush() call - use the Scheduler to wait until the next event loop, after which Blur and the new Focus will both have occurred, and the real value will be available.
In a case where you must have the value _while the user is editing_ (like a Timer going off) finishEditing will work, but it will likely stop the user from editing the field they are working on - they'll need to re-focus before they can continue. ValueBaseField.getCurrentValue() can be used in some cases to work with this, but isn't usable right away from an editor - an adapter would probably need to be written to wrap the field and call this other method instead.
Thank you for the background information, but in my case this does not seem to be applicable, since I am using select events only (in the form of @UiHandler-annotated member functions). | https://www.sencha.com/forum/showthread.php?175016-Flush-not-working-everytime-in-editor | CC-MAIN-2016-18 | en | refinedweb |
On 16.02.2013 19:33, Ville Skyttä wrote: > Makes it clearer that it's a Makefile snippet, distinguishes from VDR > conf files. Reposted here per Klaus' request, he'd like to see 3 > people ack this. > --- > Make.config.template | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/Make.config.template b/Make.config.template > index b02d796..5bd0a10 100644 > --- a/Make.config.template > +++ b/Make.config.template > @@ -62,7 +62,7 @@ endif > > # Use this if you want to have a central place where you configure compile time > # parameters for plugins: > -#PLGCFG = $(CONFDIR)/plugins.conf > +#PLGCFG = $(CONFDIR)/plugins.mk > > ### The remote control: There have been two more AKCs on vdr-portal.de (where I posted your patch and asked for ACK/NACK), so the patch is accepted. Klaus | http://www.linuxtv.org/pipermail/vdr/2013-February/027299.html | CC-MAIN-2016-18 | en | refinedweb |
Agenda
See also: IRC log
<Yves> trackbot, start telcon
<trackbot> Meeting: SOAP-JMS Binding Working Group Teleconference
<trackbot> Date: 16 September 2008
<Roland> rrsagent make minutes
<scribe> scribe: eric
<Roland>
Going through, actions 4 - 17 done
Also looks like 21, 24 also done
Roland says he've completed 28
action 28 resolved
<trackbot> Sorry, couldn't find user - 28
action-28 resolved
Mark has gone through the draft of XML Schema, reports that there are no changes to the basic types...
two questions: is that enough to complete this action, and do we want to update our schema namespace?
roland: If there are no changes, not much reason to update.
markphillips: Can roland send along the original email so that Mark can respond appropriately.
<scribe> ACTION: markphillips to send a response. [recorded in]
<trackbot> Sorry, couldn't find user - markphillips
Roland: let's leave the schema namespace as it is.
<markphillips> zakim +0196270aadd is markphillips
<scribe> closed action-23
close action-23
<trackbot> ACTION-23 Review the relevant parts of the XML Schema draft on behalf of SOAP/JMS WG closed
Roland: remaining items - text
... also changes to the URI scheme.
<Roland> additional variants : queue and topic
jms: queue:<some queue name>
or
jms: topic:<some topic name>
Roland: what is it you would propose to do to the spec to support TextMessage. What is the impact on the spec.
Phil: looking to put together a
proposal that states that TextMessages must be used in certain
circumstances
... if customer specifically requests a text message, then text message it must be, or a failure.
Roland: Personal inclination to have MUSTs, rather than shoulds.
<scribe> ACTION: Eric to write up a proposal for what to do with jms URI queue and topic proposal. [recorded in]
<trackbot> Created ACTION-33 - Write up a proposal for what to do with jms URI queue and topic proposal. [on Eric Johnson - due 2008-09-23].
<scribe> ACTION: phil to write up specific proposal on how to support TextMessage. [recorded in]
<trackbot> Created ACTION-34 - Write up specific proposal on how to support TextMessage. [on Phil Adams - due 2008-09-23].
<Phil> ty Eric :)
<Roland>
If you want access, create your account on the wiki, then send an email to Yves with your account information.
Yves requests that you match your W3C ID.
Roland: If you're fed up with
many IDs, you can of course use openid.
... each wiki at W3C is separate and independent.
... good place to store FAQs.
<Phil> Yves, I created an id for the wiki... "padams2"
eric: What is the status of the legal release from Oracle with respect to the previous FAQ work?
roland: still working on it.
<Phil> ty
Derek: one clarification - will have action 29 ready for next week, and that might affect the spec.
oops lost phone - calling back.
roland: let us identify all possible impacts to the spec by next week.
This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found Scribe: eric Inferring ScribeNick: eric Default Regrets: Amy Agenda: Found Date: 16 Sep 2008 Guessing minutes URL: People with action items: eric markphillips phil[End of scribe.perl diagnostic output] | http://www.w3.org/2008/09/16-soap-jms-minutes | CC-MAIN-2016-18 | en | refinedweb |
> Hmmm, I see your point, but using xsl:version is not illegal and should
> mean the same thing... in that case, Xalan has a bug you should fix :)
I don't think it is the same thing. The URI of the version attribute is
null for the attribute in one case, and non-null in the other case. I
don't think XT recognizes a xsl:prefix where it's not specified either. I
just want to do the right, conformant thing. It would be wrong of Xalan to
let you do xsl:version in the xsl:stylesheet attribute if it's not legal
according to the XSLT specification. I'm cc'ing the great minds to see if
we can get some consensus on this.
Folks: the crux of the issue is whether a namespaced version attribute in:
<xsl:stylesheet xmlns:xsl=""
xsl:
should be recognized by the XSLT processor. I assume this same issue would
apply to all attributes on xsl elements. My take is that xsl namespaced
attributes on xsl namespaced elements should not be recognized (for no good
reason other than that's how I suppose XML works, and I think that's what's
in the XSLT recommendation).
-scott
Stefano
Mazzocchi To: cocoon-dev@xml.apache.org
<stefano@apac cc: (bcc: Scott Boag/CAM/Lotus)
he.org> Subject: Re: R:R:pathargs problem
02/29/00
09:33 PM
Please
respond to
cocoon-dev
Scott Boag/CAM/Lotus wrote:
>
> > Then I'm sorry but we just found a XSLT spec bug :)
> >
> > it should really be "xsl:version" since otherwise we don't know whose
> > version that is.
>
> One assumes that if an attribute is not namespaced, it is "owned" by the
> element.
Where is this behavior specified?
> Therefore, I don't think there is any ambiguity about who's
> version it is. (The same is not true with simple LRE stylesheets, where
> clearly the version attribute must be namespaced).
Hmmm, I see your point, but using xsl:version is not illegal and should
mean the same thing... in that case, Xalan has a bug you should fix :)
> Or is there some spec out there somewhere that specifies version
> attributes, that I'm not aware of?
No. Nothing I know of.
> I didn't think this was either part of
> the Infoset spec, Namespace spec, or XML spec, but I could be wrong. I
> guess maybe it should be part of one of those specs.
>
> -scott
--
Stefano Mazzocchi One must still have chaos in oneself to be
able to give birth to a dancing star.
<stefano@apache.org> Friedrich Nietzsche
--------------------------------------------------------------------
Come to the first official Apache Software Foundation Conference!
------------------------- --------------------- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200003.mbox/%3COF6A2EE498.ED9D8F46-ON85256895.00525887@lotus.com%3E | CC-MAIN-2016-18 | en | refinedweb |
I'm trying to run this sample code and I keep getting this error:
"process1.cpp:92: implicit declaration of function `int wait(...)' "
I thought the error was that I didn't include a specific library but I think I have every thing I need. Help please.
Code:#include <iostream.h> #include <stdio.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> #include <unistd.h> #include <stdlib.h> #define SHMSZ 1024 using namespace std; int write(int *b) { char c; int shmid; key_t key; int *shm; char *s; /* * We'll name our shared memory segment * "5678". */ key = 5678; /* * Create the segment. */ if ((shmid = shmget(key, SHMSZ, IPC_CREAT | 0666)) < 0) { cerr<<"shmget"; exit(1); } /* * Now we attach the segment to our data space. */ if ((shm = (int *)/*(char *)*/shmat(shmid, 0, 0)) == /*(char *)*/(int *) -1) { cerr<<"shmat"; exit(1); } /* * Now put some things into the memory for the * other process to read. */ // s = shm; int i; for (i=0; i<=1000; i++,shm++) *shm = i; *b=0; cout<<"write is complited"<<endl; cout<<*b<<endl; return(*b); } main() { char c; int shmid; key_t key; static int *b; key = 5679; /* * Create the one bit segment. */ if ((shmid = shmget(key, 1, IPC_CREAT | 0666)) < 0) { cerr<<"shmget"; exit(1); } /* * Now we attach the segment to our data space. */ if ((b = ( int *)shmat(shmid, 0, 0)) == (int *) -1) { cerr<<"shmat"; exit(1); } cout<<*b<<endl; while(1) { if(*b==1) { write(b); } else wait(5); } | http://cboard.cprogramming.com/cplusplus-programming/114125-help-error.html | CC-MAIN-2016-18 | en | refinedweb |
public class PortletContextScope extends java.lang.Object implements Scope, DisposableBean
Scopewrapper PortletContextScope(javax.portlet.PortletContext portletContext)
portletContext- the PortletContext to wrap
public void destroy()
destroyin interface
DisposableBean
ContextCleanupListener | http://docs.spring.io/spring-framework/docs/3.2.0.M2/api/org/springframework/web/portlet/context/PortletContextScope.html | CC-MAIN-2016-18 | en | refinedweb |
pthread_attr_setstackprealloc()
Set the amount of memory to preallocate for a thread's MAP_LAZY stack
Synopsis:
#include <pthread.h> int pthread_attr_setstackprealloc( const pthread_attr_t * attr, size_t stacksize);
Since:
BlackBerry 10.0.0
Arguments:
- attr
- A pointer to the pthread_attr_t structure that defines the attributes to use when creating new threads. For more information, see pthread_attr_init().
- stacksize
- The amount of stack you want to preallocate for new threads.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The pthread_attr_setstackprealloc() function sets the size of the memory to preallocate for a thread's MAP_LAZY stack.
By default, the system allocates sysconf(_SC_PAGESIZE) bytes of physical memory for the initial stack reference. This function allows you to change this default memory size, if you know that a thread will need more stack space. Semantically, there is no difference in operation, but the memory manager attempts to make more efficient use of Memory Management Unit hardware (e.g. a larger page size in the page entry table) for the stack if it knows upfront that more memory will be required.
Returns:
- EOK
- Success.
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_attr_setstackprealloc.html | CC-MAIN-2016-18 | en | refinedweb |
I have an Eclipse 3.1.1 install that I have been using for several
months. I have been wanting to set up a repository, so I installed
Subversion and the Subclipse 1.0.5 plugin. After I added two of my
projects to the repository, everything seemed fine at first. The first
time I tried to save a file, it became errored, and all the classes
that are in other packages are no longer visible. If I remove the
import declaration, it successfully adds it back in, but the error
comes back. Did I miss something? This happens to any file that I save
that has references to other packages in the project.
Jim Azeltine
Sr. Software Engineer - SAIC
--. | http://svn.haxx.se/subusers/archive-2007-08/0031.shtml | CC-MAIN-2016-18 | en | refinedweb |
Convert DegMinSec to Decimal Degrees
Hello,
I once again found a little time to toy with C Programming.
I have applied the helpful advice from this message board since my last post.
I used the utility on
to check my output.
I was pleased that my program output matched that of the utility except on
my final test input of 1.0101.
Please see the comments at the end of my program for details.
Any help would be greatly appreciated!Any help would be greatly appreciated!Code:
/* Name: TODECDEG.C */
/* Purpose: Convert Degrees Minutes Seconds to Decimal Degrees */
/* Author: JacquesLeJock */
/* Date: November 19, 2007 */
/* Compiler: Pacific C for MS-DOS, v7.51 */
#include <stdio.h>
main()
{
double deg_min_sec = 0, decimal_deg = 0;
double mind = 0, secd = 0;
int deg = 0, min = 0, sec = 0;
printf("Enter Degrees Minutes Seconds: ");
scanf("%lf", °_min_sec);
deg = deg_min_sec; /* conversion during assignment */
min = (deg_min_sec - deg) * 100;
sec = ((deg_min_sec - deg) * 100 - min) * 100;
mind = (double) min / 60; /* */
secd = (double) sec / 3600;
decimal_deg = deg + mind + secd;
printf("Decimal Degrees: %f\n\n", decimal_deg);
return 0;
}
/* Sample output:
Enter Degrees Minutes Seconds: 1.0101
Decimal Degrees: 1.016667
Press any key to continue ...
*/
/* Note:
This conversion does not agree with.
Their utility yields 1.016944. Why?
*/
Regards,
Jacques | http://cboard.cprogramming.com/c-programming/96043-convert-degminsec-decimal-degrees-printable-thread.html | CC-MAIN-2016-18 | en | refinedweb |
audio_engine_channels(9E)
audio_engine_playahead(9E)
- poll entry point for a non-STREAMS character driver
#include <sys/types.h> #include <sys/poll.h> #include <sys/ddi.h> #include <sys/sunddi.h> int prefixchpoll(dev_t dev, short events, int anyyet, short *reventsp, struct pollhead **phpp);
This entry point is optional. Architecture independent level 1 (DDI/DKI).
The device number for the device to be polled.
The events that may occur. Valid events are:
Data other than high priority data may be read without blocking.
Normal data may be written without blocking.
High priority data may be received without blocking.
A device hangup has occurred.
An error has occurred on the device.
Normal data (priority band = 0) may be read without blocking.
Data from a non-zero priority band may be read without blocking
The same as POLLOUT.
Priority data (priority band > 0) may be written.
A flag that is non-zero if any other file descriptors in the pollfd array have events pending. The poll(2) system call takes a pointer to an array of pollfd structures as one of its arguments. See the poll(2) reference page for more details.
A pointer to a bitmask of the returned events satisfied.
A pointer to a pointer to a pollhead structure.
The chpoll() entry point routine is used by non-STREAMS character device drivers that wish to support polling. The driver must implement the polling discipline itself. The following rules must be followed when implementing the polling discipline:
Implement the following algorithm when the chpoll() entry point is called:
if (events_are_satisfied_now) { *reventsp = satisfied_events & events; } else { *reventsp = 0; if (!anyyet) *phpp = &my_local_pollhead_structure; } return (0);
Allocate an instance of the pollhead structure. This instance may be tied to the per-minor data structure defined by the driver. The pollhead structure should be treated as a “black box” by the driver. Initialize the pollhead structure by filling it with zeroes. The size of this structure is guaranteed to remain the same across releases.
Call the pollwakeup() function with events listed above whenever pollable events which the driver should monitor occur. This function can be called with multiple events at one time. The pollwakup() can be called regardless of whether or not the chpoll() entry is called; it should be called every time the driver detects the pollable event. The driver must not hold any mutex across the call to pollwakeup(9F) that is acquired in its chpoll() entry point, or a deadlock may result.
chpoll() should return 0 for success, or the appropriate error number.
poll(2), nochpoll(9F), pollwakeup(9F) | http://docs.oracle.com/cd/E23824_01/html/821-1476/chpoll-9e.html | CC-MAIN-2016-18 | en | refinedweb |
Base class for needles that can be used in a QwtDial. More...
#include <qwt_dial_needle.h>
Base class for needles that can be used in a QwtDial.
QwtDialNeedle is a pointer that indicates a value by pointing to a specific direction.
Draw the needle
Draw the needle.
The origin of the needle is at position (0.0, 0.0 ) pointing in direction 0.0 ( = east ).
The painter is already initialized with translation and rotation.
Implemented in QwtCompassWindArrow, QwtCompassMagnetNeedle, and QwtDialSimpleNeedle.
Sets the palette for the needle. | http://qwt.sourceforge.net/class_qwt_dial_needle.html | CC-MAIN-2016-18 | en | refinedweb |
What i gotta do:
Write a program that inputs a tele phone #as astring in the form (555)555-5555. The program should use function strok() to extract the area code as a token , the first three digits of the phone # as a token and the last four as a token. the seven digits of the phone should be concatenated into one string.The program should convert the are-code string to int and convert the phone # string to long. both the area-code and the phone #should be printed
Output:
What i got sofar:What i got sofar:Code:
Enter a phone number in the form (555) 555-5555:
(555)555-5555
The integer area code is 555
The long integer phone number is 555555
press any key to continue...
its only a bit of it i dont know how to set it up with gets, stoi(), strcopy(), and strcat()its only a bit of it i dont know how to set it up with gets, stoi(), strcopy(), and strcat()Code:
#include <stdio.h>
#include <string.h>
int main()
{
char string[ 10 ];
char *tokenPtr;
printf( "Enter a phone number in the form (555) 555-5555: \n" );
scanf( "%s", string);
tokenPtr = strtok( string, "() -" );
while ( tokenPtr != NULL ){
printf( "\nThe integer area code is %s\n", tokenPtr );
tokenPtr = strtok( NULL, "() -" );
}
return 0;
} | http://cboard.cprogramming.com/c-programming/67025-strtok-phone-number-printable-thread.html | CC-MAIN-2016-18 | en | refinedweb |
.> - Added the `bk-scsi-target' tree to the -mm lineup. It is managed by James> Bottomley > - Some enhancements to the ext3 block reservation code here. Please cc> sct@redhat.com on oops reports ;)> - There's a patch here which will cause warnings if a PCI device driver is> removed without having called pci_disable_device(). Please try to cc the> appropriate mailing list or maintainer when reporting any instances.I've been informed that /proc/profile livelocks some systems in thetimer interrupt, usually at boot. The following patch attempts to amortize the atomic operations done on the profile buffer to address this stability concern. This patch has nothing to do with performance;kernels using periodic timer interrupts are under realtime constraintsto complete whatever work they perform within timer interrupts beforethe next timer interrupt arrives lest they livelock, performing no workwhatsoever apart from servicing timer interrupts. The latency of thecacheline bounce for prof_buffer contributes to the time spent in thetimer interrupt, hence it must be amortized when remote access latenciesor deviations from fair exclusive cacheline acquisition may causecacheline bounces to take longer than the interval between timer ticks.What this patch does is to create a per-cpu open-addressed hashtableindexed by profile buffer slot holding values representing the numberof pending profile buffer hits. When this hashtable overflows, oneiterates over the hashtable accounting each of the pairs of profilebuffer slots and hit counts to the global profile buffer. Zero is alegitimate profile buffer slot, so zero hit counts represent unusedhashtable entries. The hashtable is furthermore protected from reentryinto the timer interrupt by interrupt disablement. read_proc_profile()does not flush the per-cpu hashtables because flushing may causetimeslice overrun on the systems where prof_buffer cacheline bouncesare so problematic as to livelock the timer interrupt.This is expected to be a much stronger amortization than merely reducingthe frequency of profile buffer access by a factor of the size of thehashtable because numerous hits may be held for each of its entries.This reduces what was before the patch a number of atomic incrementsequal to what after the patch becomes the sum of the hits held for eachentry in the hashtable, to a number of atomic_add()'s equal to thenumber of entries in the per_cpu hashtable. This is nondeterministic,but as the profile hits tend to be concentrated in a very small numberof profile buffer slots during any given timing interval, is likely torepresent a very large number of atomic increments. This amortizationof atomic increments does not depend on the hash function, only the(lack of) scattering of profile buffer hits.I would be much obliged if the reporters of this issue could verifywhether this resolves their livelock. Untested, as I was hoping thebugreporters could do that bit for me.Index: mm5-2.6.9-rc1/kernel/profile.c===================================================================--- mm5-2.6.9-rc1.orig/kernel/profile.c 2004-09-13 16:27:36.639247200 -0700+++ mm5-2.6.9-rc1/kernel/profile.c 2004-09-13 21:36:35.498912144 -0700@@ -12,10 +12,18 @@ #include <linux/profile.h> #include <asm/sections.h> +struct profile_hit {+ unsigned long pc, hits;+};+#define NR_PROFILE_HIT (PAGE_SIZE/sizeof(struct profile_hit))+ static atomic_t *prof_buffer; static unsigned long prof_len, prof_shift; static int prof_on; static cpumask_t prof_cpu_mask = CPU_MASK_ALL;+#ifdef CONFIG_SMP+static DEFINE_PER_CPU(struct profile_hit [NR_PROFILE_HIT], cpu_profile_hits);+#endif /* CONFIG_SMP */ static int __init profile_setup(char * str) {@@ -181,6 +189,41 @@ EXPORT_SYMBOL_GPL(profile_event_register); EXPORT_SYMBOL_GPL(profile_event_unregister); +#ifdef CONFIG_SMP+void profile_hit(int type, void *__pc)+{+ unsigned long primary, secondary, flags, pc = (unsigned long)__pc;+ int i, cpu;+ struct profile_hit *hits;++ if (prof_on != type || !prof_buffer)+ return;+ pc = min((pc - (unsigned long)_stext) >> prof_shift, prof_len - 1);+ cpu = get_cpu();+ i = primary = pc & (NR_PROFILE_HIT - 1);+ secondary = ((~pc << 1) | 1) & (NR_PROFILE_HIT - 1);+ hits = per_cpu(cpu_profile_hits, cpu);+ local_irq_save(flags);+ do {+ if (hits[i].pc == pc) {+ hits[i].hits++;+ goto out;+ } else if (!hits[i].hits) {+ hits[i].pc = pc;+ hits[i].hits = 1;+ goto out;+ } else+ i = (i + secondary) & (NR_PROFILE_HIT - 1);+ } while (i != primary);+ atomic_inc(&prof_buffer[pc]);+ for (i = 0; i < NR_PROFILE_HIT; ++i)+ atomic_add(hits[i].hits, &prof_buffer[hits[i].pc]);+ memset(hits, 0, NR_PROFILE_HIT*sizeof(struct profile_hit));+out:+ local_irq_restore(flags);+ put_cpu();+}+#else void profile_hit(int type, void *__pc) { unsigned long pc;@@ -190,6 +233,7 @@ pc = ((unsigned long)__pc - (unsigned long)_stext) >> prof_shift; atomic_inc(&prof_buffer[min(pc, prof_len - 1)]); }+#endif void profile_tick(int type, struct pt_regs *regs) {-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2004/9/14/5 | CC-MAIN-2016-18 | en | refinedweb |
OOP in C++
The key update of the C++ programming language was the addition of the OOPS concept to the already powerful C Programming Language.
Advantages of OOP
- Fast and easier to execute
- Provides a clear structure for the code
- Helps to keep the C++ code DRY(“Don’t Repeat Yourself”)
- DRY makes the code easier to modify, maintain and debug
- Possible to create reusable applications with less code with shorter development time
Concepts that form the base of object-oriented programming:
Object
An Object is an instance of a class and is also know as the basic component of object-oriented programming(That is data and function that operate on data are tied up as a unit called an object). When a class is defined, no memory is allocated but when an object is created only then the memory is allocated.
#include <bits/stdc++.h> using namespace std; class GeeksGod // class class_name { public: string name; void printname() { cout << "Name is: " << name; } }; // class ended with a semicolon int main() { GeeksGod obj1; // Declare an object of class GeeksGod obj1.name = "Mayank Mandhana"; // accessing data member obj1.printname(); // accessing member function return 0; }
Class
The building block of C++ that leads to Object-Oriented programming is a Class. A class is a user-defined data type or data structure that is declared with the keyword class, that has data members and member functions as its members. Features of classes are as follows:
- Class is defined between the curly brackets and terminated by a semicolon
- Data members and member functions are contained in a class
- Data variables and functions used to operate these variables and together these data members and member functions define the properties and behavior of the objects in a Class
- In the example of the Objects as defined above, “class GeeksGod”, the class refers to the keyword and GeeksGod is the class name
Abstraction
Data Abstraction refers to providing only the necessary information of the data and hiding their implementation and background details from the outside world, i.e., to show only the needed information in the program without showing the details. For example, a database system hides certain details of how data is stored, created and maintained.
Features of Data Abstraction are:
- Class internals get protected from unintentional user-level errors
- The coder do not have to write the low-level code
- Code duplicity is avoided
- Code reuse and proper partitioning across classes
- It permits internal implementation characteristics to be changed without affecting the users
#include <iostream> using namespace std; class Abstraction { private: int n1, n2; public: void set(int a, int b) { n1 = a; n2 = b; } void displayn() { cout<<"n1 = " << n1 << endl; cout<<"n2 = " << n2 ; } }; int main() { Abstraction obj; obj.set(1, 2); obj.displayn(); return 0; }
Encapsulation
Encapsulation is an Object-Oriented Programming concept that binds the data and functions together that has the capability to manipulate the data and keeps them safe from outside interference and misuse.
#include<iostream> using namespace std; class Encap { private: // entity outside this class cannot access these data members directly int num; char ch; public: int getNum() const { return num; } char getCh() const { return ch; } void setNum(int num) { this->num = num; } void setCh(char ch) { this->ch = ch; } }; int main() { Encap obj; obj.setNum(1); obj.setCh('GeeksGod'); cout<<obj.getNum()<<endl; cout<<obj.getCh(); return 0; } OUTPUT ------ 1 GeeksGod
Inheritance
In C++, it is possible to inherit attributes from one class to the other class. The capablity of a class to derive resources and characteristics from another class is called Inheritance. One of the most useful aspects of object-oriented programming is code reusability and helps to reduce the code size.
As the name suggests Inheritance is the process of forming a new class from an existing class called a base class, the new class formed is called a derived class.
Syntax:
class Subclass_name : access_mode Superclass_name
Example:
class humans { public: int legs = 2; }; class hands : public humans // hands class inheriting humam class { public: int tail = 2; }; int main() { humans d; cout << d.legs; cout << d.hands; } OUTPUT ------ 2 2
Polymorphism
As the name suggests, it simply means more than one form. Polymorphism is the capability to use an operator or function in multiple ways, i.e. a function or an operator functioning in many ways differently according to the usage.
class Plant // Base Class { public: void plantcolor() { cout << "The colors of plant are: \n" ; } }; class Grass : public Plant // Derived class { public: void plantcolor() { cout << "The color of the grass is: Green \n" ; } }; class Phantom_Orchid : public Plant // Derived class { public: void plantcolor() { cout << "The color of Phantom Orchid is White \n" ; } }; | https://edusera.org/object-oriented-programmingoop-in-c/ | CC-MAIN-2022-40 | en | refinedweb |
Authent:
1. User signs up into the Cognito User Pool.
2. User uploads – during Sign Up – a document image containing his/her photo and name, to an S3 Bucket (e.g. Passport).
3. A Lambda function is triggered containing the uploaded image as payload.
4. The function first indexes the image in a specific Amazon Rekognition Collection to store these user documents.
5. The same function then persists in a DynamoDB table as the indexed image metadata, together with the email registered in Amazon Cognito User Pool for later queries.
6. User enters an email in the custom Sign In page, which makes a request to Cognito User Pool.
7. Amazon Cognito User Pool triggers the “Define Auth Challenge” trigger that determines which custom challenges are to be created at this moment.
8. The User Pool then invokes the “Create Auth Challenge” trigger. This trigger queries the DynamoDB table for the user containing the given email id to retrieve its indexed photo from the Amazon Rekognition Collection.
9. The User Pool invokes the “Verify Auth Challenge” trigger. This verifies if the challenge was indeed successfully completed; if it finds an image, it will compare it with the photo taken during Sign In to measure its confidence between both the images.
10.:
1. Create Rekognition Collection (Python 3.6)– This Lambda function gets triggered only once, at the beginning of deployment, to create a Custom Collection in Amazon Rekognition to index documents for user Sign Ups.
2..
3.
4..
5..
import boto3 import os def handler(event, context): maxResults=1 collectionId=os.environ['COLLECTION_NAME'] client=boto3.client('rekognition') #Create a collection print('Creating collection:' + collectionId) response=client.create_collection(CollectionId=collectionId) print('Collection ARN: ' + response['CollectionArn']) print('Status code: ' + str(response['StatusCode'])) print('Done...') return response.
from __future__ import print_function import boto3 from decimal import Decimal import json import urllib import os dynamodb = boto3.client('dynamodb') s3 = boto3.client('s3') rekognition = boto3.client('rekognition') # --------------- Helper Functions ------------------ def index_faces(bucket, key): response = rekognition.index_faces( Image={"S3Object": {"Bucket": bucket, "Name": key}}, CollectionId=os.environ['COLLECTION_NAME']) return response def update_index(tableName,faceId, fullName): response = dynamodb.put_item( TableName=tableName, Item={ 'RekognitionId': {'S': faceId}, 'FullName': {'S': fullName} } ) # --------------- Main handler ------------------ def handler(event, context): # Get the object from the event bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.parse.unquote_plus( event['Records'][0]['s3']['object']['key'].encode('utf8')) try: # Calls Amazon Rekognition IndexFaces API to detect faces in S3 object # to index faces into specified collection response = index_faces(bucket, key) # Commit faceId and full name object metadata to DynamoDB if response['ResponseMetadata']['HTTPStatusCode'] == 200: faceId = response['FaceRecords'][0]['Face']['FaceId'] ret = s3.head_object(Bucket=bucket,Key=key) email = ret['Metadata']['email'] update_index(os.environ['COLLECTION_NAME'],faceId, email) return response except Exception as e: print("Error processing object {} from bucket {}. ".format(key, bucket)) raise e:
exports.handler = async (event, context) => { console.log("Define Auth Challenge: " + JSON.stringify(event)); if (event.request.session && event.request.session.length >= 3 && event.request.session.slice(-1)[0].challengeResult === false) { // The user provided a wrong answer 3 times; fail auth event.response.issueTokens = false; event.response.failAuthentication = true; } else if (event.request.session && event.request.session.length && event.request.session.slice(-1)[0].challengeResult === true) { // The user provided the right answer; succeed auth event.response.issueTokens = true; event.response.failAuthentication = false; } else { // The user did not provide a correct answer yet; present challenge event.response.issueTokens = false; event.response.failAuthentication = false; event.response.challengeName = 'CUSTOM_CHALLENGE'; } return event; }.
const aws = require('aws-sdk'); const dynamodb = new aws.DynamoDB.DocumentClient(); exports.handler = async (event, context) => { console.log("Create auth challenge: " + JSON.stringify(event)); if (event.request.challengeName == 'CUSTOM_CHALLENGE') { event.response.publicChallengeParameters = {}; let answer = ''; // Querying for Rekognition ids for the e-mail provided const params = { TableName: process.env.COLLECTION_NAME, IndexName: "FullName-index", ProjectionExpression: "RekognitionId", KeyConditionExpression: "FullName = :userId", ExpressionAttributeValues: { ":userId": event.request.userAttributes.email } } try { const data = await dynamodb.query(params).promise(); data.Items.forEach(function (item) { answer = item.RekognitionId; event.response.publicChallengeParameters.captchaUrl = answer; event.response.privateChallengeParameters = {}; event.response.privateChallengeParameters.answer = answer; event.response.challengeMetadata = 'REKOGNITION_CHALLENGE'; console.log("Create Challenge Output: " + JSON.stringify(event)); return event; }); } catch (err) { console.error("Unable to query. Error:", JSON.stringify(err, null, 2)); throw err; } } return event; }.
var aws = require('aws-sdk'); var rekognition = new aws.Rekognition(); exports.handler = async (event, context) => { console.log("Verify Auth Challenge: " + JSON.stringify(event)); let userPhoto = ''; event.response.answerCorrect = false; // Searching existing faces indexed on Rekognition using the provided photo on s3 const objectName = event.request.challengeAnswer; const params = { "CollectionId": process.env.COLLECTION_NAME, "Image": { "S3Object": { "Bucket": process.env.BUCKET_SIGN_UP, "Name": objectName } }, "MaxFaces": 1, "FaceMatchThreshold": 90 }; try { const data = await rekognition.searchFacesByImage(params).promise(); // Evaluates if Rekognition was able to find a match with the required // confidence threshold if (data.FaceMatches[0]) { console.log('Face Id: ' + data.FaceMatches[0].Face.FaceId); console.log('Similarity: ' + data.FaceMatches[0].Similarity); userPhoto = data.FaceMatches[0].Face.FaceId; if (userPhoto) { if (event.request.privateChallengeParameters.answer == userPhoto) { event.response.answerCorrect = true; } } } } catch (err) { console.error("Unable to query. Error:", JSON.stringify(err, null, 2)); throw err; } return event; }:
import Amplify from 'aws-amplify'; Amplify.configure({ Auth: { region: 'your region', userPoolId: 'your userPoolId', userPoolWebClientId: 'your clientId', }, Storage: { region: 'your region', bucket: 'your sign up bucket' } });.
import { Auth } from 'aws-amplify'; signUp = async event => { const params = { username: this.state.email, password: getRandomString(30), attributes: { name: this.state.fullName } }; await Auth.signUp(params); }; function getRandomString(bytes) { const randomValues = new Uint8Array(bytes); window.crypto.getRandomValues(randomValues); return Array.from(randomValues).map(intToHex).join(''); } function intToHex(nr) { return nr.toString(16).padStart(2, '0'); }
Starts the custom authentication flow to the user.
import { Auth } from "aws-amplify"; signIn = () => { try { user = await Auth.signIn(this.state.email); this.setState({ user }); } catch (e) { console.log('Oops...'); } };
Answering the Custom Challenge
In this step, we open the camera through Browser to take a user photo and then upload it to Amazon S3, so we can start the face comparison.
import Webcam from "react-webcam"; // Instantiate and set webcam to open and take a screenshot // when user is presented with a custom challenge /* Webcam implementation goes here */ // Retrieves file uploaded to S3 and sends as a File to Rekognition // as answer for the custom challenge dataURLtoFile = (dataurl, filename) => {}); }; sendChallengeAnswer = () => { // Capture image from user camera and send it to S3 const imageSrc = this.webcam.getScreenshot(); const attachment = await s3UploadPub(dataURLtoFile(imageSrc, "id.png")); // Send the answer to the User Pool const answer = `public/${attachment}`; user = await Auth.sendCustomChallengeAnswer(cognitoUser, answer); this.setState({ user }); try { // This will throw an error if the user is not yet authenticated: await Auth.currentSession(); } catch { console.log('Apparently the user did not enter the right code'); } };. | https://girishgodage.in/blog/authenticate-applications-through-facial-recognition-with-amazon-cognito-and-amazon-rekognition | CC-MAIN-2022-40 | en | refinedweb |
Exporting to e2studio with CMSIS_DAP DBG
Table of Contents
- Environment
- Setup Procedure
- Install Windows serial driver
- Install e2studio
- Install OpenOCD
- Associate GR-PEACH config with OpenOCD
- Install OpenOCD add-in to e2studio
- Configure OpenOCD on e2studio
- Build of e2studio environment
- Exporting to e2studio
- import project to e2studio
- Build Process
- The way to debug
- Debug
- Support features
Environment¶
If you would like to use J-Link for debugging, please refer to Exporting to e2studio (J-Link debug).
Setup Procedure¶
Install Windows serial driver¶
Install latest Windows Serial Port Driver to setup CMSIS-DAP from the link below:
Install e2studio¶
Please download e2studio 5.0.0 or lator, and install
Install OpenOCD¶
Please download exe file of OpenOCD v0.10.0-201601101000-dev, and install.
Associate GR-PEACH config with OpenOCD¶
Please copy renesas_gr-peach.cfg to scripts\board directory included in the OpenOCD installed location. By default, it should be located as follows:
- In case of using 32-bit windows:
C:\Program Files\GNU ARM Eclipse\OpenOCD\0.10.0-201601101000-dev\scripts\board
- In case of using 64-bit windows:
C:\Program Files (x86)\GNU ARM Eclipse\OpenOCD\0.10.0-201601101000-dev\scripts\board
Install OpenOCD add-in to e2studio¶
Information
This procedure can be skipped when you use e2studio version 5.2 or later since the add-in is incorporated in e2studio.
- Launch e2studio.
- Select[Help]menu→[Install new software...]
- Input [work with] box, and push [Add] button.
- Check [GNU ARM C/C++ OpenOCD Debugging] and push [Next >] button.
- Install and restart e2studio.
Configure OpenOCD on e2studio¶
- Select [Window] -> [Preferences].
- Select [Run/Debug] - [OpenOCD].
- Check if the directory and executable are filled with OpenOCD installation folder and openocd.ex respectively. If not, please input OpenOCD installation folder and openocd.exe there and click [OK]. Note that the default OpenOCD installation folder should be as follows:
- In case of using 32-bit windows:
C:/Program Files/GNU ARM Eclipse/OpenOCD/0.10.0-201601101000-dev/bin
- In case of using 64-bit windows:
C:/Program Files (x86)/GNU ARM Eclipse/OpenOCD/0.10.0-201601101000-dev/bin
Build of e2studio environment¶
Exporting to e2studio¶
- Go to Mbed compiler.
- Right click at the program you want to export.
- Select "Export Program"
- Select "Renesas GR-PEACH" for Export Target
Select "e2studio" for Export Toolchain
Push "Export"
- Expand zip file.
import project to e2studio¶
- Launch e2studio.
- Specify workspace directory. Workspace directory must be placed in the upper directory of the directory that includes .project file.
In this document, project file is placed in C:\WorkSpace\GR-PEACH_blinky_e2studio_rz_a1h\GR-PEACH_blinky, and the workspace is placed in C:\WorkSpace\GR-PEACH_blinky_e2studio_rz_a1h.
- If Toolchain Integration dialog appared, select [GCC ARM embedded version 4.9.3.20150529] and click [Register].
- After e2studio window opens, click [go to workbench].
- Select [File]menu-[import].
- Select [General]-[Existing Projects into Workspace], and click [Next>]
- Click [Browse].
- Click [OK].
- Click [Finish].
Build Process¶
- Launch e2studio.
- Select the [Window] menu -> [Show View] -> [Project Explorer].
- Select the project to build.
- Click build icon.
e.g.) The folder structure when making the work folder "C:\Workspase". Export project is GR-PEACH_blinky. C: +-- Workspace +-- GR-PEACH_blinky_e2studio_rz_a1h +-- .metadata +-- GR-PEACH_blinky | .cproject | .gdbinit | .hgignore | .project | exporter.yaml | GettingStarted.htm | GR-PEACH_blinky OpenOCD.launch | main.cpp | mbed.bld | SoftPWM.lib +-- .hg +-- .settings +-- Debug <- When clicking [Build Project], ".bin" and ".elf" file will be created here. +-- mbed +-- SoftPWM
The way to debug¶
Debug¶
- Connect USB cable
- Copy ".bin" file to Mbed drive
- Reconnect USB cable
- Select project to debug.
- From menu in C/C++ perspective or debug perspective , select [Run] [Debug Configurations...]
- Select [<project-name> OpenOCD] in [GDB OpenOCD Debugging]
- Click "Debug".
- If you want to reset :
please enter the following command to "arm-none-eabi-gdb.exe" screen in "console" view.
When you drop down from the console view toolbar buttons, you can switch the screen.
monitor reset init
Support features¶
These features are supported. Generic views which uses the features below would be useable.
- Software breakpoint will be replaced with Hardware breakpoint. 6 points are available in total.
- Downloading from e2 studio to serial flash memory is not supported. But you can download the program by copying the bin file to the drive which is generated when you connect the board to PC, because GR-PEACH is Mbed device. Before you download the program by the manner of Mbed, please disconnect the debugger from the board.
- Reading or writing memory while program is running are not supported. And writing is supported only for RAM.
- .gdbinit is required to stepping the program which uses interrupt.
- The button for reset in Debug View doesn't work, but the command to reset is available. Please enter "monitor reset init" to reset the program in the console for GDB (arm-none-eabi-gdb).
Although the display is not changed, but the program would be reset. The button for restart will work once. Please don't use it.
e2 studio has the special views for Renesas. The supported status are below. | https://os.mbed.com/teams/Renesas/wiki/Exporting-to-e2studio-with-CMSIS_DAP-DBG | CC-MAIN-2022-40 | en | refinedweb |
Introduction to Java 8 Streams
The main subject of this article is advanced data processing topics using a new functionality added to Java 8 – The Stream API and the Collector API.
To get the most out of this article you should already be familiar with the main Java APIs, the
Object and
String classes, and the Collection API.
The
java.util.stream package consists of classes, interfaces, and many types to allow for functional-style operations over elements. Java 8 introduces a concept of a Stream that allows the programmer to process data descriptively and rely on a multi-core architecture without the need to write any special code.
A
Stream represents a sequence of objects derived from a source, over which aggregate operations can be performed.
From a purely technical point of view, a Stream is a typed interface - a stream of T. This means that a stream can be defined for any kind of object, a stream of numbers, a stream of characters, a stream of people, or even a stream of a city.
From a developer point of view, it is a new concept that might just look like a Collection, but it is in fact much different from a Collection.
There are a few key definitions we need to go through to understand this notion of a Stream and why it differs from a Collection:
The most common misconception which I'd like to address first - a stream does not hold any data. This is very important to keep that in mind and understand.
There is no data in a Stream, however, there is data held in a Collection.
A
Collection is a structure that holds its data. A Stream is just there to process the data and pull it out from the given source, or move it to a destination. The source might be a Collection, though it might also be an array or I/O resource. The stream will connect to the source, consume the data, and process the elements in it in some way.
A stream should not modify the source of the data it processes. This is not really enforced by the compiler of the JVM itself, so it is merely a contract. If I am to build my own implementation of a stream, I should not modify the source of the data I am processing. Although it is perfectly fine to modify the data in the stream though. Why is that so? Because if we want to process this data in parallel, we are going to distribute it among all the cores of our processors and we do not want to have any kind of visibility or synchronization issues that could lead to bad performances or errors. Avoiding this kind of interference means that we shouldn't modify the source of the data while we're processing it.
Probably the most powerful point out of these three. It means that the stream in itself can process as much data as we want. Unbounded does not mean that a source has to be infinite. In fact, a source may be finite, but we might not have access to the elements contained in that source. Suppose the source is a simple text file. A text file has a known size even if it is very big. Also suppose that the elements of that source are, in fact, the lines of this text file. Now, we might know the exact size of this text file but if we do not open it and manually go through the content, we'll never know how many lines it has. This is what unbounded means - we might not always know beforehand the number of elements a stream will process from the source. Those are the three definitions of a stream. So we can see from those three definitions that a stream really has nothing to do with a collection. A collection holds its data. A collection can modify the data it holds. And of course, a collection holds a known and finite amount of data.
collect()method is a terminal operation that is usually present at the end of operations to indicate the end of the Stream processing.
We can generate a stream with the help of a few methods:
The
stream() method returns the sequential stream with a Collection as its source. You can use any collection of objects as a source:
private List<String> list = new Arrays.asList("Scott", "David", "Josh"); list.stream();
The
parallelStream() method returns a parallel stream with a Collection as its source:
private List<String> list = new Arrays.asList("Scott", "David", "Josh"); list.parallelStream().forEach(element -> method(element));
The thing with parallel streams is that when executing such an operation, the Java runtime segregates the stream into multiple substreams. It executes the aggregate operations and the combines the result. In our case, it calls the
method with each element in the stream in parallel.
Although, this can be a double-edged sword, since executing heavy operations this way could block other parallel streams since it blocks the threads in the pool.
The static
of() method can be used to create a Stream from an array of objects or individual objects:
Stream.of(new Employee("David"), new Employee("Scott"), new Employee("Josh"));
And lastly, you can use the static
.builder() method to create a Stream of objects:
Stream.builder<String> streamBuilder = Stream.builder(); streamBuilder.accept("David"); streamBuilder.accept("Scott"); streamBuilder.accept("Josh"); Stream<String> stream = streamBuilder.build();
By calling the
.build() method, we pack the accepted objects into a regular Stream.
public class FilterExample { public static void main(String[] args) { List<String> fruits = Arrays.asList("Apple", "Banana", "Cherry", "Orange"); // Traditional approach for (String fruit : fruits) { if (!fruit.equals("Orange")) { System.out.println(fruit + " "); } } // Stream approach fruits.stream() .filter(fruit -> !fruit.equals("Orange")) .forEach(fruit -> System.out.println(fruit)); } }
A traditional approach to filtering out a single fruit would be with a classic for-each loop.
The second approach uses a Stream to filter out the elements of the Stream that match the given predicate, into a new Stream that is returned by the method.
Additionally, this approach uses a
forEach() method, that performs an action for each element of the returned stream. You can replace this with something called a method reference. In Java 8, a method reference is the shorthand syntax for a lambda expression that executes just one method.
The method reference syntax is simple, and you can even replace the previous lambda expression
.filter(fruit -> !fruit.equals("Orange")) with it:
Object::method;
Let's update the example and use method references and see how it looks like:
public class FilterExample { public static void main(String[] args) { List<String> fruits = Arrays.asList("Apple", "Banana", "Cherry", "Orange"); fruits.stream() .filter(FilterExample::isNotOrange) .forEach(System.out::println); } private static boolean isNotOrange(String fruit) { return !fruit.equals("Orange"); } }
Streams are easier and better to use with Lambda expressions and this example highlights how simple and clean the syntax looks compared to the traditional approach.
A traditional approach would be to iterate through a list with an enhanced for loop:
List<String> models = Arrays.asList("BMW", "Audi", "Peugeot", "Fiat"); System.out.print("Imperative style: " + "\n"); for (String car : models) { if (!car.equals("Fiat")) { Car model = new Car(car); System.out.println(model); } }
On the other hand, a more modern approach is to use a Stream to map:
List<String> models = Arrays.asList("BMW", "Audi", "Peugeot", "Fiat"); System.out.print("Functional style: " + "\n"); models.stream() .filter(model -> !model.equals("Fiat")) // .map(Car::new) // Method reference approach // .map(model -> new Car(model)) // Lambda approach .forEach(System.out::println);
To illustrate mapping, consider this class:
private String name; public Car(String model) { this.name = model; } // getters and setters @Override public String toString() { return "name='" + name + "'"; }
It's important to note that the
models list is a list of Strings – not a list of
Car. The
.map() method expects an object of type T and returns an object of type R.
We're converting String into a type of Car, essentially.
If you run this code, the imperative style and functional style should return the same thing.
Sometimes, you'd want to convert a Stream to a Collection or Map. Using the utility class Collectors and the functionalities it offers:
List<String> models = Arrays.asList("BMW", "Audi", "Peugeot", "Fiat"); List<Car> carList = models.stream() .filter(model -> !model.equals("Fiat")) .map(Car::new) .collect(Collectors.toList());
A classic task is to categorize objects according to certain criteria. We can do this by matching the needed information to the object information and check if that's what we need:
List<Car> models = Arrays.asList(new Car("BMW", 2011), new Car("Audi", 2018), new Car("Peugeot", 2015)); boolean all = models.stream().allMatch(model -> model.getYear() > 2010); System.out.println("Are all of the models newer than 2010: " + all); boolean any = models.stream().anyMatch(model -> model.getYear() > 2016); System.out.println("Are there any models newer than 2016: " + any); boolean none = models.stream().noneMatch(model -> model.getYear() < 2010); System.out.println("Is there a car older than 2010: " + none);
allMatch()- Returns
trueif all elements of this stream match the provided predicate.
anyMatch()- Returns
trueif any element of this stream match the provided predicate.
noneMatch()- Returns
trueif no element of this stream matches the provided predicate.
In the preceding code example, all of the given predicates are satisfied and all will return
true.
Most people today are using Java 8. Though not everybody is using Streams. Just because they represent a newer approach to programming and represent a touch with functional style programming along with lambda expressions for Java, doesn't necessarily mean that it's a better approach. They simply offer a new way of doing things. It's up to developers themselves to decide whether to rely on functional or imperative style programming. With a sufficient level of exercise, combining both principles can help you improve your software. As always, we encourage you to check out the official documentation for additional information.Reference: stackabuse.com | https://www.codevelop.art/introduction-to-java-8-streams.html | CC-MAIN-2022-40 | en | refinedweb |
- Getting Started
- Apollo Client
- GraphQL Queries
- Global IDs
- Immutability and cache updates
- Usage in Vue
- Local state with Apollo
- Using with Vuex
- Working on GraphQL-based features when frontend and backend are not in sync
- Manually triggering queries
- Working with pagination
- Managing performance
- Best Practices
- Testing
- Handling errors
- Usage outside of Vue
- Making initial queries early with GraphQL startup calls
- Troubleshooting
GraphQL
Getting Started
Helpful Resources
General resources:
GraphQL at GitLab:
- GitLab Unfiltered GraphQL playlist
- GraphQL at GitLab: Deep Dive (video) by Nick Thomas
- An overview of the history of GraphQL at GitLab (not frontend-specific)
- GitLab Feature Walkthrough with GraphQL and Vue Apollo (video) by Natalia Tepluhina
- A real-life example of implementing a frontend feature in GitLab using GraphQL
- History of client-side GraphQL at GitLab (video) Illya Klymov and Natalia Tepluhina
- From Vuex to Apollo (video) by Natalia Tepluhina
- An overview of when Apollo might be a better choice than Vuex, and how one could go about the transition
- 🛠 Vuex -> Apollo Migration: a proof-of-concept project
- A collection of examples that show the possible approaches for state management with Vue+GraphQL+(Vuex or Apollo) apps
Libraries
We use Apollo (specifically Apollo Client) and Vue Apollo when using GraphQL for frontend development.
If you are using GraphQL in
Apollo GraphQL VS Code extension
If you use VS Code, the Apollo GraphQL extension supports autocompletion in
.graphql files. To set up
the GraphQL extension, follow these steps:
- Generate the schema:
bundle exec rake gitlab:graphql:schema:dump
- Add an
apollo.config.jsfile to the root of your
gitlablocal directory.
Populate the file with the following content:
module.exports = { client: { includes: ['./app/assets/javascripts/**/*.graphql', './ee/app/assets/javascripts/**/*.graphql'], service: { name: 'GitLab', localSchemaFile: './tmp/tests/graphql/gitlab_schema.graphql', }, }, };
- Restart VS Code.
Exploring. You can also write queries and mutations directly on the left tab and check their execution by clicking Execute query button on the top left:
Ap (for example,
${gon.relative_url_root}/api/graphql)
fetchPolicydetermines how you want your component to interact with the Apollo cache. Defaults to “cache-first”.
Multiple client queries for the same object
If you are making multiple queries to the same Apollo client object you might encounter the following error:
Cache data may be lost when replacing the someProperty field of a Query object. To address this problem, either ensure all objects of SomeEntityhave an id or a custom merge function. We are already checking
ID presence for every GraphQL type that has an
ID, so this shouldn’t be the case. Most likely, the
SomeEntity type doesn’t have an
ID property, and to fix this warning we need to define a custom merge function.
We have some client-wide types with
merge: true defined in the default client as typePolicies (this means that Apollo will merge existing and incoming responses in the case of subsequent queries). Please consider adding
SomeEntity there or defining a custom merge function for it.
Graph.
If you are using queries for the CustomersDot GraphQL endpoint, end the filename with
.customer.query.graphql,
.customer.mutation.graphql, or
.customer.fragment.graphql.
Fragments documentation
Global);
It is required to query global
id for every GraphQL type that has an
id in the schema:
query allReleases(...) { project(...) { id // Project has an ID in GraphQL schema so should fetch it releases(...) { nodes { // Release has no ID property in GraphQL schema name tagName tagPath assets { count links { nodes { id // Link has an ID in GraphQL schema so should fetch it name } } } } pageInfo { // PageInfo no ID property in GraphQL schema startCursor hasPreviousPage hasNextPage endCursor } } } }
Imm. Please, is generated.
Usageollo
It is possible to manage an application state with Apollo by using client-site resolvers or type policies with reactive variables when creating your default client.
Using client-side resolvers
The default state can be set by writing to the cache after setting up the default client. In the
example below, we are using query with
@client Apollo directive to write the initial data to
Apollo cache and then get this state in the Vue component:
// user.query.graphql query User { user @client { name surname age } }
// index.js import Vue from 'vue'; import VueApollo from 'vue-apollo'; import createDefaultClient from '~/lib/graphql'; import userQuery from '~/user/user.query.graphql' Vue.use(VueApollo); const defaultClient = createDefaultClient(); defaultClient.cache.writeQuery({ query: userQuery, data: { user: { name: 'John', surname: 'Doe', age: 30 }, }, }); const apolloProvider = new VueApollo({ defaultClient, });
// App.vue import userQuery from '~/user/user.query.graphql' export default { apollo: { user: { query: userQuery } } }
Along with creating local data, we can also extend existing GraphQL types with
@client fields. This is extremely helpful when we need to mock an API response for fields not yet added to our GraphQL API.
Mocking API response with local Apollo cache
Using local Apollo Cache is helpful when we have a need to mock some GraphQL API responses, queries, or mutations locally (such as when they’re still not added to our actual API).
For example, we have a fragment on
DesignVersion used in our queries:
fragment VersionListItem on DesignVersion { id sha }
We also need to fetch the version author and the
created at property to display tries fetches
id and
sha from the remote API endpoint. It then assigns our hardcoded values to the
author and
createdAt version properties. With this data, frontend developers are able to work on their UI without being blocked by backend. When the response is added to the API, our custom local resolver can be removed. The only change to the query/fragment is to remove the
@client directive.
Read more about local state management with Apollo in the Vue Apollo documentation.
Using type policies with reactive variables
Apollo Client 3 offers an alternative to client-side resolvers by using reactive variables to store client state.
NOTE: We are still learning the best practices for both type policies and reactive vars. Take a moment to improve this guide or leave a comment if you use it!
In the example below we define a
@client query and its
typedefs:
// ./graphql/typedefs.graphql extend type Query { localData: String! }
// ./graphql/get_local_data.query.graphql query getLocalData { localData @client }
Similar to resolvers, your
typePolicies will execute when the
@client query is used. However,
using
makeVar will trigger every relevant active Apollo query to reactively update when the state
mutates.
// ./graphql/local_state.js import { makeVar } from '@apollo/client/core'; import typeDefs from './typedefs.graphql'; export const createLocalState = () => { // set an initial value const localDataVar = makeVar(''); const cacheConfig = { typePolicies: { Query: { fields: { localData() { // obtain current value // triggers when `localDataVar` is updated return localDataVar(); }, }, }, }, }; // methods that update local state const localMutations = { setLocalData(newData) { localDataVar(newData); }, clearData() { localDataVar(''); }, }; return { cacheConfig, typeDefs, localMutations, }; };
Pass the cache config to your Apollo Client:
// index.js // ... import createDefaultClient from '~/lib/graphql'; import { createLocalState } from './graphql/local_state'; const { cacheConfig, typeDefs, localMutations } = createLocalState(); const apolloProvider = new VueApollo({ defaultClient: createDefaultClient({}, { cacheConfig, typeDefs }), }); return new Vue({ el, apolloProvider, provide: { // inject local state mutations to your app localMutations, }, render(h) { return h(MyApp); }, });
Wherever used, the local query will update as the state updates thanks to the reactive variable.
Using with Vuex
When the Apollo Client is used in Vuex and fetched data is stored in the Vuex store, the Apollo Client cache does not need to be, }, );
Working on GraphQL-based features when frontend and backend are not in sync
Any feature that requires GraphQL queries/mutations to be created or updated should be carefully planned. Frontend and backend counterparts should agree on a schema that satisfies both client-side and server-side requirements. This enables both departments to start implementing their parts without blocking each other.
Ideally, the backend implementation should be done prior to the frontend so that the client can immediately start querying the API with minimal back and forth between departments. However, we recognize that priorities don’t always align. For the sake of iteration and delivering work we’re committed to, it might be necessary for the frontend to be implemented ahead of the backend.
Implementing frontend queries and mutations ahead of the backend
In such case, the frontend will define GraphQL schemas or fields that do not correspond to any
backend resolver yet. This is fine as long as the implementation is properly feature-flagged so it
does not translate to public-facing errors in the product. However, we do validate client-side
queries/mutations against the backend GraphQL schema with the
graphql-verify CI job.
You must confirm your changes pass the validation if they are to be merged before the
backend actually supports them. Below are a few suggestions to go about this.
Using the
@client directive
The preferred approach is to use the
@client directive on any new query, mutation, or field that
isn’t yet supported by the backend. Any entity with the directive is skipped by the
graphql-verify validation job.
Additionally Apollo will attempt to resolve them client-side, which can be used in conjunction with Mocking API response with local Apollo cache. This provides a convenient way of testing your feature with fake data defined client-side. When opening a merge request for your changes, it can be a good idea to provide local resolvers as a patch that reviewers can apply in their GDK to easily smoke-test your work.
Make sure to track the removal of the directive in a follow-up issue, or as part of the backend implementation plan.
Adding an exception to the list of known failures
GraphQL queries/mutations validation can be completely turned off for specific files by adding their
paths to the
config/known_invalid_graphql_queries.yml
file, much like you would disable ESLint for some files via an
.eslintignore file.
Bear in mind that any file listed in here will not be validated at all. So if you’re only adding
fields to an existing query, use the
@client directive approach so that the rest of the query
is still validated.
Again, make sure that those overrides are as short-lived as possible by tracking their removal in the appropriate issue.
Feature-flagged queries
In cases where the backend is complete and the frontend is being implemented behind a feature flag, a couple options are available to leverage the feature flag in the GraphQL queries.
The
@include directive
The
@include (or its opposite,
@skip) can be used to control whether an entity should be
included in the query. If the
@include directive evaluates to
false, the entity’s resolver is
not hit and the entity is excluded from the response. For example:
query getAuthorData($authorNameEnabled: Boolean = false) { username name @include(if: $authorNameEnabled) }
Then in the Vue (or JavaScript) call to the query we can pass in our feature flag. This feature flag needs to be already set up correctly. See the feature flag documentation for the correct way to do this.
export default { apollo: { user: { query: QUERY_IMPORT, variables() { return { authorNameEnabled: gon?.features?.authorNameEnabled, }; }, } }, };
Note that, even if the directive evaluates to
false, the guarded entity is sent to the backend and
matched against the GraphQL schema. So this approach requires that the feature-flagged entity
exists in the schema, even if the feature flag is disabled. When the feature flag is turned off, it
is recommended that the resolver returns
null at the very least.
Different versions of a query
There’s another approach that involves duplicating the standard query, and it should be avoided. The copy includes the new entities while the original remains unchanged. It is up to the production code to trigger the right query based on the feature flag’s status. For example:
export default { apollo: { user: { query() { return this.glFeatures.authorNameEnabled ? NEW_QUERY : ORIGINAL_QUERY, } } }, };
This approach is not recommended as it results in bigger merge requests and requires maintaining
two similar queries for as long as the feature flag exists. This can be used in cases where the new
GraphQL entities are not yet part of the schema, or if they are feature-flagged at the schema level
(
new_entity: :feature_flag).
Manually_info.fragment.graphql" query { project(fullPath: "root/my-project") { id issue(iid: "42") { designCollection { designs(atVersion: null, after: "Ihwffmde0i", first: 10) { edges { node { id } } pageInfo { ...PageInfo } } } } } }
Note that we are using the
page_info.fragment.graphql to populate the
pageInfo information.
Using
fetchMore method in components
This approach makes sense to use with user-handled pagination. For example, when the scrolling to fetch more data or explicitly clicking. This allows us to get.
fetchNextPage(endCursor) { this.$apollo.queries.designs.fetchMore({ variables: { // ... The rest of the design variables first: 10, after: endCursor, }, }); }
Defining field merge policy
We would also need to define a field policy to specify how do we want to merge the existing results with the incoming results. For example, if we have
Previous/Next buttons, it makes sense to replace the existing result with the incoming one:
const apolloProvider = new VueApollo({ defaultClient: createDefaultClient( {}, { cacheConfig: { typePolicies: { DesignCollection: { fields: { designs: { merge(existing, incoming) { if (!incoming) return existing; if (!existing) return incoming; // We want to save only incoming nodes and replace existing ones return incoming } } } } } }, }, ), });
When we have an infinite scroll, it would make sense to add the incoming
designs nodes to existing ones instead of replacing. In this case, merge function would be slightly different:
const apolloProvider = new VueApollo({ defaultClient: createDefaultClient( {}, { cacheConfig: { typePolicies: { DesignCollection: { fields: { designs: { merge(existing, incoming) { if (!incoming) return existing; if (!existing) return incoming; const { nodes, ...rest } = incoming; // We only need to merge the nodes array. // The rest of the fields (pagination) should always be overwritten by incoming let result = rest; result.nodes = [...existing.nodes, ...nodes]; return result; } } } } } }, }, ), });
apollo-client provides
a few field policies to be used with paginated queries. Here’s another way to achieve infinite
scroll pagination with the
concatPagination policy:
import { concatPagination } from '@apollo/client/utilities'; import Vue from 'vue'; import VueApollo from 'vue-apollo'; import createDefaultClient from '~/lib/graphql'; Vue.use(VueApollo); export default new VueApollo({ defaultClient: createDefaultClient( {}, { cacheConfig: { typePolicies: { Project: { fields: { dastSiteProfiles: { keyArgs: ['fullPath'], // You might need to set the keyArgs option to enforce the cache's integrity }, }, }, DastSiteProfileConnection: { fields: { nodes: concatPagination(), }, }, }, }, }, ), });
This is similar to the
DesignCollection example above as new page results are appended to the
previous ones.
Using. This allows us
to see if we need to fetch the next page, calling is used as a static key when caching the data.
You’d then be able to retrieve the data without providing any pagination-specific variables.
Here’s an example of a query using the
@connection directive:
#import "~/graphql_shared/fragments/page_info stores
The Apollo client batches queries by default. Given 3 deferred queries, Apollo groups them into one request, sends the single request to the server, and responds after all 3 queries have completed.
If you need to have queries sent as individual requests, additional context can be provided to tell Apollo to do this.
export default { apollo: { user: { query: QUERY_IMPORT, context: { isSingleRequest: true, } } }, };
Polling and Performance
While the Apollo client has support for simple polling, for performance reasons, our ETag-based caching is preferred to hitting the database each time.
After the ETag resource is set up to be cached from backend, there are a few changes to make on the frontend.
First, get your ETag resource from the backend, which should be in the form of a URL path. In the example of the pipelines graph, this is called the
graphql_resource_etag, which is used to create new headers to add to the Apollo context:
/* pipelines/components/graph/utils.js */ /* eslint-disable @gitlab/require-i18n-strings */ const getQueryHeaders = (etagResource) => { return { fetchOptions: { method: 'GET', }, headers: { /* This will depend on your feature */ 'X-GITLAB-GRAPHQL-FEATURE-CORRELATION': 'verify/ci/pipeline-graph', 'X-GITLAB-GRAPHQL-RESOURCE-ETAG': etagResource, 'X-REQUESTED-WITH': 'XMLHttpRequest', }, }; }; /* eslint-enable @gitlab/require-i18n-strings */ /* component.vue */ apollo: { pipeline: { context() { return getQueryHeaders(this.graphqlResourceEtag); }, query: getPipelineDetails, pollInterval: 10000, .. }, },
Here, the apollo query is watching for changes in
graphqlResourceEtag. If your ETag resource dynamically changes, you should make sure the resource you are sending in the query headers is also updated. To do this, you can store and update the ETag resource dynamically in the local cache.
You can see an example of this in the pipeline status of the pipeline editor. The pipeline editor watches for changes in the latest pipeline. When the user creates a new commit, we update the pipeline query to poll for changes in the new pipeline.
# pipeline_etag.query.graphql query getPipelineEtag { pipelineEtag @client }
/* pipeline_editor/components/header/pipeline_status.vue */ import getPipelineEtag from '~/pipeline_editor/graphql/queries/client/pipeline_etag.query.graphql'; apollo: { pipelineEtag: { query: getPipelineEtag, }, pipeline: { context() { return getQueryHeaders(this.pipelineEtag); }, query: getPipelineQuery, pollInterval: POLL_INTERVAL, }, } /* pipeline_editor/components/commit/commit_section.vue */ await this.$apollo.mutate({ mutation: commitCIFile, update(store, { data }) { const pipelineEtag = data?.commitCreate?.commit?.commitPipelinePath; if (pipelineEtag) { store.writeQuery({ query: getPipelineEtag, data: { pipelineEtag } }); } }, });
ETags depend on the request being a
GET instead of GraphQL’s usual
POST. Our default link library does not support
GET requests, so we must let our default Apollo client know to use a different library. Keep in mind, this means your app cannot batch queries.
/* componentMountIndex.js */ const apolloProvider = new VueApollo({ defaultClient: createDefaultClient( {}, { useGet: true, }, ), });
Finally, we can add a visibility check so that the component pauses polling when the browser tab is not active. This should lessen the request load on the page.
/* component.vue */ import { toggleQueryPollingByVisibility } from '~/pipelines/components/graph/utils'; export default { mounted() { toggleQueryPollingByVisibility(this.$apollo.queries.pipeline, POLL_INTERVAL); }, };
You can use this MR as a reference on how to fully implement ETag caching on the frontend.
Once subscriptions are mature, this process can be replaced by using them and we can remove the separate link library and return to batching queries.
How to test ETag caching
You can test that your implementation works by checking requests on the network tab. If there are no changes in your ETag resource, all polled requests should:
- Be
GETrequests instead of
POSTrequests.
- Have an HTTP status of
304instead of
200.
Make sure that caching is not disabled in your developer tools when testing.
If you are using Chrome and keep seeing
200 HTTP status codes, it might be this bug: Developer tools show 200 instead of 304. In this case, inspect the response headers’ source to confirm that the request was actually cached and did return with a
304 status code.
Subscriptions
We use subscriptions to receive real-time updates from GraphQL API via websockets. Currently, the number of existing subscriptions is limited, you can check a list of available ones in GraphqiQL explorer
NOTE: We cannot test subscriptions using GraphiQL, because they require an ActionCable client, which GraphiQL does not support at the moment.
Subscriptions don’t require any additional configuration of Apollo Client instance, you can use them in the application right away. To distinguish subscriptions from queries and mutations, we recommend naming them with
.subscription.graphql extension:
// ~/sidebar/queries/issuable_assignees.subscription.graphql subscription issuableAssigneesUpdated($issuableId: IssuableID!) { issuableAssigneesUpdated(issuableId: $issuableId) { ... on Issue { assignees { nodes { ...User status { availability } } } } } }
When using GraphQL subscriptions in Vue application, we recommend updating existing Apollo query results with subscribeToMore option:
import issuableAssigneesSubscription from '~/sidebar/queries/issuable_assignees.subscription.graphql' apollo: { issuable: { query() { return assigneesQueries[this.issuableType].query; }, subscribeToMore: { // Specify the subscription that will update the query document() { return issuableAssigneesSubscription; }, variables() { return { issuableId: convertToGraphQLId(this.issuableClass, this.issuableId), }; }, }, }, },
We would need also to define a field policy similarly like we do it for the paginated queries
Best Practices
When to use (and not use)
update hook in mutations
Apollo Client’s
.mutate()
method exposes an
update hook that is invoked twice during the mutation lifecycle:
- Once at the beginning. That is, before the mutation has completed.
- Once after the mutation has completed.
You should use this hook only if you’re adding or removing an item from the store
(that is, ApolloCache). If you’re updating an existing item, it is usually represented by
a global
id.
In that case, presence of this
id in your mutation query definition makes the store update
automatically. Here’s an example of a typical mutation query with
id present in it:
mutation issueSetWeight($input: IssueSetWeightInput!) { issuableSetWeight: issueSetWeight(input: $input) { issuable: issue { id weight } errors } }
Testing
Generating the GraphQL schema
Some of our tests load the schema JSON files. To generate these files, run:
bundle exec rake gitlab:graphql:schema:dump
You should run this task after pulling from upstream, or when rebasing your
branch. This is run automatically as part of
gdk update.
tmpdirectory as “Excluded”, you should “Mark Directory As -> Not Excluded” for
gitlab/tmp/tests/graphql. This will allow the JS GraphQL plugin to automatically find and index the schema.
Testing); });
Mocking Apollo Client
To test the components with Apollo operations, we need to mock an Apollo Client in our unit tests. We use
mock-apollo-client library to mock Apollo client and
createMockApollo helper we created on top of it.
We need to inject
VueApollo into the Vue instance by calling
Vue.use(VueApollo). This will install
VueApollo globally for all the tests in the file. It is recommended to call
Vue.use(VueApollo) just after the imports.
import VueApollo from 'vue-apollo'; import Vue from 'vue'; Vue.use(VueApollo); function createMockApolloProvider() { return createMockApollo(requestHandlers); } function createComponent(options = {}) { const { mockApollo } = options; ... return shallowMount(..., { }); }); }); });
In theComponent(LoadingSpinner).exists()).toBe(true) }); it('renders designs list', async () => { const mockApollo = createMockApolloProvider(); const wrapper = createComponent({ mockApollo }); await waitForPromises()(); await waitForPromises() expect(wrapper.find('.test-error').exists()).toBe(true) })
Request handlers can also be passed to component factory as a parameter.
Mutations could be tested the same way: waitForPromises(); expect( findDesigns() .at(0) .props('id'), ).toBe('2'); });
To mock multiple query response states, success and failure, Apollo Client’s native retry behavior can combine with Jest’s mock functions to create a series of responses. These do not need to be advanced manually, but they do need to be awaited in specific fashion.
describe('when query times out', () => { const advanceApolloTimers = async () => { jest.runOnlyPendingTimers(); await waitForPromises() }; beforeEach(async () => { const failSucceedFail = jest .fn() .mockResolvedValueOnce({ errors: [{ message: 'timeout' }] }) .mockResolvedValueOnce(mockPipelineResponse) .mockResolvedValueOnce({ errors: [{ message: 'timeout' }] }); createComponentWithApollo(failSucceedFail); await waitForPromises(); }); it('shows correct errors and does not overwrite populated data when data is empty', async () => { /* fails at first, shows error, no data yet */ expect(getAlert().exists()).toBe(true); expect(getGraph().exists()).toBe(false); /* succeeds, clears error, shows graph */ await advanceApolloTimers(); expect(getAlert().exists()).toBe(false); expect(getGraph().exists()).toBe(true); /* fails again, alert returns but data persists */ await advanceApolloTimers(); expect(getAlert().exists()).toBe(true); expect(getGraph().exists()).toBe(true); }); });
Testing
@client queries
Using }); }); });
When you need to configure the mocked apollo client’s caching behavior, provide additional cache options when creating a mocked client instance and the provided options will merge with the default cache option:
const defaultCacheOptions = { fragmentMatcher: { match: () => true }, addTypename: false, };
function createMockApolloProvider({ props = {}, requestHandlers } = {}) { Vue.use(VueApollo); const mockApollo = createMockApollo( requestHandlers, {}, { dataIdFromObject: (object) => // eslint-disable-next-line no-underscore-dangle object.__typename === 'Requirement' ? object.iid : defaultDataIdFromObject(object), }, ); return mockApollo; }
Handling errors
The GitLab GraphQL mutations have two distinct error modes: Top-level and errors-as-data.
When utilising a GraphQL mutation, consider handling both of these error modes to ensure that the user receives the appropriate feedback when an error occurs.
Top-level errors
These errors are located at the “top level” of a GraphQL response. These are non-recoverable errors including argument errors and syntax errors, and should not be presented directly to the user.
Handling top-level errors
Apollo is aware of top-level errors, so we are able to leverage Apollo’s various error-handling mechanisms to handle these errors. For example, handling Promise rejections after invoking the
mutate method, or handling the
error event emitted from the
ApolloMutation component.
Because these errors are not intended for users, error messages for top-level errors should be defined client-side.
Errors-as-data
These errors are nested in the
data object of a GraphQL response. These are recoverable errors that, ideally, can be presented directly to the user.
Handling errors-as-data
First, we must add
errors to our mutation object:
mutation createNoteMutation($input: String!) { createNoteMutation(input: $input) { note { id + errors } }
Now, when we commit this mutation and errors occur, the response includes is
repository/files.
Troubleshooting
Mocked client returns empty objects instead of mock response
If your unit test is failing because response contains empty objects instead of mock data, you would need to add
__typename field to the mocked response. This happens because mocked client (unlike the real one) does not populate the response with typenames and in some cases we need to do it manually so the client is able to recognize a GraphQL type.
Warning about losing cache data
Sometimes you can see a warning in the console:
Cache data may be lost when replacing the someProperty field of a Query object. To address this problem, either ensure all objects of SomeEntityhave an id or a custom merge function. Please check section about multiple queries to resolve an issue.
- || "/"}) | https://docs.gitlab.com/14.10/ee/development/fe_guide/graphql.html | CC-MAIN-2022-40 | en | refinedweb |
Barcode Software
vb.net create barcode image
The 1990s 2000s in .NET
Integrated qr barcode in .NET The 1990s 2000s
Leslie doesn t realize that she represses her real feelings. She is highly emotional, but she lets her feelings build up and then tries to control them. Eventually, she has a volcanic eruption that surprises both herself and those around her. In addition, some of Leslie s feelings may be masking deeper emotions. For example, she becomes angry when she s actually hurt, and she becomes hurt when she s actually anxious. When Leslie is functioning at her normal moderate self-mastery level, an indirect challenge would be best. When she is functioning closer to high self-mastery, a direct challenge would have the most impact on her.
using button microsoft excel to deploy barcode in asp.net web,windows application
BusinessRefinery.com/barcode
asp.net barcode reader sdk
Using Barcode recognizer for tutorial .net vs 2010 Control to read, scan read, scan image in .net vs 2010 applications.
BusinessRefinery.com/ bar code
Below the menu items are the toolbar buttons or icons. The toolbar buttons allow you to quickly perform a common function within ASDM. Here is a description of the toolbar buttons: Home Displays the Home screen, which lets you view important information about your security appliance such as the status of your interfaces, the version of code you are running, licensing information, and performance information Configuration Displays the Configuration screen, which allows you to configure the features of the appliance
use jasper barcode development to encode bar code for java width
BusinessRefinery.com/ barcodes
asp.net barcode generator open source
using barcode implement for web.net control to generate, create barcodes image in web.net applications. output
BusinessRefinery.com/ bar code
5 SRC IP = 201.201.201.2 DST IP = 200.200.200.1
using barcode generating for reportingservices class control to generate, create bar code image in reportingservices class applications. contact
BusinessRefinery.com/ bar code
using barcode generating for rdlc report files control to generate, create bar code image in rdlc report files applications. additional
BusinessRefinery.com/barcode
FUNCTION MACROS
to access qr codes and qr bidimensional barcode data, size, image with .net barcode sdk namespace
BusinessRefinery.com/QR
use excel microsoft qr drawer to draw qr bidimensional barcode with excel microsoft address
BusinessRefinery.com/QR Code 2d barcode
Join Tables Using a Join Operation in t h e F R O M Clause
add qr code to ssrs report
use sql server reporting services qr code development to build quick response code in .net framework
BusinessRefinery.com/QR Code JIS X 0510
winforms qr code
use .net winforms qr maker to access quick response code with .net numeric
BusinessRefinery.com/qr bidimensional barcode
IPv6 ACLs
qr code size property in .net
BusinessRefinery.com/QRCode
rdlc qr code
using barcode implement for rdlc reports net control to generate, create qr barcode image in rdlc reports net applications. agent
BusinessRefinery.com/QRCode
use word code 128 creation to make code 128 barcode on word email
BusinessRefinery.com/barcode 128a
barcode 128 generator vb.net
using way visual studio .net to get ansi/aim code 128 with asp.net web,windows application
BusinessRefinery.com/code 128c
Combining Eqs. (8.23) and (8.24) yields y y d sin q - 1 = + - cosq . r a a Squaring both sides of Eq. (8.25) and rearranging, we get sin 2 q - sin 2 q 2r 2 r r r d = + - cosq + a y a y ay
crystal reports barcode 128 download
using barcode generation for .net crystal report control to generate, create uss code 128 image in .net crystal report applications. images
BusinessRefinery.com/Code-128
crystal reports pdf 417
use .net barcode pdf417 generating to develop pdf 417 in .net complete
BusinessRefinery.com/pdf417
This page intentionally left blank
rdlc pdf 417
using barcode printing for rdlc reports net control to generate, create pdf 417 image in rdlc reports net applications. protocol
BusinessRefinery.com/barcode pdf417
using implementing word microsoft to make pdf-417 2d barcode in asp.net web,windows application
BusinessRefinery.com/pdf417 2d barcode
Do(n t) you want to . . . Are(n t) you interested in . . . Would(n t) it please you to . . . Do(n t) you want to go to the country Yes, I would. Are(n t) you interested in going to the movies No, I m not interested. Would(n t) it please you to go out Yes, it would. (No) le (te) parece . . . (No) le (te) interesa . . . (No) le (te) gustar a . . . (No) Le (Te) parece bien ir al campo
generate, create code 3 of 9 protocol none in excel spreadsheets projects
BusinessRefinery.com/Code 39
java data matrix generator
generate, create barcode data matrix demo none in java projects
BusinessRefinery.com/Data Matrix
letter-spacing, text-align
To center Inside
Using the Built-in Function Objects
Part I:
(Continued)
V0 1 2 LC
Requests total/allowed/denied 0/0/0 Server timeouts/retries 0/0 Responses received 0 Response time average 60s/300s 0/0 seconds/request <--output omitted-->.
Milestones
3. We do (a), (b), (c), (d). (a) Let u = cos x, du = sin x dx. Then the integral becomes (1 + u2 )2 2u du = (1 + u2 )3 + C. 3
Criticism of the GMPC and public acceptance issues has not really surfaced, in part, because every Malaysian citizen over the age of 12 has long been required to carry a paper identity card containing his thumbprints. By law, each Malaysian must carry his or her government-issued identity card. Moreover, the Constitution of Malaysia does not specifically recognize a right to privacy.
More QR Code on .NET
E l e c t r i c Ve h i c l e H i s t o r y in .NET Creator QR Code
The Tesla Roadster . in .NET Generating qr-codes
Buy New Buy Used Convert New in .NET Draw QRCode
T h e B e s t E l e c t r i c Ve h i c l e f o r Yo u in .NET Print QR Code JIS X 0510
zxing barcode generator java example: Chassis and Design in .NET Encoder Quick Response Code in .NET Chassis and Design
vb.net create barcode image: Aerodynamic Drag Force Defined in .NET Create qr barcode in .NET Aerodynamic Drag Force Defined
Chassis and Design in .NET Display QR Code 2d barcode
Drivetrain type Front wheel drive Rear wheel drive in .NET Integrate QR Code ISO/IEC18004
Calculation Overview in .NET Drawer QR Code ISO/IEC18004
Vehicle Speed in .NET Incoporate QR Code 2d barcode
This page intentionally left blank in .NET Development QR Code JIS X 0510
vb.net create barcode image: DC Motors in General in .NET Print qr bidimensional barcode in .NET DC Motors in General
Torque Characteristics in .NET Integrate QR Code ISO/IEC18004
AC Induction Motors in .NET Integrating QR Code JIS X 0510
zxing barcode generator java example: Tomorrow s Best EV Motor Solution in .NET Creation QR in .NET Tomorrow s Best EV Motor Solution
Build Your Own Elec tric Vehicle in .NET Maker QR Code JIS X 0510
Zilla Models in .NET Maker QR Code JIS X 0510
Performance and torque ver sus an inter nal combustion engine (Cour tesy of T esla in .NET Compose QR Code 2d barcode
Charging Chemical Reaction in .NET Creator qrcode
Articles you may be interested
print barcode in vb.net: *The AIA website is. The CMAA website is. in Software Generator Code 39 in Software *The AIA website is. The CMAA website is.
free barcode font for vb.net: Laboratory Manual in Software Generating qr codes in Software Laboratory Manual
2d barcode vb.net: C++ from the Ground Up in .NET Drawer 2d Data Matrix barcode in .NET C++ from the Ground Up
vb.net barcode generator source code: A Sampling of Methods De ned by SortedDictionary TK, TV in C# Encoder QR Code 2d barcode in C# A Sampling of Methods De ned by SortedDictionary TK, TV
print barcode in vb.net: DESIGNING AN ENTERPRISE APPLICATION DELIVERY ARCHITECTURE in Software Attach qr barcode in Software DESIGNING AN ENTERPRISE APPLICATION DELIVERY ARCHITECTURE
java barcode reader free download: Need for reliable communications in Software Generation qr codes in Software Need for reliable communications
download barcode font for vb.net: Collectively referred to as the set of Ethernet service attributes. in Objective-C Access Data Matrix barcode in Objective-C Collectively referred to as the set of Ethernet service attributes.
barcode font reporting services: Exploring the System Namespace in visual C# Build qr-codes in visual C# Exploring the System Namespace
ssrs ean 13: + * / Addition Subtraction Multiplication Division in .net C# Draw QR Code 2d barcode in .net C# + * / Addition Subtraction Multiplication Division
rdlc barcode: 7: Business Continuity and Disaster Recovery in Software Generation USS Code 39 in Software 7: Business Continuity and Disaster Recovery
how to generate barcode in rdlc report: Making Sense of the COSO Cube and the COSO Pyramid in Software Paint 39 barcode in Software Making Sense of the COSO Cube and the COSO Pyramid
barcode recognition .net open source: Parental Lock in Software Creation QRCode in Software Parental Lock
barcode font in vb.net: Getting Started in Software Render qr barcode in Software Getting Started
integrate barcode scanner into asp.net web application: Define the M e a n i n g of Data in Software Development QRCode in Software Define the M e a n i n g of Data
2d barcode vb.net: Virtual Functions and Polymorphism in .NET Get barcode data matrix in .NET Virtual Functions and Polymorphism
generate qr code using excel: CAM DESIGN HANDBOOK in Software Drawer barcode data matrix in Software CAM DESIGN HANDBOOK
barcode in vb.net source code: CERTIFICATION OBJECTIVE 16.02 in Objective-C Get QR Code ISO/IEC18004 in Objective-C CERTIFICATION OBJECTIVE 16.02
barcode printer vb.net: Global Variables in .NET Develop ECC200 in .NET Global Variables
java api barcode scanner: Charge-Charge Forces in .NET Writer QR in .NET Charge-Charge Forces
barcode reader using java source code: Biometrics in Software Encode barcode standards 128 in Software Biometrics | http://www.businessrefinery.com/yc3/443/31/ | CC-MAIN-2022-40 | en | refinedweb |
Hi all,
Since this month’s addition of new structures, I’m seeing some issues with MPRester(). The issue occurs with a large fraction of structures when using the basic lines to grab a MP structure:
from pymatgen import MPRester mpr = MPRester(<my_key>) structure = mpr.get_structure_by_material_id(<mp_id>)
I’m seeing this occur with roughly 10% of my structures (from a pool of >2,000 mp-ids), but the structures that fail don’t look to be depreciated or removed from the MP database. Any idea what’s going on?
Here are two example mp-ids that fail with MPRester().get_structure_by_material_id(): mp-1113805 and mp-546846. Interestingly, the former can’t load the data on it’s mp webpage while the latter can (link1 and link2).
Let me know if there’s any insight on what’s going on!
-Jack | https://matsci.org/t/mprester-issue-from-this-months-update/3327 | CC-MAIN-2022-40 | en | refinedweb |
How to write to file in C#
File output can be used by C# programs to communicate with other programs written in different programming languages, or with human beings.
This post documents my experiences in writing to files in C#.net.
Specifying file access
For file writes, we must use either the
Write or the
ReadWrite member of the
System.IO.FileAccess enumeration to specify write access.
Specifying file mode
Apart from specifying file access, we specify the file mode via one of the members of the
System.IO.FileMode enumeration. The file mode determines how the operating system will open the file for our code to write to.
Getting an instance of System.IO.FileStream
As with file creation, there are a few ways gain code access to a file which we intend to write data:
- Using the
System.IO.FileStreamconstructors.
FileStream fileStream = new FileStream("techcoil.txt", FileMode.Append, FileAccess.Write);
- Using the static
Openmethod of
System.IO.Fileclass.
FileStream fileStream = File.open("techcoil.txt", FileMode.Append, FileAccess.Write);
- Using the
Openmethod of the
System.IO.FileInfoclass.
FileInfo fileInfo = new FileInfo("techcoil.txt"); FileStream fileStream = fileInfo.Open(FileMode.Append, FileAccess.Write);
The above code segments get the operating system to open techcoil.txt for writing to the end of the file. If techcoil.txt does not exist, the operating system will create it.
Writing text data to file
To write text data to the file directly, we can encapsulate it in a
System.IO.StreamWriter instance and use the
Write or
WriteLine method to write text data to the file. The following example writes the current date time to the end of techcoil.txt.
using System; using System.IO; using System.Text; public class WriteTextToFile { public static void Main(string[] args) { try { // If techcoil.txt exists, seek to the end of the file, // else create a new one. FileStream fileStream = File.Open("techcoil.txt", FileMode.Append, FileAccess.Write); // Encapsulate the filestream object in a StreamWriter instance. StreamWriter fileWriter = new StreamWriter(fileStream); // Write the current date time to the file fileWriter.WriteLine(System.DateTime.Now.ToString()); fileWriter.Flush(); fileWriter.Close(); } catch (IOException ioe) { Console.WriteLine(ioe); } } }
Writing binary data to file
To write binary data to file, we can use the
Write method of the
FileStream instance. As with the previous example, the following code writes the current date time string to the end of techcoil.txt. However, it converts the current date time string as bytes before writing to techcoil.txt.
public class WriteBinaryToFile { public static void Main(string[] args) { try { // If techcoil.txt exists, seek to the end of the file, // else create a new one. FileInfo fileInfo = new FileInfo("techcoil.txt"); FileStream fileStream = fileInfo.Open(FileMode.Append, FileAccess.Write); // Get the current date time as a string and add a new line to // the end of the string String currentDateTimeString = System.DateTime.Now.ToString() + Environment.NewLine; // Get the current date time string as bytes byte[] currentDateTimeStringInBytes = ASCIIEncoding.UTF8.GetBytes(currentDateTimeString); // Write those bytes to the techcoil.txt fileStream.Write(currentDateTimeStringInBytes, 0, currentDateTimeStringInBytes.Length); fileStream.Flush(); fileStream.Close(); } catch (IOException ioe) { Console.WriteLine(ioe); } // end try-catch } // end public static void Main(string[] args) } // end public class WriteBinaryToFile | https://www.techcoil.com/blog/how-to-write-to-file-in-c/ | CC-MAIN-2022-40 | en | refinedweb |
Batch processing—typified by bulk-oriented, non-interactive, and frequently long running, background execution—is widely used across virtually every industry and is applied to a diverse array of tasks. Batch processing may be data or computationally intensive, execute sequentially or in parallel, and may be initiated through various invocation models, including ad hoc, scheduled, and on-demand.
This Spring Batch tutorial explains the programming model and the domain language of batch applications in general and, in particular, shows some useful approaches to the design and development of batch applications using the current Spring Batch 3.0.7 version.
What is Spring Batch?
Spring Batch is a lightweight, comprehensive framework designed to facilitate development of robust batch applications. It also provides more advanced technical services and features that support extremely high volume and high performance batch jobs through its optimization and partitioning techniques. Spring Batch builds upon the POJO-based development approach of the Spring Framework, familiar to all experienced Spring developers.
By way of example, this article considers source code from a sample project that loads an XML-formatted customer file, filters customers by various attributes, and outputs the filtered entries to a text file. The source code for our Spring Batch example (which makes use of Lombok annotations) is available here on GitHub and requires Java SE 8 and Maven.
What is Batch Processing? Key Concepts and Terminology
It is important for any batch developer to be familiar and comfortable with the main concepts of batch processing. The diagram below is a simplified version of the batch reference architecture that has been proven through decades of implementations on many different platforms. It introduces the key concepts and terms relevant to batch processing, as used by Spring Batch.
As shown in our batch processing example, a batch process is typically encapsulated by a
Job consisting of multiple
Steps. Each
Step typically has a single
ItemReader,
ItemProcessor, and
ItemWriter. A
Job is executed by a
JobLauncher, and metadata about configured and executed jobs is stored in a
JobRepository.
Each
Job may be associated with multiple
JobInstances, each of which is defined uniquely by its particular
JobParameters that are used to start a batch job. Each run of a
JobInstance is referred to as a
JobExecution. Each
JobExecution typically tracks what happened during a run, such as current and exit statuses, start and end times, etc.
A
Step is an independent, specific phase of a batch
Job, such that every
Job is composed of one or more
Steps. Similar to a
Job, a
Step has an individual
StepExecution that represents a single attempt to execute a
Step.
StepExecution stores the information about current and exit statuses, start and end times, and so on, as well as references to its corresponding
Step and
JobExecution instances..
JobRepository is the mechanism in Spring Batch that makes all this persistence possible. It provides CRUD operations for
JobLauncher,
Job, and
Step instantiations. Once a
Job is launched, a
JobExecution is obtained from the repository and, during the course of execution,
StepExecution and
JobExecution instances are persisted to the repository.
Getting Started with Spring Batch Framework
One of the advantages of Spring Batch is that project dependencies are minimal, which makes it easier to get up and running quickly. The few dependencies that do exist are clearly specified and explained in the project’s
pom.xml, which can be accessed here.
The actual startup of the application happens in a class looking something like the following:
@EnableBatchProcessing @SpringBootApplication public class BatchApplication { public static void main(String[] args) { prepareTestData(1000); SpringApplication.run(BatchApplication.class, args); } }
The
@EnableBatchProcessing annotation enables Spring Batch features and provides a base configuration for setting up batch jobs.
The
@SpringBootApplication annotation comes from the Spring Boot project that provides standalone, production-ready, Spring-based applications. It specifies a configuration class that declares one or more Spring beans and also triggers auto-configuration and Spring’s component scanning.
Our sample project has only one job that is configured by
CustomerReportJobConfig with an injected
JobBuilderFactory and
StepBuilderFactory. The minimal job configuration can be defined in
CustomerReportJobConfig as follows:
@Configuration public class CustomerReportJobConfig { @Autowired private JobBuilderFactory jobBuilders; @Autowired private StepBuilderFactory stepBuilders; @Bean public Job customerReportJob() { return jobBuilders.get("customerReportJob") .start(taskletStep()) .next(chunkStep()) .build(); } @Bean public Step taskletStep() { return stepBuilders.get("taskletStep") .tasklet(tasklet()) .build(); } @Bean public Tasklet tasklet() { return (contribution, chunkContext) -> { return RepeatStatus.FINISHED; }; } }
There are two main approaches to building a step.
One approach, as shown in the above example, is tasklet-based. A
Tasklet supports a simple interface that has only one method,
execute(), which is called repeatedly until it either returns
RepeatStatus.FINISHED or throws an exception to signal a failure. Each call to the
Tasklet is wrapped in a transaction.
Another approach, chunk-oriented processing, refers to reading the data sequentially and creating “chunks” that will be written out within a transaction boundary. Each individual item is read in from an
ItemReader, handed to an
ItemProcessor, and aggregated. Once the number of items read equals the commit interval, the entire chunk is written out via the
ItemWriter, and then the transaction is committed. A chunk-oriented step can be configured as follows:
@Bean public Job customerReportJob() { return jobBuilders.get("customerReportJob") .start(taskletStep()) .next(chunkStep()) .build(); } @Bean public Step chunkStep() { return stepBuilders.get("chunkStep") .<Customer, Customer>chunk(20) .reader(reader()) .processor(processor()) .writer(writer()) .build(); }
The
chunk() method builds a step that processes items in chunks with the size provided, with each chunk then being passed to the specified reader, processor, and writer. These methods are discussed in more detail in the next sections of this article.
Custom Reader
For our Spring Batch sample application, in order to read a list of customers from an XML file, we need to provide an implementation of the interface
org.springframework.batch.item.ItemReader:
public interface ItemReader<T> { T read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException; }
An
ItemReader provides the data and is expected to be stateful. It is typically called multiple times for each batch, with each call to
read() returning the next value and finally returning
null when all input data has been exhausted.
Spring Batch provides some out-of-the-box implementations of
ItemReader, which can be used for a variety of purposes such as reading collections, files, integrating JMS and JDBC as well as multiple sources, and so on.
In our sample application, the
CustomerItemReader class delegates actual
read() calls to a lazily initialized instance of the
IteratorItemReader class:
public class CustomerItemReader implements ItemReader<Customer> { private final String filename; private ItemReader<Customer> delegate; public CustomerItemReader(final String filename) { this.filename = filename; } @Override public Customer read() throws Exception { if (delegate == null) { delegate = new IteratorItemReader<>(customers()); } return delegate.read(); } private List<Customer> customers() throws FileNotFoundException { try (XMLDecoder decoder = new XMLDecoder(new FileInputStream(filename))) { return (List<Customer>) decoder.readObject(); } } }
A Spring bean for this implementation is created with the
@Component and
@StepScope annotations, letting Spring know that this class is a step-scoped Spring component and will be created once per step execution as follows:
@StepScope @Bean public ItemReader<Customer> reader() { return new CustomerItemReader(XML_FILE); }
Custom Processors
ItemProcessors transform input items and introduce business logic in an item-oriented processing scenario. They must provide an implementation of the interface
org.springframework.batch.item.ItemProcessor:
public interface ItemProcessor<I, O> { O process(I item) throws Exception; }
The method
process() accepts one instance of the
I class and may or may not return an instance of the same type. Returning
null indicates that the item should not continue to be processed. As usual, Spring provides few standard processors, such as
CompositeItemProcessor that passes the item through a sequence of injected
ItemProcessors and a
ValidatingItemProcessor that validates input.
In the case of our sample application, processors are used to filter customers by the following requirements:
- A customer must be born in the current month (e.g., to flag for birthday specials, etc.)
- A customer must have less than five completed transactions (e.g., to identify newer customers)
The “current month” requirement is implemented via a custom
ItemProcessor:
public class BirthdayFilterProcessor implements ItemProcessor<Customer, Customer> { @Override public Customer process(final Customer item) throws Exception { if (new GregorianCalendar().get(Calendar.MONTH) == item.getBirthday().get(Calendar.MONTH)) { return item; } return null; } }
The “limited number of transactions” requirement is implemented as a
ValidatingItemProcessor:
public class TransactionValidatingProcessor extends ValidatingItemProcessor<Customer> { public TransactionValidatingProcessor(final int limit) { super( item -> { if (item.getTransactions() >= limit) { throw new ValidationException("Customer has less than " + limit + " transactions"); } } ); setFilter(true); } }
This pair of processors is then encapsulated within a
CompositeItemProcessor that implements the delegate pattern:
@StepScope @Bean public ItemProcessor<Customer, Customer> processor() { final CompositeItemProcessor<Customer, Customer> processor = new CompositeItemProcessor<>(); processor.setDelegates(Arrays.asList(new BirthdayFilterProcessor(), new TransactionValidatingProcessor(5))); return processor; }
Custom Writers
For outputting the data, Spring Batch provides the interface
org.springframework.batch.item.ItemWriter for serializing objects as necessary:
public interface ItemWriter<T> { void write(List<? extends T> items) throws Exception; }. There are standard implementations such as
CompositeItemWriter,
JdbcBatchItemWriter,
JmsItemWriter,
JpaItemWriter,
SimpleMailMessageItemWriter, and others.
In our sample application, the list of filtered customers is written out as follows:
public class CustomerItemWriter implements ItemWriter<Customer>, Closeable { private final PrintWriter writer; public CustomerItemWriter() { OutputStream out; try { out = new FileOutputStream("output.txt"); } catch (FileNotFoundException e) { out = System.out; } this.writer = new PrintWriter(out); } @Override public void write(final List<? extends Customer> items) throws Exception { for (Customer item : items) { writer.println(item.toString()); } } @PreDestroy @Override public void close() throws IOException { writer.close(); } }
Scheduling Spring Batch Jobs
By default, Spring Batch executes all jobs it can find (i.e., that are configured as in
CustomerReportJobConfig) at startup. To change this behavior, disable job execution at startup by adding the following property to
application.properties:
spring.batch.job.enabled=false
The actual scheduling is then achieved by adding the
@EnableScheduling annotation to a configuration class and the
@Scheduled annotation to the method that executes the job itself. Scheduling can be configured with delay, rates, or cron expressions:
// run every 5000 msec (i.e., every 5 secs) @Scheduled(fixedRate = 5000) public void run() throws Exception { JobExecution execution = jobLauncher.run( customerReportJob(), new JobParametersBuilder().toJobParameters() ); }
There is a problem with the above example though. At run time, the job will succeed the first time only. When it launches the second time (i.e. after five seconds), it will generate the following messages in the logs (note that in previous versions of Spring Batch a
JobInstanceAlreadyCompleteException would have been thrown):
INFO 36988 --- [pool-2-thread-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=customerReportJob]] launched with the following parameters: [{}] INFO 36988 --- [pool-2-thread-1] o.s.batch.core.job.SimpleStepHandler : Step already complete or not restartable, so no action to execute: StepExecution: id=1, version=3, name=taskletStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0, exitDescription= INFO 36988 --- [pool-2-thread-1] o.s.batch.core.job.SimpleStepHandler : Step already complete or not restartable, so no action to execute: StepExecution: id=2, version=53, name=chunkStep, status=COMPLETED, exitStatus=COMPLETED, readCount=1000, filterCount=982, writeCount=18 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=51, rollbackCount=0, exitDescription=
This happens because only unique
JobInstances may be created and executed and Spring Batch has no way of distinguishing between the first and second
JobInstance.
There are two ways of avoiding this problem when you schedule a batch job.
One is to be sure to introduce one or more unique parameters (e.g., actual start time in nanoseconds) to each job:
@Scheduled(fixedRate = 5000) public void run() throws Exception { jobLauncher.run( customerReportJob(), new JobParametersBuilder().addLong("uniqueness", System.nanoTime()).toJobParameters() ); }
Alternatively, you can launch the next job in a sequence of
JobInstances determined by the
JobParametersIncrementer attached to the specified job with
SimpleJobOperator.startNextInstance():
@Autowired private JobOperator operator; @Autowired private JobExplorer jobs; @Scheduled(fixedRate = 5000) public void run() throws Exception { List<JobInstance> lastInstances = jobs.getJobInstances(JOB_NAME, 0, 1); if (lastInstances.isEmpty()) { jobLauncher.run(customerReportJob(), new JobParameters()); } else { operator.startNextInstance(JOB_NAME); } }
Spring Batch Unit Testing
Usually, to run unit tests in a Spring Boot application, the framework must load a corresponding
ApplicationContext. Two annotations are used for this purpose:
@RunWith(SpringRunner.class) @ContextConfiguration(classes = {...})
There is a utility class
org.springframework.batch.test.JobLauncherTestUtils to test batch jobs. It provides methods for launching an entire job as well as allowing for end-to-end testing of individual steps without having to run every step in the job. It must be declared as a Spring bean:
@Configuration public class BatchTestConfiguration { @Bean public JobLauncherTestUtils jobLauncherTestUtils() { return new JobLauncherTestUtils(); } }
A typical test for a job and a step looks as follows (and can use any mocking frameworks as well):
@RunWith(SpringRunner.class) @ContextConfiguration(classes = {BatchApplication.class, BatchTestConfiguration.class}) public class CustomerReportJobConfigTest { @Autowired private JobLauncherTestUtils testUtils; @Autowired private CustomerReportJobConfig config; @Test public void testEntireJob() throws Exception { final JobExecution result = testUtils.getJobLauncher().run(config.customerReportJob(), testUtils.getUniqueJobParameters()); Assert.assertNotNull(result); Assert.assertEquals(BatchStatus.COMPLETED, result.getStatus()); } @Test public void testSpecificStep() { Assert.assertEquals(BatchStatus.COMPLETED, testUtils.launchStep("taskletStep").getStatus()); } }
Spring Batch introduces additional scopes for step and job contexts. Objects in these scopes use the Spring container as an object factory, so there is only one instance of each such bean per execution step or job. In addition, support is provided for late binding of references accessible from the
StepContext or
JobContext. The components that are configured at runtime to be step- or job-scoped are tricky to test as standalone components unless you have a way to set the context as if they were in a step or job execution. That is the goal of the
org.springframework.batch.test.StepScopeTestExecutionListener and
org.springframework.batch.test.StepScopeTestUtils components in Spring Batch, as well as
JobScopeTestExecutionListener and
JobScopeTestUtils.
The
TestExecutionListeners are declared at the class level, and its job is to create a step execution context for each test method. For example:
@RunWith(SpringRunner.class) @TestExecutionListeners({DependencyInjectionTestExecutionListener.class, StepScopeTestExecutionListener.class}) @ContextConfiguration(classes = {BatchApplication.class, BatchTestConfiguration.class}) public class BirthdayFilterProcessorTest { @Autowired private BirthdayFilterProcessor processor; public StepExecution getStepExecution() { return MetaDataInstanceFactory.createStepExecution(); } @Test public void filter() throws Exception { final Customer customer = new Customer(); customer.setId(1); customer.setName("name"); customer.setBirthday(new GregorianCalendar()); Assert.assertNotNull(processor.process(customer)); } }
There are two
TestExecutionListeners. One is from the regular Spring Test framework and handles dependency injection from the configured application context. The other is the Spring Batch
StepScopeTestExecutionListener that sets up step-scope context for dependency injection into unit tests. A
StepContext is created for the duration of a test method and made available to any dependencies that are injected. The default behavior is just to create a
StepExecution with fixed properties. Alternatively, the
StepContext can be provided by the test case as a factory method returning the correct type.
Another approach is based on the
StepScopeTestUtils utility class. This class is used to create and manipulate
StepScope in unit tests in a more flexible way without using dependency injection. For example, reading the ID of the customer filtered by the processor above could be done as follows:
@Test public void filterId() throws Exception { final Customer customer = new Customer(); customer.setId(1); customer.setName("name"); customer.setBirthday(new GregorianCalendar()); final int id = StepScopeTestUtils.doInStepScope( getStepExecution(), () -> processor.process(customer).getId() ); Assert.assertEquals(1, id); }
Ready for Advanced Spring Batch?
This article introduces some of the basics of design and development of Spring Batch applications. However, there are many more advanced topics and capabilities—such as scaling, parallel processing, listeners, and more—that are not addressed in this article. Hopefully, this article provides a useful foundation for getting started.
Information on these more advanced topics can then be found in the official Spring Back documentation for Spring Batch.
Understanding the basics
Spring Batch is a lightweight, comprehensive framework designed to facilitate development of robust batch applications. It also provides more advanced technical services and features that support extremely high volume and high performance batch jobs through its optimization and partitioning techniques.
A "Step" is an independent, specific phase of a batch "Job", such that every Job is composed of one or more Steps.
"JobRepository" is the mechanism in Spring Batch that makes all this persistence possible. It provides CRUD operations for JobLauncher, Job, and Step instantiations.
One approach is tasklet-based, where a Tasklet supports a simple interface with a single execute() method. The other approach, **chunk-oriented processing**, refers to reading the data sequentially and creating "chunks" that will be written out within a transaction boundary. | https://www.toptal.com/spring/spring-batch-tutorial | CC-MAIN-2022-40 | en | refinedweb |
Internally, list package implements doubly linked list.
How to import list package in go golang:
import "container/list"
Example:
package main import ( "container/list" "fmt" ) func main() { // Create a new list and insert elements in it. l := list.New() l.PushBack(1) // 1 l.PushBack(2) // 1 -> 2 l.PushFront(3) // 3 -> 1 -> 2 l.PushBack(4) // 3 -> 1 -> 2 -> 4 // Iterate through list and print its elements. for ele := l.Front(); ele != nil; ele = ele.Next() { fmt.Println(ele.Value) } }
Output:
3
1
2
4
There are two structures used in list package.
- Element structure
- List structure
“Element” structure in list package
Here, Element is an element of a linked list.
type Element struct {
// next and prev are pointers of doubly-linked list.
next, prev *Element
// The list to which this element belongs.
list *List
// The value stored with this element.
Value interface{}
}
“List” structure in list package
List represents a doubly linked list. The zero value for List is an empty list and it is ready to use.
type List struct {
// contains filtered or unexported fields
}
list package has following given below set of functions to perform operation on linked list.
Functions under “Element” structure:
Functions under “List” structure:
To learn more about golang, Please refer given below link.
References: | https://www.techieindoor.com/go-list-package-in-go-golang/ | CC-MAIN-2022-40 | en | refinedweb |
hards.java
ListShards.java demonstrates how to list the shards in a Kinesis data stream.
/* *.kinesis; import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient; import software.amazon.awssdk.services.kinesis.KinesisAsyncClient; import software.amazon.awssdk.services.kinesis.model.ListShardsRequest; import software.amazon.awssdk.services.kinesis.model.ListShardsResponse; public class ListShards { public static void main(String[] args) { String name = args[0]; KinesisAsyncClient client = KinesisAsyncClient.builder() .httpClientBuilder(NettyNioAsyncHttpClient.builder() .maxConcurrency(100) .maxPendingConnectionAcquires(10_000)) .build(); ListShardsRequest request = ListShardsRequest.builder() .streamName(name) .build(); ListShardsResponse response = client.listShards(request).join(); System.out.println(request.streamName() + " has " + response.shards()); } }
Sample Details
Service: kinesis
Last tested: 2019-06-28
Author: jschwarzwalder AWS
Type: full-example | https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-kinesis-src-main-java-com-example-kinesis-ListShards.java.html | CC-MAIN-2019-43 | en | refinedweb |
The phrase “better safe than sorry” gets thrown around whenever people talk about monitoring or getting observability into your AWS resources but the truth is that you can’t sit around and wait until a problem arises, you need to proactively look for opportunities to improve your application in order to stay one step ahead of the competition. Setting up alerts that go off whenever a particular event happens is a great way to keep tabs on what’s going on behind the scenes of your serverless applications and this is exactly what I’d like to tackle in this article.
AWS Lambda Metrics
AWS Lambda is monitoring functions for you automatically, while it reports metrics through the Amazon CloudWatch. The metrics we speak of consist of total invocations, throttles, duration, error, DLQ errors, etc. You should consider CloudWatch as a metrics repository, being that metrics are the basic concept in CloudWatch and they represent a set of data points which are time-ordered. Metrics are defined by name, one or even more dimensions, as well as a namespace. Every data point has an optional unit of measure and a time stamp.
And while Cloudwatch is a good tool to get the metrics of your functions, Dashbird takes it up a notch by providing that missing link that you’d need in order to properly debug those pesky Lambda issues. It allows you to detect any kinds of failures within all programming languages supported by the platform. This includes crashes, configuration errors, timeouts, early exits, etc. Another quite valuable thing that Dashbird offers is Error Aggregation that allows you to see immediate metrics about errors, memory utilization, duration, invocations as well as code execution.
AWS Lambda metrics explained
Before we jump in I feel like we should discuss the metrics themselves to make sure we all understand and know what every term means or what they refer too.
From there, we’ll take a peek at some of the namespace metrics inside the AWS Lambda, and we’ll explain how do they operate. For example
Invocations will calculate the number of times a function has been invoked in response to invocation API call or to an event which substitutes the RequestCount metric. All of this includes the successful and failed invocations, but it doesn’t include the throttled attempts. You should note that AWS Lambda will send mentioned metrics to CloudWatch only if their value is at the point of nonzero.
Errors will primarily measure the number of failed invocations that happened because of the errors in the function itself which is a substitution for ErrorCount metric. Failed invocations are able to start a retry attempt which can be successful.
There are limitations we must mention:
- Doesn’t include the invocations that have failed because of the invocation rates exceeded the concurrent limits which were set by default (429 error code).
- Doesn’t include failures that occurred because of the internal service errors (500 error code).
DeadLetterErrors can start a discrete increase in numbers when Lambda is not able to write the failed payload event to your pre-configured DeadLetter lines. This incursion could happen due to permission errors, misconfigured resources, timeouts or even because of the throttles from downstream services.
Duration will measure the real-time beginning when the function code starts performing as a result of an invocation up until it stops executing. The duration will be rounded up to closest 100 milliseconds for billing. It’s notable that AWS Lambda sends these metrics to CloudWatch only if the value is nonzero.
Throttles will calculate the number of times a Lambda function has attempted an invocation and were throttled by the invocation rates that exceed the users’ concurrent limit (429 error code). You should also be aware that the failed invocations may trigger retry attempts automatically which can be successful.
Iterator Age is used for stream-based invocations only. These functions are triggered by one of the two streams: Amazon’s DynamoDB or Kinesis stream. Measuring the age of the last record for every batch of record processed. Age is the sole difference from the time Lambda receives the batch and the time the last record from the batch was written into the stream.
Concurrent Executions are basically an aggregate metric system for all functions inside the account, as well as for all other functions with a custom concurrent pre-set limit. Concurrent executions are not applicable for different forms and versions of functions. Basically, this means that it measures the sum of concurrent executions in a particular function from a certain point in time. It is crucial for it to be viewed as an average metric considering its aggregated across the time period.
Unreserved Concurrent Executions are almost the same as Concurrent Executions, but they represent the sum of the concurrency of the functions that don’t have custom concurrent limits specified. They apply only to the user’s account, and they need to be viewed as an average metric if they’re aggregated across the period of time.
Where do you start?
Cloudwatch
In order to access the metrics using the CloudWatch console, you should open the console and in the navigational panel choose the metrics option. Furthermore, in the CloudWatch Metrics by Category panel, you should select the Lambda Metrics option.
Dashbird
To access your metrics you need to log in the app and the first screen will show you a bird’s eye view of all the important stats of your functions. From cost, invocations, memory utilization, function duration as well as errors. Everything is conveniently packed on to a single screen.
Setting Up Metric Based Alarms For Lambda Functions
It is essential to set up alarms that will notify you when your Lambda function ends up with an error, so you’ll be able to react proficiently.
Cloudwatch
To set up an alarm for failed function (can be caused by the fall of the entire website or even an error in the code) you should go to the CloudWatch console, choose Alarms on your left and click Create Alarm. Choose the “Lambda Metrics,” and from there, you should look for your Lambda name in the list. From there, check the box of a row where the metric name is “Error.” Then just click Next.
Now, you’ll be able to put a name and a description for the alarm. From here, you should set up the alarm to be triggered every time “Errors” are over 0, for one continuous period. As the Statistic, select the “sum” and the minutes required for your particular case in the dropdown “Period” window.
Inside the Notification box, choose the “select notification list” in a dropdown menu and choose your SNS endpoint. The last step in this setup is to click the “Create Alarm” button.
Dashbird
Setting metric-based alerts with Dashbird is not as complicated, in fact it’s quite the opposite. While in the app, go to the Alerts menu and click on the add button on the right side of your screen and give it a name. After that you select the metric you are interested in, which can either be a coldstart, retry, invocation and of course error. All you have to do is select the rules (eg: whenever the number of coldstards are over 5 in a 10-minute window alert me) and you are done.
How do you pick the right solution for your metric based alerts?
Though question. While Cloudwatch is a great tool, the second you have more lambdas in your system you’ll find it very hard to debug or even understand your errors due to the large volume of information. Dashbird, on the other hand, offers details about your invocations and errors that are simple and concise and have a lot more flexibility when it comes to customization. My colleague Renato made a simple table that compares the two services.
I’d be remiss not to make an observation: with AWS CloudWatch whenever a function is invoked, they spin up a micro-container to serve the request and open a log stream in CloudWatch for it. They re-use the same log stream as long as this container remains alive. This means the same log stream gets logs from multiple invocations in one place.
This quickly gets very messy and it’s hard to debug issues because you need to open the latest log stream and browse all the way down to the latest invocations logs While in Dashbird we show individual invocations ordered by time which makes it a lot easier for developers to understand what’s going on at any point in time.
Have anything useful to add? Please do so in the comment box below.
read original article here | https://coinerblog.com/getting-down-and-dirty-with-metric-based-alerting-for-aws-lambda-44dee79df49a/ | CC-MAIN-2019-43 | en | refinedweb |
Open Sourcing XAML Behaviors for WPF
Karan
Today,.
I followed these steps closely to convert an F# WPF app across.
However, even though the XAML editor intellisense sees the Interaction namespace and its members, when I come to run the app I get the following:
XamlObjectWriterException: ‘Cannot set unknown member ‘{}Interaction.Triggers’
Any ideas? I have the C# sample building and running correctly so why does this go wrong in F#?
Versions: VS2019 Pro, F# 4.6
Same issue here. Have you discovered a workaround? We are also converting an F# WPF application to .NET Core. | https://devblogs.microsoft.com/dotnet/open-sourcing-xaml-behaviors-for-wpf/ | CC-MAIN-2019-43 | en | refinedweb |
Windows Server 2008 R2 SP1 Technical Overview Published: October 2010 © 2010 Microsoft Corporation. All rights reserved. This document is developed prior to the product‘s, Excel, Hyper-V, MSDN, Silverlight, Visual Studio, Windows, the Windows logo, Windows PowerShell, Windows 7, and Windows Server R2 are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are property of their respective owners. Table of Contents ® Introduction to Windows Server 2008 R2 .......................................................................... 1 Overview .......................................................................................................................... 1 Using this Guide ............................................................................................................... 1 Virtualization ....................................................................................................................... 2 Server Virtualization with Hyper-V ................................................................................... 2 Increased Availability for Moving Virtual Machines ...................................................... 3 Increased Availability for Addition and Removal of Virtual Machine Storage ............ 12 Improved Management of Virtual Datacenters .......................................................... 12 Simplified Method for Physical and Virtual Computer Deployments ......................... 14 Hyper-V Processor Compatibility Mode for Live Migration ........................................ 14 Improved Virtual Networking Performance ................................................................ 16 Improved Virtual Machine Memory Management...................................................... 16 Terminal Services Becomes Remote Desktop Services for Improved Presentation Virtualization............................................................................................................... 18 Remote Desktop Services and Virtual Desktop Infrastructure .................................... 19 Improved User Experience When Accessing Media Rich Content .............................. 25 Management ..................................................................................................................... 28 Improved Data Center Power Consumption Management ............................................ 29 Improve the Power Efficiency of Individual Servers .................................................... 29 Processor Power Management ................................................................................... 30 Storage Power Management ...................................................................................... 31 Additional Power Saving Features .............................................................................. 32 Measure and Manage Power Usage Across the Organization .................................... 32 Remote Manageability of Power Policy ...................................................................... 33 In-Band Power Metering and Budgeting .................................................................... 33 New Additional Qualifier for the Designed for Windows Server 2008 R2 Logo Program .................................................................................................................. 34 Remote Administration .................................................................................................. 35 Reduced Administrative Effort for Interactive Administrative Tasks ............................... 35 Command-line and Automated Management ............................................................... 36 Remote Management ................................................................................................. 37 Improved Security for Management Data .................................................................. 38 Enhanced Graphical User Interfaces ........................................................................... 38 Extended Scripting Functionality ................................................................................ 39 Portability of Windows PowerShell Scripts and Cmdlets ............................................ 39 Improved Identity Management .................................................................................... 40 Improvements for All Active Directory Server Roles ................................................... 40 Improvements in Active Directory Domain Services ................................................... 40 Improvements in Active Directory Federated Services ............................................... 42 Improved Compliance with Established Standards and Best Practices .......................... 42 Web ................................................................................................................................... 42 Reduced Effort to Administer and Support Web-based Applications ............................ 43 Reduced Support and Troubleshooting Effort ............................................................ 46 Improved FTP Services ................................................................................................... 47 Ability to Extend Functionality and Features .................................................................. 48 Improved .NET Support .................................................................................................. 49 Improved Application Pool Security ............................................................................... 49 IIS.NET Community Portal .............................................................................................. 49 Solid Foundation for Enterprise Workloads ....................................................................... 50 Improved Scalability, Reliability, and Security ................................................................ 50 Increased Processor Performance and Memory Capacity ........................................... 50 Improved Application Platform Security ..................................................................... 51 Availability and Scalability for Applications and Services ........................................... 52 Improved Performance and Scalability for Applications and Services ........................ 54 Improved Storage Solutions ....................................................................................... 56 Improved Protection of Intranet Resources ................................................................ 59 Improved Management of File Services ..................................................................... 60 Improvements in Backup and Recovery ...................................................................... 63 Improved Security for DNS Services ........................................................................... 67 Better Together with Windows 7.................................................................................... 67 Simplified Remote Connectivity for Corporate Computers ........................................ 68 Secured Remote Connectivity for Private and Public Computers ............................... 86 Improved Performance for Branch Offices .................................................................. 88 Improved Security for Branch Offices ......................................................................... 90 Improved Efficiency for Power Management .............................................................. 91 Virtualized Desktop Integration .................................................................................. 91 Higher Fault Tolerance for Connectivity Between Sites and Locations ....................... 92 Protection for Removable Drives ................................................................................ 92 Prevention of Data Loss for Mobile Users .................................................................. 93 Summary ........................................................................................................................... 93 Introduction to Windows Server® 2008 R2 Overview Windows Server 2008 R2 is the latest version of the Windows Server operating system from Microsoft.. Using this Guide This guide is designed to provide you with a technical overview of the new and improved features in Windows Server 2008 R2. The guide is divided into the following key technical investments that are provided in Windows Server 2008 R2:. Windows Server 2008 R2 has been specifically designed to support increased workloads with less resource utilization on server computers. Windows Server 2008 R2 supports these increased workloads while enhancing reliability and security. Better Together With Windows® 7. Windows Server 2008 R2 includes technology improvements designed with Windows 7 enterprise users in mind, augmenting the network experience, security, and manageability. As you read each section, you can identify which Windows Server 2008 R2 features and capabilities will help you create solutions for your organization. You can also see how Windows Server 2008 R2 can help you manage and protect your existing solutions. Virtualization Virtualization is a huge part of today‘s data centers. The operating efficiencies offered by virtualization allow organizations to dramatically reduce the operations effort and power consumption. Windows Server® 2008 R2 provides the following virtualization: Server and desktop virtualization provided by Hyper-V™ in Windows Server 2008 R2. Hyper-V in Windows Server 2008 R2 is a micro-kernelized hypervisor which manages a server‘s system resources when hosting virtualized guests. Server virtualization allows you to provide a virtualized environment for operating systems and applications. When used alone, Hyper-V is typically used for server virtualization. When Hyper-V is used in conjunction with Virtual Desktop Infrastructure (VDI), Hyper-V is used for desktop virtualization. Session virtualization. Virtualizes a processing environment and isolates the processing from the graphics and IO, making it possible to run an application in one location but have it be controlled in another. Session virtualization may allow you to remotely access only a single application, or it may present you with a complete desktop offering multiple applications. Note: There are other types of virtualization which are not discussed in this guide, such as application virtualization provided by Microsoft Application Virtualization version 4.5. For more information on all Microsoft virtualization products and technologies, see ―Microsoft Virtualization: Home‖ at. Server and Desktop Virtualization with Hyper-V Beginning with Windows Server 2008, server virtualization via Hyper-V technology has been an integral part of the operating system. A new version of Hyper-V, is included as a part of Windows Server 2008 R2. Page 2 Microsoft® Hyper-V in R2 supports single and multi-core x64 processors and requires 64-bit machines with AMD-V- or Intel Virtualization Technology-enabled processors. For a complete list of supported guest operating systems please see:;EN-US;954958. There are two manifestations of the Hyper-V technology: Hyper-V is the hypervisor-based virtualization feature of Windows Server 2008. Microsoft Hyper-V Server is the hypervisorbased server virtualization product that allows customers to consolidate workloads onto a single physical server. Hyper-V includes numerous improvements for creating dynamic virtual data centers, including: Increased availability for moving virtual machines within virtual data centers. Increased availability for adding and removing virtual machine storage. Improved management of virtual data centers. Simplified method for physical and virtual computer deployments by using .vhd files. Increased Availability for Moving Virtual Machines One of the most important aspects of any data center is providing the highest possible availability for systems and applications. Virtual data centers are no exception to the need for high availability. Hyper-V in Windows Server 2008 R2 includes the much-anticipated Live Migration feature, which allows you to move a virtual machine between two computers running Hyper-V without any interruption of service. Comparison of Live Migration and Quick Migration Quick Migration is a feature found in both Windows Server 2008 Hyper-V and Windows Server 2008 R2 Hyper-V. By contrast, Live Migration is only in Windows Server 2008 R2. The primary difference between Live Migration and Quick Migration is that a Live Migration moves virtual machines without any perceived downtime or service interruption. The requirements for Live Migration and Quick Migration are very similar. Both Live Migration and Quick Migration can be initiated by: The System Center Virtual Machine Manager console, if Virtual Machine Manager is managing the cluster nodes that are configured to support Live Migration or Quick Migration. Note: Support for Live Migration will be included in System Center Virtual Machine Manager 2008 R2. Page 3 The Failover Cluster Management console, where an administrator can initiate a live migration. A Windows Management Instrumentation (WMI) script. Integration of Live Migration and Failover Clustering Live Migration has two core requirements: First it requires failover clustering in Windows Server 2008 R2; and second, it needs shared disk storage between cluster nodes. The shared disk storage can be provided by a vendor-based solution or by using the Cluster Shared Volumes feature in failover clustering in Windows Server 2008 R2. For more information on Cluster Shared Volumes, see ―Improvements for Virtual Machine Management‖ in ―Improved Availability for Applications and Services‖ later in this guide. disk storage, such as provided by Cluster Shared Volumes or vendorbased solutions. The .vhd files for the virtual machines to be moved by Live Migration must be stored on the same shared disk storage. The following figure illustrates a typical Hyper-V and failover cluster configuration for supporting Live Migration. Page 4 Figure 1: Typical configuration to support Live Migration Live Migration Process The Live Migration process is performed in the following steps: 1. A Hyper-V administrator initiates a Live Migration between the source and target cluster node. Page 5 2. A duplicate virtual machine is created on the target cluster node, as illustrated in the following figure. The source cluster node creates a TCP connection with the target cluster node. This connection is used to transfer the virtual machine configuration data to the target cluster node. A skeletal virtual machine VM is created on the target cluster node and memory is allocated for the destination virtual machine. Figure 2: Creation of target virtual machine on target cluster node 3. All of the current memory in the source virtual machine is copied to the target virtual machine. Page 6 The memory assigned to the source virtual machine is copied over the network to the target virtual machine. This memory is referred to as the working set of the source virtual machine. A page of memory is 4 kilobytes in size. Figure 3: Initial copy of memory from source to target virtual machine Page 7 4. Clients connected to the source virtual machine continue to run on the source virtual machine and create memory pages. 5. Hyper-V tracks the memory pages and continues an iterative copy of those pages until all memory pages are copied to the target virtual machine, as illustrated in the following figure. Page 8 Figure 4: Iterative copy of memory from source to target virtual machine 6. When the working set of the source virtual machine is copied, the source virtual machine is paused and the remaining memory pages are copied. Page 9 Note: The live migration process may be cancelled at any point before this stage of the migration. During this stage of the migration, the network bandwidth available between the source and target cluster nodes is critical to the speed of the migration. Live Migration requires a minimum 1 Gb/E network between cluster nodes and can take advantage of 10 Gb/E networks for even faster migrations. The faster the transmission speed between the cluster nodes, the more quickly the migration will complete. 7. The storage handles to the .vhd files or pass-through disks are transferred from the source cluster node to the target cluster node. 8. When all memory pages are copied to the target virtual machine and the storage handles are moved, the target machine is started and the clients are automatically redirected to the target virtual machine and the source virtual machine is deleted, as illustrated in the following figure. Page 10 Figure 5: Final configuration after Live Migration completes 9. Force physical network switches to re-learn location of migrated virtual machine. Page 11 The Live Migration process will complete in less time than the TCP timeout interval for the virtual machine being migrated. While TCP timeout intervals vary based on network topology and other factors, most migrations will complete within a few seconds. The following variable may affect migration speed: Network available bandwidth between source and destination hosts Hardware configuration of source and destination hosts Load on source and destination hosts Network available bandwidth between Hyper-V hosts and shared storage Increased Availability for Addition and Removal of Virtual Machine Storage Windows Server 2008 R2 Hyper-V supports hot plug-in and hot removal of virtual machine storage. By supporting the addition or removal of Virtual Hard Drive (VHD) files and passthrough disks while a virtual machine is running, Windows Server 2008 R2 Hyper-V makes it possible to quickly reconfigure virtual machines to meet changing requirements. This feature allows the addition and removal of both VHD files and pass-through disks to existing SCSI controllers of virtual machines running the following guest operating systems: Windows Server 2003 x86 and x64 editions Windows® XP x64 edition Windows Server 2008 and Windows Server 2008 R2 x86 and x64 editions Windows Vista® x86 and x64 editions Windows 7® x86 and x64 editions Note: Hot addition and removal of storage requires the guest operating system to run the Hyper-V Integration Services that is supplied with Windows Server 2008 R2. Improved Management of Virtual Datacenters Even with all the efficiency gains with virtualization, virtual machines still need to be managed. The number of virtual machines tends to proliferate much faster than physical computers because machines typically do not require a hardware acquisition. So, management of virtual data centers is even more imperative than ever before. Windows Server 2008 R2 includes the following improvements that will help you manage your virtual data center: Page 12 Reduced effort for performing day-to-day Hyper-V administrative tasks by using management consoles. Improved management of multiple Hyper-V servers in a virtual data center environment by using System Center Virtual Machine Manager 2008. Reduced Administrative Effort The Hyper-V Management console and Failover Cluster Management can be used to manage Live Migrations out of the box. But for data centers intent on leveraging the real power behind Hyper-V in R2 and Live Migration, the Microsoft System Center Virtual Machine Manager (SCVMM) adds significant value in terms of reducing overall administrative effort. VMM can manage both Quick Migrations as well as Live Migrations and has tools that make managing disparate Hyper-V hosts easier as well. This combination gives administrators a one-stop shop when it comes to managing a dynamically changing data center. Additionally, VMM can output Windows PowerShell scripts for all console tasks, which means administrators will also be able utilize the automation advantages of PowerShell without eating a steep learning curve or being programming aficionados. Last, SCVMM also contains an advanced reporting tool that administrators can use in dense virtualization environments to streamline decision making across the breadth of VM management, including performance, placement and purchasing. Improved Management with System Center Virtual Machine Manager 2008 R2 Hyper-V includes the necessary management tools to manage individual server computers running Hyper-V and the virtual machines running on those computers. System Center Virtual Manager 2008 helps you manage your entire virtual data center as an administrative unit. Some of the improved Hyper-V management features provided by System Center Virtual Machine Manager 2008 include: Extended Support for Hyper-V. System Center Virtual Machine Manager (VMM) 2008 supports all Hyper-V functionality while providing VMM-specific functions, such as the Intelligent Placement, the Self-Service Portal, and the Integrated Library. Automated responses to virtual machine performance problems and failures. The Performance and Resource Optimization (PRO) feature in VMM 2008 can dynamically respond to failure scenarios or poorly configured components that are identified in hardware, operating systems, or applications. When combined with PRO-enabled Page 13 Management Packs and System Center Operations Manager 2007, you can receive automatic notifications if a virtual machine, operating system, or application is unhealthy. Improved availability for virtual machines. VMM 2008 includes expanded support for failover clusters that improves the high-availability capabilities for managing missioncritical virtual machines. VMM 2008 is now fully cluster-aware, meaning it can detect and manage Hyper-V host clusters as a single unit. New user-friendly features, such as automatic detection of added or removed virtual hosts and designating high-availability virtual machine with one click, which helps reduce your administrative effort. Quick Storage Migration. Quick Storage Migration enables migration of a VM’s storage both within the same host and across hosts while the VM is running with a minimum of downtime, Simplified Method for Physical and Virtual Computer Deployments Historically, deploying operating systems and applications to physical and virtual computers has used different methods. For virtual computers, the .vhd file format has become a de facto standard for deploying and interchanging pre-configured operating systems and applications. Windows Server 2008 R2 also supports the ability to boot a computer from a .vhd file stored on a local hard disk. This allows you to use pre-configured .vhd files for deploying virtual and physical computers. This helps reduce the number of images that you need to manage and provides an easier method for your testing deployment prior to deployment in your production environment. Hyper-V Processor Compatibility Mode for Live Migration As the scope of virtualization increases rapidly in today‘s enterprise, customers have been chafing against hardware restrictions when performing VM migrations across physical hosts. With previous versions of Hyper-V, such migrations could essentially be performed only across hosts with an identical CPU architecture. Windows Server 2008 R2 Hyper-V, however, introduces a new capability, dubbed processor compatibility mode for live migration. Processor compatibility mode enables IT administrators to freely migrate VMs across physical hosts with differing CPU architectures as long as those architectures are support hardware assisted virtualization and within the same CPU product family, meaning Intel-to-Intel or AMD-to-AMD, but not Intel-to-AMD or vice versa. Processor compatibility mode was developed to address three basic customer scenarios: Page 14 1. A virtual machine running on Host A must be moved to Host B for effective load balancing across the physical hosts. 2. In a host cluster of identical processors, one has a hardware failure. The systems administrator purchases another server and adds it to the cluster; however the new server is using newer CPU technology than the original cluster members, yet must still support VM migrations. 3. A virtual machine running on Host A is saved. Later, the systems administrator needs to restore that VM to active memory on another Hyper-V host, which may not have the identical CPU configuration as the original host. How Processor compatibility mode works When a Virtual Machine (VM) is started on a host, the Hypervisor exposes the set of supported processor features available on the underlying hardware of that host to the VM. These sets of processor features are called the guest visible processor features. This set of processor features is available to the VM until the VM is restarted. When a running VM is migrated to another host, Hyper-V first compares verifies processor features currently available to the VM are also available on the destination host. If the destination processor does support all of the features available to the VM, the migration will fail. ―hidden‖ by the Hypervisor by intercepting a VM‘s CPUID instruction and clearing the returned bits corresponding to the hidden features. When a VM in a processor compatibility mode is started, the following processor features are hidden from the VM: Host running AMD based processor SSSE3, SSE4.1, SSE4.A, SSE5, POPCNT, LZCNT, Misaligned SSE, AMD 3DNow!, Extended AMD 3DNow! Host running Intel based processor SSSE3, SSE4.1, SSE4.2, POPCNT, Misaligned SSE, XSAVE, AVX Page 15. Improved Virtual Machine Memory Management Windows Server 2008 R2 SP1 introduces Hyper-V Dynamic Memory. Dynamic memory is a memory management enhancement for Hyper-V that enables customers to increase the efficiency of memory usage. By dynamically and securely adjusting the distribution of memory among virtual machines, Dynamic Memory helps enable the potential for higher consolidation ratios per physical host server. Dynamic memory dynamically increases and decreases the memory allocated to VMs based on usage. This results in more efficient utilization of memory and facilitates greater consolidation ratios of virtual machines. Dynamic memory is designed for production use and enables customers to obtain benefits on their servers with predictable performance and consistent scalability for their production deployment environments.: Windows Server 2008 R2 with the Hyper-V server role installed Page 16 Microsoft Hyper-V Server 2008 R* Page 17 Terminal Services Becomes Remote Desktop Services Session Virtualization Terminal Services is one of the most widely used features in previous versions of Windows Server. Terminal Services makes it possible to remotely run an application or an entire desktop in one location but have it be controlled and managed in another. Microsoft has evolved this concept considerably in Windows Server 2008 R2, so we‘ve decided to rename Terminal Services to Remote Desktop Services (RDS) to better reflect these exciting new features and capabilities. The goal of RDS is to provide both users and administrators with both the features and the flexibility necessary to build the most robust access experience in any deployment scenario. In addition to enabling a virtual desktop infrastructure (VDI), Remote Desktop Services in Windows Server 2008 R2 covers the same basic technology features as did Terminal Services, which is now referred to as session virtualization. The table below summarizes the new names for TS-to-RDS technologies in R2. Table 1: New Remote Desktop Services Names for Corresponding Terminal Services Names Page 18 Terminal Services name Remote Desktop Services name Terminal Services RDS Session Virtualization Terminal Services RemoteApp RemoteApp Terminal Services Gateway Remote Desktop Gateway Terminal Services Session Broker Remote Desktop Connection Broker Terminal Services Web Access Remote Desktop Web Access Terminal Services CAL Remote Desktop Services CAL Terminal Services Easy Print Remote Desktop Easy Print Remote Desktop Services and Virtual Desktop Infrastructure To expand the Remote Desktop Services feature set, Microsoft has been investing in the Virtual Desktop Infrastructure, also known as VDI, in collaboration with our software and hardware partners. VDI is a centralized desktop delivery architecture, which allows customers to centralize the storage, execution and management of a Windows desktop in the data center. It enables Windows 7 Enterprise and other desktop environments to run and be managed in virtual machines on a centralized server. Increasingly businesses aim to enable their employees and contractors to work from home or from an offshore, outsourced facility. These new work environments provide better flexibility, cost control and lower environmental footprint but increase demand for security and compliance so that precious Corporate data is not at risk. To answer these challenges, Microsoft has updated the Terminal Services Connection Broker, now called Remote Desktop Connection Broker. The new Remote Desktop Connection Broker extends the Session Broker capabilities already found in Windows Server 2008, and creates a unified admin experience for traditional session-based remote desktops and new virtual machine-based remote desktops. The two key deployment scenarios supported by the Remote Desktop Connection Broker are persistent (permanent) VMs and pooled VMs. In the case of a persistent VM, there is a oneto-one mapping of VMs to users; each user is assigned a dedicated VM which can be personalized and customized, and which preserves any changes made by the user. Today, most early adopters of VDI deploy persistent VMs as they provide the greatest flexibility to the end user. In the case of a pooled VM, a single image is replicated as needed for users; Page 19 user state can be stored via profiles and folder redirection, but will not persist on the VM once the user logs off. In either case, the in-box solution supports storage of the image(s) on the Hyper-V host.‘s load). RDS addresses all these challenges by incorporating the following features: Improved User Experience For both VDI and traditional session virtualization (formerly know as Terminal Services) the quality of user experience is more important than ever before. Windows Server 2008 R2 greatly improves the end user experience for VDI and session virtualization through new Remote Desktop Protocol capabilities. New capabilities enabled with Windows Server 2008 R2 SP1 and Microsoft RemoteFX help create a local-like user experience for remote users from any client device, rich or thin.: Extends Remote Desktop Services to enable a virtual desktop infrastructure (VDI). The in-box Remote Desktop Services capability is targeted at low-complexity deployments and as a platform for partner solutions, which can extend scalability and manageability to address the needs of more demanding enterprise deployments. The scope of the VDI architecture can include the following technologies and licenses to provide a comprehensive solution: Hyper-V Live Migration System Center Virtual Machine Manager Microsoft Application Virtualization in Microsoft Desktop Optimization Pack (MDOP). Page 20 Microsoft RemoteFX Windows Virtual Desktop Access (VDA) licensing Provides simplified publishing of, and access to, remote desktops and applications. The feeds described above provide access in Windows 7, but using the new RemoteApp & Desktop Web Access, users will also be able connect to these resources from Windows Vista and Windows XP using a web page. Improved integration with Windows 7 user interface. Once accessed, RAD-delivered programs and desktops show up in the Start Menu of Windows 7 with the same look and feel of locally installed applications. A new System Tray icon shows connectivity status to all the remote desktop and RemoteApp connections to which the user is currently subscribed. The experience is designed so that many users won‘t be able to tell the difference between a local and remote application. Page 21 Figure 6: Updates to the Remote Desktop Services Connection Broker Improved User Experience Through New Remote Desktop Protocol Capabilities These new capabilities, enabled with Windows Server 2008 R2 in combination with Windows 7, improve significantly the experience of remote users, making it more similar to the experience enjoyed by users accessing local computing resources. These improvements include: Multimedia Redirection: Provides high-quality multimedia by redirecting multimedia files and streams so that audio and video content is sent in its original format from the server to the client and rendered using the client‘s local media playback capabilities. True multiple monitor support: Enables support for up to 10 monitors in almost any size, resolution or layout with RemoteApp and remote desktops; applications will behave just like they do when running locally in multi-monitor configurations. Audio Input & Recording: VDI supports any microphone connected to a user‘s local machine, enables audio recording support for RemoteApp and Remote Desktop. Aero® Glass support: VDI provides users with the ability to use the AeroGlass UI for client desktops; ensuring that remote desktop sessions look and feel like local desktop sessions. Enhanced bitmap acceleration: 3D and other rich media applications such as Flash or Silverlight™. NEW in Windows Server 2008 R2 Service Pack 1: Microsoft RemoteFX (see page 28): Page 22‘s desktop. When an admin adds an application or update it automatically appears on users‘ Start menu and via that user‘s 7: Remote Desktop Services Web Access expands RDS features cross-OS Page 23 Administrators faced with larger RAD deployment scenarios will also find additional management features in Windows Server 2008 R2‘s Remote Desktop Services aimed at improving the management experience for all existing scenarios previously addressed by Terminal Services as well as the exciting new scenarios available via RAD.‘s Terminal Services to ensure that MSI install packages can be installed normally and that per-user install settings are correctly propagated. The updates also remove the need to put the server in ‗install mode‘, meaning users no longer need to be logged off during RAD management operations. Remote Desktop Gateway. RDG securely provides access to RAD resources from the Internet without the need for opening additional ports or the use of a VPN. RDG provides this by tunneling RDP over HTTPS and incorporating several new security features: Silent Session Re-authentication. The Gateway administrator can now configure the RDG to run periodic user authentication and authorization on all live connections. This ensures that any changes to user profiles are enforced. For users whose profiles haven‘t. Consent Signing. If your business demands that remote users adhere to legal terms & conditions before accessing corporate resources, the consent signing feature helps you do just that. Page 24 Administrative messaging. The Gateway also provides the flexibility to provide broadcast messages to users before launching any administration activities such as maintenance or upgrades. Partners and Independent Software Vendors (ISVs) also get tools with the new service to more easily enable third-party software manufacturers to build RAD‘t use RDP or remoting protocols. This provides a single UI and point of discoverability for any service. Session broker extensibility. The session. Improved User Experience When Accessing Media Rich Content and hardware-accelerated graphics applications Trends in current IT environments include faster network speeds, massively parallel processors, and an increasing diversity of client devices. The user experience for today’s user includes increased richness in graphics, including 3D user interfaces, video, animations, and other rich media content. Also, hardware acceleration for these user experiences, especially of 3D business applications such as Office 2010 or Bing3D maps, is becoming common place. Microsoft RemoteFX is a feature in Windows Server 2008 R2 SP1 Remote Desktop Services that enables connected users to access media-rich virtual and session-based desktops over the network from a broad range of client devices. RemoteFX helps the user experience for Page 25 remote session in Remote Desktop Services more closely mirror the user experience on a local computer running Windows. RemoteFX delivers value in the following areas: 3D Graphical support for VDI (RDVH) solutions using Virtual GPU. The enhanced features in RemoteFX allow remote users to have access to all the user experience features in all Windows operating systems, especially Windows 7. This includes the 3D aspects of Aero Glass and other DirectX/Direct3D applications. The following figure illustrates the user experience in Remote Desktop Services in Windows Server 2008 R2 SP1 for a Windows 7 guest operating system in VDI. Remote users are able to use all the graphical features that Windows 7 provides. Figure 8: User experience in Windows 7 VDI with Windows Server 2008 R2 SP1 The server running Windows Server 2008 R2 SP1 renders the graphics content locally using its graphic processing unit (GPU) for the Windows 7 Enterprise or Ultimate VDI instances and then sends the rendered bitmap content to the client. Ultra-thin clients or Page 26 even LCD panels can be used to display RemoteFX-based content because the client is only displaying the content and not performing the rendering itself. Improved efficiency over RDP for demanding remote workloads. The server-side encode of the graphical content can be performed more efficiently by the RemoteFX codec within RDP and handled in three ways: Software-based encode codec (Both RDSH and RDVH). The processors in the server encodes the graphics by running a software-implementation of the RemoteFX codec. This method is the most demanding on the server processor resources. Graphics processor-based encode codec (RDVH only). The graphics processor on a graphics adapter in the server encodes the RemoteFX codec. This method reduces the demands on the server processor resources, but increases the demands on the graphics processor. This can only be done in a VDI (RDVH) environment, not RDSH. RemoteFX ASIC-based encode codec (Both RDSH and RDVH).. A RemoteFX Application-Specific Integrated Circuit (ASIC) is a hardware implementation of the RemoteFX codec. This method is the least demanding on the server processor resources and the graphic processor resources. This will enable a similar benefit to the server that TCP offload provides today for TCP/IP networking. Improved support for a broader range of devices. Because of the reduced hardware requirements at the endpoint, RemoteFX provides support for a broader range of devices, including Windows-based computers, traditional thin clients, ultra-light thin clients, mobile devices, and dedicated access devices (such as an LCD display). Software-based decode codec. Like the server protocol encode, RemoteFX also provides a decode codec. This will result in an updated MSTSC client and available on all versions of Windows 7 as RDP 7.1 RemoteFX ASIC-based decode codec. A RemoteFX Application-Specific Integrated Circuit (ASIC) is a hardware implementation of the RemoteFX decode codec. This method is also useful when wanting to create ultra-light solid state thin clients or specialized RemoteFX-enabled devices Generic USB Redirection for VDI (RDVH only). RemoteFX enables generic redirection of nearly any USB device in a Windows 7 Enterprise or Ultimate VDI session which can enable support for multifunction printers and other mainstream USB devices. RemoteFX can provide enhanced features for VDI solutions and for session virtualization. In VDI (RDVH) solutions, RemoteFX provides an improved user experience for users running Page 27 Windows 7 and applications in virtualized environments on Hyper-V. In remote session (RDSH) solutions, RemoteFX provides an improved user experience for remote desktop sessions. Note: Session Virtualization with RemoteFX supports all content types except for 3D content. Management The ongoing management of servers in the data center is one of most time consuming task facing IT professionals today. Today‘s combination of virtual and physical management needs can make this an even more daunting task without proper planning and tools, because management strategies must support the management of both physical and virtual environments. Additionally, these management strategies must address and track power consumption and green IT policies. Because of these customer challenges, a key design goal for Windows Server® 2008 R2 is to reduce the day-to-day management chores of Windows Server 2008 R2 as well as to ease the administrative effort for common day-to-day operational tasks. A final but critical design component was that administrative tasks should be doable either on the server locally or remotely. Thus, the overall management improvements in Windows Server 2008 R2 include the following: Improved data center power consumption management Improved remote administration Reduced administrative effort for administrative tasks performed interactively Enhanced command-line and automated management by using Windows PowerShell™ version 2.0 Improved identity management provided by Active Directory® Domain Services (AD DS) and Active Directory Federated Services Improved compliance with established standards and best practices Page 28: Reduced power usage of individual servers o A new PPM engine o Storage power management o Additional incremental power saving features. Page 29 Figure 9: Power savings with Windows Server 2008 R2 Processor Power Management The PPM engine in Windows Server 2008 R2 has been re-written and improved. It now provides the ability to fine-tune the processor‘s. Page 30 Storage Power Management Another strategy for reducing power used by individual servers is to centralize their storage by using a Storage Area Network (SAN), which has a higher storage-capacity-to-powerconsumption. Windows Server 2008 R2 also supports the ability to boot from a SAN, which eliminates the need for local hard disks (local storage) in the individual server computers and decreases power consumption as a result (see the following figure). Page 31 Figure 10:states, in which the processor consumes very little energy but requires time to return to an operational state. Most of these technologies can also be leveraged in virtualization scenarios, letting you maximize the power efficiency of your virtualized environments as well as your physical systems. Measure and Manage Power Usage Across the Organization Windows Server 2008 R2 also helps provide businesses with the capability to better measure and manage power consumption, both locally and remotely across the enterprise. In Page 32 conjunction with server OEMs, Microsoft is pursuing an ACPI standards-based™. In-Band Power Metering and Budgeting The new power features introduce new opportunities for managing power consumption. An administrator can use the performance monitor on a server to view the moment-by-moment power consumption, or, in a more likely scenario, the IT administrator can write a script or use Microsoft® System Center to centrally collect and monitor power consumption data Page 33™. hardware they are purchasing supports the additional power-saving features can look for the Enhanced Power Management AQ. Page 34 Remote Administration Remote administration of server computers is essential to any efficient data center. It is very rare that server computers are administered locally. Windows Server 2008 R2 has a number of improvements in remote administration, including the following: Remote management through graphical management consoles. Server Manager has been updated to allow remote administration of servers. In addition, many of the management consoles have improved integration with Server Manager and as a result, support remote management scenarios. For more detailed information about each management console, see ―Management Console Improvements‖ later in this guide. Improved remote management from command-line and automated scripts. Windows PowerShell version 2.0 has a number of improvements for remote management scenarios. These improvements allow you to run scripts on one or more remote computers or allow multiple IT professionals to simultaneously run scripts on a single computer. For more detailed information about these remote management scenarios, see ―Enhanced Remote PowerShell Scenarios‖ later in this guide. Reduced Administrative Effort for Interactive Administrative Tasks Many of the management consoles used to manage Windows Server 2008 R2 have been updated or completely redesigned to help reduce administrative effort. Some of the prominent updated and redesigned management consoles are listed in the following table with a description of the improvements. Page 35 Table 2: Updated or Redesigned Management Consoles in Windows Server 2008 R2 Management console Improvements Server Manager Provides support for remote management for computers. Improves integration with many role and role services management console. Based on administrative capabilities provided by Windows PowerShell cmdlets. Task driven user interface. Based on administrative capabilities provided by Windows PowerShell cmdlets. Task driven user interface. Improved tools for day-to-day tasks Tight integration with System Center Virtual Machine Manager for managing multiple Hyper-V servers. Active Directory Administrative Center Internet Information Service Hyper-V Management Console Command-line and Automated Management The Windows PowerShell version 1.0 scripting environment was shipped with Windows Server 2008. Windows Server 2008 R2 includes Windows PowerShell version 2.0, which has a number of improvements over version 1.0, including the following: Improved remote management Improved security for management data including state and configuration information Enhanced graphical user interfaces for creating Windows PowerShell scripts, debugging them, and viewing Windows PowerShell script output Extended scripting functionality supports the creation of more powerful scripts with less development effort. Improved portability of Windows PowerShell scripts and cmdlets between multiple computers. Page 36 Remote Management One of the key benefits in Windows PowerShell version 2.0 is the ability to run scripts remotely with remote management by using the PowerShell Remoting feature. PowerShell Remoting allows you to automate many repetitive administrative tasks and then run those tasks on multiple computers. Running remote scripts is now implicit in Windows PowerShell version 2.0. Windows PowerShell Remote Management Requirements The PowerShell Remoting feature relies on Windows Remote Management (WSManagement) service. In order for PowerShell Remoting to work, the WS-Management service must be installed and running on the remote computer. You can verify that the WSManagement service is running by running the following Windows PowerShell cmdlet: PS> get-service winrm You can configure the Windows Remote Management (WS-Management) service settings, by running the following Windows PowerShell script: & $pshome\Configure-Wsman.ps1 Note: This script does not start or stop the WS-Management service. So you will need to restart the WS-Management service for the configuration settings to take effect. Windows PowerShell Remote Management Scenarios Windows PowerShell version 2.0 supports the following remote management scenarios: Many IT professionals running scripts on a single computer. This scenario is also known as the fan-in scenario. In this scenario, each IT professional could have a customized level of access based on their credentials. One IT professional running scripts on multiple computers from a single console. This scenario is also known as the fan-out scenario. In this scenario, the IT professional could have different levels of access based on their credentials. One IT professional interactively running scripts on a single remote computer. This scenario is also known as the one-to-one scenario. Run PowerShell scripts as a background job. This scenario allows you to run a Windows PowerShell command or expression asynchronously (in the background) without interacting with the console. The command prompt returns immediately and you can query for the job results interactively. You can run background jobs on a local or remote computer. Page 37 Improved Security for Management Data You can limit the access to management data and the ability to run commands, scripts, and other language elements by using Constrained Runspaces. Constrained Runspaces allow creation of Windows PowerShell Runspaces with a set of Constraints. Constraints allow you to specify the restrictions for each PowerShell Runspace. Constrained Runspaces allow you to grant lower privileged IT professionals, such as tier 1 or tier 2 help desk personnel, the ability to examine operational state or configuration but not change operational state or configuration. Enhanced Graphical User Interfaces Another key improvement in Windows PowerShell version 2.0 is the new graphical user interfaces. These graphical user interfaces allow you to: Create and debug Windows PowerShell scripts by using Graphical PowerShell. View Windows PowerShell script output by using the Out-GridView cmdlet. Create and Debug PowerShell Scripts with Graphical PowerShell Graphical PowerShell provides a graphical user interface that allows you to interactively create and debug Windows PowerShell scripts within an integrated development environment similar to Visual Studio®. Graphical PowerShell includes the following features: Syntax coloring for Windows PowerShell scripts (similar to syntax coloring in Visual Studio) Support for Unicode characters Support for composing and debugging multiple Windows PowerShell scripts in a multitabbed interface Ability to run an entire script, or a portion a script, within the integrated development environment Support for up to eight PowerShell Runspaces within the integrated development environment Note: Graphical PowerShell feature requires Microsoft .NET Framework 3.0. View Windows PowerShell Scripts Output with Out-GridView Cmdlet The new Out-GridView cmdlet displays the results of other commands in an interactive table, where you can search, sort, and group the results. For example, you can send the results of a Page 38 get-process, get-wmiobject, or get-eventlog command to out-gridview and use the table features to examine the data. Note: Out-gridview cmdlet feature requires Microsoft .NET Framework 3.0. Extended Scripting Functionality Windows PowerShell 2.0 includes the ability to extend PowerShell scripts functionality by using the following features: Create advanced functions. Advanced functions allow you to write wrappers around existing cmdlets. Windows PowerShell 2.0 searches for functions first and then cmdlets. This allows advanced functions to take precedence over cmdlets. Call .NET application programming interfaces (APIs). This feature allows you to extend your Windows PowerShell with the features provided by any .NET API. Improved script debugging. Windows Windows PowerShell scripts to respond to specific events in event logs. Write cmdlets in PowerShell script. This feature allows you to write cmdlets in Windows PowerShell instead of compiled C# or VB.NET. Script Internationalization. This new feature allows Windows PowerShell script authors to write scripts that can be translated to any language supported by Windows. New and updated cmdlets. Windows PowerShell 2.0 includes over 240 new cmdlets out of the box. Get more information on these at the PowerShell Community Web site. Portability of Windows PowerShell Scripts and Cmdlets Another area of improvement for Windows PowerShell 2.0 is in the area of portability. The improved portability in Windows PowerShell 2.0 allows you to easily move PowerShell scripts and cmdlets between computers. The features that help improve the portability of Windows PowerShell scripts and cmdlets include: New module architecture. This architecture allows the packaging of cmdlets, which includes the definition and packaging of scripts. You can send these packaged modules to other administrators. Page 39 New method of storing configuration information. In Windows PowerShell version 1.0 some of the configuration was put in the registry. In Windows PowerShell version 2.0 the configuration is stored in an .xml file. The .xml file allows the configuration information to be more easily moved from one computer to another. Note: Although you must uninstall PowerShell 1.0 before installing Windows PowerShell 2.0, the registry settings are automatically migrated to the .xml file. Improved Identity Management Identity management has always been one of the critical management tasks for Windowsbased networks. The implications of a poorly managed identity management system are one of the largest security concerns for any organization. Windows Server 2008 R2 includes identity management improvements in the Active Directory Domain Services and Active Directory Federated Services server roles. Improvements for All Active Directory Server Roles Windows Server 2008 R2 includes the following identity management improvements that affect all Active Directory server roles: New forest functional level. Windows Server 2008 R2 includes a new Active Directory forest functional level. Many of the new features in the Active Directory server roles require the Active Directory forest to be configured with this new functional level. Enhanced command line and automated management. Windows PowerShell cmdlets provide the ability to fully manage Active Directory server roles. The Windows PowerShell cmdlets augment the graphical management tools and help automate repetitive management tasks. Improvements in Active Directory Domain Services The Active Directory Domain Service server role in Windows Server 2008 R2 includes the following improvements: Recovery of deleted objects. Active Directory domains now have a Recycle Bin feature that allows you to recover deleted objects. If an Active Directory object is inadvertently deleted, you can restore the object from the Recycle Bin. Improved process for joining domains. Computers can now join a domain without being connected to the domain during the deployment process, also known as an offline domain join. This process allows you to fully automate the joining of a domain during Page 40 deployment. Domain administrators create a file that can be included as a part of the automated deployment process. The file includes all the information necessary for the target computer to join the domain. Improved management of user accounts used as identity for services. One of the time consuming management tasks is to maintain passwords for user accounts that are used as identities for services, also known as service accounts. When the password for a service account changes, the services using that identity must also be updated with the new password. To address this problem, Windows Server 2008 R2 includes a new feature called managed service accounts. In Windows Server 2008 R2, when the password for a service account changes, the managed service account feature automatically updates the password for all the services that use the service account. Reduced effort to perform common administrative tasks. Windows Server 2008 R2 includes a new Active Directory Domain Services management console, Active Directory Administrative Center (as illustrated in the following figure). Figure 11: Active Directory Administrative Center management console Active Directory Administrative Center is a task-based management console that is based on the new Windows PowerShell cmdlets in Windows Server 2008 R2. Page 41 Active Directory Administrative Center is designed to help reduce the administrative effort for performing common administrative tasks. Improvements in Active Directory Federated Services Active Directory Federated Services in Windows Server 2008 R2 includes a new feature called authentication assurance. Authentication assurance allows you to establish authentication policies for accounts that are authenticated in federated domains. For example, you might require smart card authentication, or other biometric authentication, for any users in federated domains. Improved Compliance with Established Standards and Best Practices Windows Server 2008 R2 includes an integrated Best Practices Analyzer for each of the server roles. You can run the Best Practices Analyzer to provide you with a set of configuration recommendations for the server role. The Best Practices Analyzer creates a checklist within Server Manager for the role that you can use to help you perform all the configuration tasks. The following figure illustrates a sample of the recommendations from the Best Practices Analyzer for the Active Directory Domain Services server role. Web Windows Server®, and that increase both reliability and scalability. Additionally, IIS 7.5 has streamlined management capabilities and provides more ways than ever to customize your Web serving environment. Page 42 Reduced Effort to Administer and Support Webbased Applications™ Provider for IIS; Support for .NET on Server Core, enabling ASP.NET and remote management through IIS Manager. Windows Windows PowerShell within IIS 7.5 management might include: Adding/modifying/deleting sites and applications; Migrating site settings; Configuring SSL and other security settings; Page 43 Restricting access by IP address; Backing up IIS configuration and content. Enhancements to IIS Manager New features have been added to IIS Manager for the 7.5 release that make it possible to manage obscure settings such as those used for FastCGI and ASP.NET applications or adding and editing request filtering rules through a graphical user interface. Configuration Editor Configuration Editor (illustrated in the following figure) allows you to manage any configuration section available in the configuration system. Configuration Editor exposes several configuration settings that are not exposed elsewhere in IIS Manager. Figure 12: Configuration Editor user interface IIS Manager UI Extensions Utilizing the extensible and modular architecture introduced with IIS 7.0, the new IIS 7.5 integrates and enhances existing extensions and allows for further enhancements and customizations in the future. The FastCGI module, for example, allows management of Page 44 FastCGI settings while the ASP.NET module allows management of authorization and custom error settings. Request Filtering The Request Filter module in Windows Server 2008 R2 will include the filtering features previously found in URLScan 3.1. By blocking specific HTTP requests, the Request Filter module helps prevent potentially harmful requests from being processed by Web applications on the server. The Request Filtering user interface (illustrated in the following figure) provides a graphical user interface for configuring the Request Filtering module. Figure 13: Request Filtering user interface Managed Service Accounts Windows Server 2008 R2 allows domain-based service accounts to have passwords that are managed by Active Directory® Domain Services (AD DS). These new type of accounts reduce the recurrent administrative task of having to update passwords on processes running with these accounts. IIS 7.5 supports the use of managed service accounts for application pool identities. Page 45 Hostable Web Core Developers are able to service HTTP requests directly in their applications by using the hostable Web core feature. Available through a set of APIs, this feature lets the core IIS Web engine to be consumed or hosted by other applications, allowing those apps to service HTTP requests directly. The hostable Web core feature is useful for enabling basic Web server capabilities for custom applications or for debugging applications. Reduced Support and Troubleshooting Effort by using IIS Failed Request Tracing.. Page 46 Improved FTP Services, as shown in the following figure. This allows administrators to perform common administrative tasks within one common administration console. Figure 14: Integration of the FTP server administration in Internet Information Service Manager Extended support for new Internet standards. The new FTP server includes support for emerging standards, including: Improved security by supporting FTP over secure sockets layer (SSL); Page 47 that now supports all FTP-related traffic, unique tracking for FTP sessions, FTP sub statuses, an additional detail field in FTP logs, and more. Ability to Extend Functionality and Features. The following figure illustrates the placement of IIS Extensions in the IIS 7.5 architecture. Figure 15: Architecture of IIS Extensions in IIS 7.5 in Windows Server 2008 R2 Extensions can be created by Microsoft, partners, independent software vendors, and your organization. Microsoft has developed IIS Extensions since the RTM version of Windows Server 2008. These IIS Extensions are available for download from Page 48. Many of the IIS Extensions developed by Microsoft will be shipped as a part of Windows Server 2008 R2, including: IIS WebDAV; Integrated and enhanced Administration Pack; Windows PowerShell Snap-In for IIS. Improved .NET Support The .NET Framework (versions 2.0, 3.0, 3.5.1 and 4.0) is now available on Server Core as an installation option. By taking advantage of this feature, administrators can enable ASP.NET on Server Core which affords them full use of Windows PowerShell cmdlets. Additionally, .NET support means the ability to perform remote management tasks from IIS manager and host ASP.NET Web applications on Server Core as well. Improved Application Pool Security. IIS.NET Community Portal To stay current with new additions to IIS in Windows Server 2008 or Windows Server 2008 R2, make sure to visit the IIS.NET community portal (). The site includes update news, in-depth instructional articles, a download center for new IIS solutions and free advice via blogs and technical forums. Page 49 Solid Foundation for Enterprise Workloads Windows Server® 2008 R2 has been designed as an enterprise-class operating system, capable of scaling to the largest data center workloads, while helping to ensure strong security and high-availability. Windows Server 2008 R2 allows you to create solutions that can solve your most demanding technical requirements. Specifically, Windows Server 2008 R2 provides enterprise-class foundation for workloads by providing: Improved scaling, reliability, and security for all your solutions. A platform with future growth potential that will allow you to take advantage of future operating systems, such as Windows® 7. Improved Scalability, Reliability, and Security Every application is mission critical to the users that depend on the application for performing their day-to-day job functions. Any outages of services, slow performance, or compromise in security results in loss in productivity and potential damage to your organization. Windows Server 2008 R2 helps you create solutions that are able to support your mission critical applications, while helping to also ensure that you can manage your solutions with less effort than with previous operating system platforms. Windows Server 2008 R2 helps improve the scalability, reliability, and security of your solutions with the following features: Increased processor performance and memory capacity for applications. Improved application platform security for all applications running on Windows Server 2008 R2. Improved availability and scalability for applications and services. Improved security for Domain Name System (DNS) services by using the DNSSEC feature. Increased Processor Performance and Memory Capacity The improvements in computer design have resulted in modern server computers that support ever increasing number of processors and increased memory capacity. Current Page 50 server computers are only shipping with 64-bit processors, with multiple processors, and higher memory capacity than ever before. These improvements allow you to create application platforms that are able to support larger workloads, reduce rack space in your data center, reduce power consumption, provide improved reliability, and reduce your overall administrative effort. Improved Physical Processor and Memory Resources 32-bit processors impose system resource limitations that restrict your ability to handle increased workloads without investing in additional server computers. 64-bit processors allow you to support larger workloads, while minimizing the number of physical computers in your data center. Also, server consolidation by using virtualization requires 64-bit processors to provide the processing and memory resources to support higher ratios of server consolidation. To support the increased processor performance and memory capacity provided by 64-bit processors, Windows Server 2008 R2 is only available for 64-bit processor architectures. Windows Server 2008 R2 supports up to 256 logical processor cores for a single operating system instance. Increased Logical Processor Support Windows Server 2008 R2 Hyper-V™ supports up to 64 logical processors.. This increased processor support makes it possible to run even more demanding workloads on a single computer, or scale workloads to greater extremes to match changing demand. Windows Server 2008 R2 Hyper-V also supports Second-Level Address Translation (SLAT) and CPU Core Parking. SLAT uses special processor functionality available in recent Intel and AMD processors to carry out some virtual machine memory management functions, significantly reducing hypervisor processor time and saving about 1MB of memory per virtual machine. CPU Core Parking enables power savings by scheduling virtual machine execution on only some processor cores and placing the remaining processor cores in a sleep state. Improved Application Platform Security Windows Server operating systems have included the concept of server roles for a number of versions. Windows Server 2008 R2 includes even more granular definition of server roles than in previous Windows Server operating systems. This finer granularity allows you to install only the operating system components and features that you need to support your applications and services, which reduces the attack surface of your solution. Page 51 In addition, the Windows Server 2008 R2 Server Core installation option now supports more server roles, such as .NET application support, than in Windows Server 2008 RTM. The Server Core installation option further reduces the attack surface of your solution by eliminating the graphical user interface on Windows Server 2008. Additional management features for the Server Core installation option, such as improvements in Windows PowerShell™ v2.0 and PowerShell Remoting, help reduce the administrative effort for supporting solutions with the Server Core installation option. Availability and Scalability for Applications and Services Availability is a key element in every solution in your enterprise. Today most mission critical applications are running on Windows Server and those applications require high availability. Failover clustering in Windows Server 2008 R2 has many improvements that can help overall application and operating system availability, including the following: Enhanced cluster validation tool. Windows Server 2008 R2 includes a best practice analyzer test which examines the best practices configuration settings for a cluster and cluster nodes. The test runs only on computers that are currently cluster nodes. Enhanced command line and automated management. Windows PowerShell cmdlets provide the ability to fully manage failover clusters and the applications running on the cluster. The Windows PowerShell cmdlets replace cluster.exe, which provided a command-line and scriptable interface for managing failover clusters in previous versions of Windows Server. Improved performance for intermittent or slow secured network connections. There are improvements in Internet Protocol security (IPsec) reconnection time that is achieved by eliminating some of the initial handshaking when reconnecting due to intermittent or slow connections. Improved network resiliency between cluster nodes. The connectivity between cluster nodes has been revised to give clusters the ability to recover from intermittent or slow connections between cluster nodes without affecting cluster node status. Improving the monitoring of clusters, cluster nodes, and applications. Failover clustering in Windows Server 2008 R2 includes the following improvements that help in failover cluster monitoring: New performance counters that help reduce the support and troubleshooting effort for cluster-based applications. New logging channel that helps clearly identify failover clustering-related events. New support issue solutions that can be accessed directly while viewing the events for the top support issues. Page 52 Secure access to cluster monitoring and configuration information. The failover clustering Windows PowerShell provider leverages the delegated permissions available in PowerShell 2.0 to provide read-only access to cluster monitoring and configuration information. This allows you to allow less privileged IT professionals read-only access, while allowing high privileged IT professionals read and write access. For more information on delegate permissions in Windows PowerShell 2.0, see ―Improved Security for Management Data‖ in ―Management‖ earlier in this guide. Improved migration of supported cluster workloads. You can migrate cluster workloads currently running on Windows Server 2003 and Windows Server 2008 to Windows Server 2008 R2. The migration process supports: Every workload currently supported on Windows Server 2003 and Windows Server 2008, including Distributed File System Namespace (DFS-N), Dynamic Host Configuration Protocol (DHCP), DTC, File Server, Generic Application, Generic Script, Generic Service, Internet Storage Name Service (iSNS), Network File System (NFS), Other Server, Remote Desktop Session Broker, and Windows Internet Naming Service (WINS). Supports most common network configuration. Does not support rolling upgrades of clusters (cluster workloads must be migrated to a new cluster running Windows Server 2008 R2). Includes new high availability roles for failover clustering. Failover clustering in Windows Server 2008 R2 includes new high availability roles, including DFS-Replication, Hyper-V, and Terminal Services Session Broker. Improvements in cluster node connectivity fault tolerance. If a cluster node loses: Connectivity to a shared disk, the cluster node can write to the shared disk through other cluster nodes (also known as dynamic I/O redirection). Network connectivity through the primary network adapter, the cluster node can access the network through the primary network adapter of other cluster nodes. Improvements for virtual machine management. The Live Migration feature in HyperV version 2.0 allows virtual machines to be moved between failover cluster nodes without interruption of services provided by the virtual machines. The Live Migration feature requires shared disk storage between the cluster nodes. The shared disk storage can be provided by any vendor-based solution or by the new Cluster Shared Volumes feature in failover clustering. The Cluster Shared Volumes feature supports a file system that is shared between cluster nodes. This feature is implemented as a filter driver in Windows Server 2008 R2. It is manually enabled by configuring a cluster wide property in Windows PowerShell (%{$_.EnableSharedVolumes=1}). It is not supported with cluster Page 53 nodes in multiple sites. This feature leverages other failover cluster features, such as dynamic I/O redirection to maintain connectivity to disks. The Cluster Shared Volumes feature has no: Special hardware requirements. Special application requirements. File type restrictions. Directory structure or depth limitations. Special agents or additional installations. Proprietary file system (uses NTFS). For more information on the Live Migration feature, see ―Improved Management of Virtual Datacenters‖ in ―Virtualization‖ earlier in this guide.. The Windows Server 2008 R2 features that improve performance and scalability for applications and services include: Support for larger workloads by adding additional servers to a workload (scaling out). Support for larger workloads by utilizing or increasing system resources (scaling up). Increased Workload Support by Scaling Out The Network Load Balancing feature in Windows Server 2008 R2 allows you to combine two or more computers in to a cluster. You can use Network Load Balancing to distribute workloads across the cluster nodes to support larger number of simultaneous users. The Network Load Balancing feature improvements in Windows Server 2008 R2 include: Improved support for applications and services that require persistent connections. Enhanced command line and automated management for Network Load Balancing clusters. Improved health monitoring and awareness for applications and services running on Network Load Balancing clusters. Page 54 Improved Support for Applications and Services that Require Persistent Connections The IP Stickiness feature in Network Load Balancing allows you to configure longer affinity between client and cluster nodes. By default, Network Load Balancing distributes each request to different nodes in the clusters. Some applications and services, such as a shopping cart application, require that a persistent connection is maintained with a specific cluster node. You can configure a timeout setting for connection state to a range of hours or even weeks in length. Examples of applications and services that can utilize this feature include: Universal Access Gateway (UAG) that uses a Secure Sockets Layer (SSL)-based virtual private network (VPN). Web-based applications that maintain user information, such as an ASP.NET shopping cart application. Enhanced Command line and Automated Management Windows PowerShell cmdlets provide the ability to fully manage Network Load Balancing clusters and the applications running on the cluster. The Windows PowerShell cmdlets replace nlb.exe, which provided a command-line and scriptable interface for managing Network Load Balancing clusters in previous versions of Windows Server. These Windows PowerShell cmdlets allow you to: Create and destroy clusters. Add, remove, and control cluster nodes. Add, edit, and remove cluster virtual IP addresses and dedicated IP addresses. Provide support for local and remote management. Improved Health Monitoring and Awareness for Applications and Services The Network Load Balancing Management Pack for Windows Server 2008 R2 allows you to monitor the health of applications and services running in Network Load Balancing clusters. This allows you to identify when applications on cluster nodes or entire cluster nodes have failed and requires attention. Increased Workload. Windows Server 2008 R2 Datacenter Edition supports up to 256 logical processors. Page 55 Reduced operating system overhead for graphical user interface. In addition to reducing the attack surface of the operating system, the Server Core installation option eliminates the graphical user interface, which reduces the amount of processor utilization. The reduction in processor utilization allows more of the processing power to be used for running workloads. Improved performance for storage devices. Windows Server 2008 R2 includes a number of performance improvements for storage devices connected locally, through Internet Small Computer System Interface (iSCSI), and other remote storage solutions. For more information on these improvements in storage device performance, see ―Improved Storage Solutions‖ later in this section. Improved Storage Solutions The ability to quickly access information is more critical today than ever before. The foundation for this high-speed access is based on file services and network attached storage. Microsoft storage solutions are at the core of providing high-performance and highlyavailable file services and network attached storage. The release version of Windows Server 2008 had many improvements in storage technologies. Windows Server 2008 R2 includes additional improvements that help the performance, availability, and manageability of storage solutions. Improved Storage Solution Performance Windows Server 2008 R2 includes a number of performance improvements in storage solutions, including: Reduction in processor utilization to achieve “wire speed” storage performance. Wire speed (or wire speed) refers to the hypothetical maximum data transmission rate of a cable or other transmission medium. Wire speed is dependent on the physical and electrical properties of the cable, combined with the lowest level of the connection protocols. Windows Server 2008 RTM is able to access storage at wire speed, but at a higher processor utilization than Windows Server 2008 R2. Improved storage input and output process performance. One the primary contributors to the storage performance improvements for Windows Server 2008 R2 is the improvement in the storage input and output process, known as NTIO. The NTIO process has been optimized to reduce the overhead in performing storage operations. Page 56 Improved performance when multiple paths exist between servers and storage. When multiple paths exist to storage, you can load-balance storage operations by loadbalancing the storage requests. Windows Server 2008 R2 supports up to 32 paths to storage devices, while Windows Server 2008 RTM only supported two paths. You can configure load-balancing policies to optimize the performance for your storage solution. Improved connection performance for iSCSI attached storage. The iSCSI client in Windows Server 2008 R2 has been optimized to improve the performance for iSCSI attached storage. These improvements include: Offload iSCSI digest. This includes offloading of iSCSI initiator CRC (header and data digests) to hardware, which can result in a 20 percent reduction in processor utilization for iSCSI. The iSCSI digest offload is supported by Intel Nehalem/I7 processors. Support for NUMA IO. This allows the processing of disk IO request to be completed on the same processor on which the request was initiated. Reduction in lock contention. The core IO path for Windows Server 2008 R2 has been optimized to reduce contention for multiple IO threads running concurrently. Improved support for virtual machines. Many of the same optimizations provided for Windows Server 2008 R2 running on a physical computer are available for virtual machines. These improvements affect the network interfaces and iSCSI initiators for virtual machines. This includes support for TCP Chimney, Large Send Offload (LSO) v2, and Jumbo Frames. Each of these improvements can help increase the performance for virtual machines using the iSCSI initiator. Improved support for optimization of storage subsystem. The storage system has been designed to allow hardware vendors to optimize their storage mini-driver. For example, a vendor could optimize the disk cache for their storage mini-driver. Reduced length of time for operating system start. Chkdsk is run during the operating system start when an administrator has scheduled a scan of a disk volume or when volumes are not shutdown properly. Chkdsk performance has been optimized to reduce the length of time required to start the operating system. This allows you to recover faster in the event of an abnormal shutdown of the operating system (such as a power loss). Improved Storage Solution Availability Availability of storage is essential to all mission critical applications in your organization. Windows Server 2008 R2 includes the following improvements to storage solution availability: Page 57 Improved fault tolerance between servers and storage. When multiple paths exist between servers and storage, Windows Server 2008 R2 can failover to an alternate path if the primary path fails. You can select the failover priority by configuring the loadbalancing. Improved Storage Solution Manageability Management of the storage subsystem is another design goal for Windows Server 2008 R2. Some of the manageability improvements in Windows Server 2008 R2 include: Automated deployment of storage subsystem configuration settings. You can automate the storage subsystem configuration settings in Windows Server 2008 R2 by customizing the Unattend.xml file. Improved monitoring of storage subsystem. The storage subsystem in Windows Server 2008 R2 includes the following improvements that help in monitoring: New performance counters that help reduce the support and troubleshooting effort for storage subsystem-related issues. Extended logging for the storage subsystem, including storage drivers. Health-based monitoring of the entire storage subsystem. Improved version control of storage system configuration settings. Windows Server 2008 R2 allows you to take configuration snapshots of the storage subsystem. This allows you to perform version control of configuration settings and allows you to quickly restore to a previous version in the event of a configuration error. Reduction in complexity for connecting iSCSI. Windows Server 2008 R2 includes the ability to discover and log on to a target using the DNS name or IP address of the target. This dramatically reduces the effort required to discover and log on to iSCSI targets. Reduction in iSCSI configuration effort on Server Core installation options. There is a graphical interface for configuring iSCSI which can be started from a command line in Server Core installation options. Reduction in iSCSI remote management. You can remotely manage iSCSI by using the Windows Remote Shell or by using the Psexec. For more information on iSCSI remote management by using: Page 58 Windows Remote Shell, see. Psexec, see. Improved Protection of Intranet Resources The Network Policy Server (NPS) is a Remote Authentication Dial-In User Service (RADIUS) server and proxy and Network Access Protection (NAP) health policy server. NPS evaluates system health for NAP clients, provides RADIUS authentication, authorization, and accounting (AAA), and provides RADIUS proxy functionality. NAP is a platform that includes both client and server components to enable fully extensible system health evaluation and authorization for a number of network access and communication technologies, including: Internet Protocol security (IPsec)-protected communication 802.1X-authenticated access for wireless and wired connections Remote access virtual private network (VPN) connections Dynamic Host Configuration Protocol (DHCP) address allocation Terminal Service (TS) Gateway access The improvements to NPS in Windows Server 2008 R2 include: Automated NPS SQL logging setup. This new feature automatically configures a SQL database, required tables, and store procedure for NPS accounting data, which significantly reduces the NPS deployment effort. NPS logging improvements. The logging improvements enable NPS to simultaneously log accounting data to both a file and a SQL database, support failover from SQL database logging to file logging, and support logging with an additional file format that is structured similar to SQL logging.connected computers must have their antivirus software enabled and a different network policy that specifies that VPN-connected computers must have their anti-virus software enabled and anti-malware installed. Page 59 synchronized across multiple NPS servers running Windows Server 2008 R2. Migration of Windows Server 2003 Internet Authentication Service (IAS) servers. This feature allows you to migrate the configuration settings of an IAS server running on Windows Server 2003 to an NPS server running on Windows Server 2008 R2. Improved Management of File Services Managing data stored on file services is usually challenging because of the sheer number of files being stored on network shared folders. Because users store files on network shared with little or no restrictions, the user storing the files is the only individual who has any knowledge of the information being stored in the file and other characteristics about the file, such as sensitivity or criticality of the information in the file. Even with this knowledge, you cannot rely on the user to properly determine the proper classification of information, data archival schedule, and other common IT operations tasks. You need to be able to centrally categorize these files and then perform IT file operations based on the classification of the files. The Windows File Classification Infrastructure (FCI) in Windows Server 2008 R2 provides insight into your data to help you manage your data more effectively, reduce costs, and mitigate risks. The Windows File Classification Infrastructure allows you to establish policies for classifying files and then performing common administrative tasks based on the classification.. Page 60. Improved Policy-based Classification of Files One of the key advantages to the Windows File Classification Infrastructure is the ability to centrally manage the classification of the files by establishing classification policies. This centralized approach allows you to classify user files without requiring their intervention. You can use the Windows File Classification Infrastructure to: Define classification properties and values, which can be assigned to files on a per-server basis by running classification rules. Property types can include Boolean, date, numbered, ordered lists, and string values.. Improved File Management Tasks The Windows File Classification Infrastructure allows you to perform file management tasks based on the classifications that you define. You can use the Windows File Classification Infrastructure to help you perform common file management tasks, including: Grooming of data. You can automatically delete data by using policies based on data age or classification properties to free valuable storage space and intelligently reduce storage demand growth. Page 61 Custom Tasks. Execute custom commands based on age, location or other classification categories. For example, IT administrators are able to automatically move data based on policies for either centralizing the location of sensitive data or for moving data to a less expensive storage resource. The Windows File Classification Infrastructure allows you to automate any file management task by using the file classifications you establish for your organization. Improved Reporting on Information Stored in Files Most IT organizations have no easy method of providing information about the types of files that are stored and managed. Without classification of the files, there is minimal information that can be used to help identify the usage of the files, the sensitivity of the files, and other relevant information about the files. The Windows File Classification Infrastructure allows you to generate reports in multiple formats that can provide statistical information about the files stored on each file server. You can use the reporting infrastructure to generate information that can be used by another application (such as a comma separated variable format text file that could be imported into Microsoft® Excel®). Improved File Owner Notification of File Management Tasks Another feature of the Windows File Classification Infrastructure that reduces your administrative effort is the ability to send notifications to content owners when an automated file management task runs. For example, when files become old enough to be automatically expired, the content owners can be notified in advance and given the opportunity to prevent the files from being archived or deleted. You can also select the method for notification based on the type of file management task being performed. And the extensible nature of the Windows File Classification Infrastructure allows you to integrate with existing messaging systems or information portals. Improved Development of File Management Tasks You can extend the file management features of the Windows File Classification Infrastructure by creating your own custom file management solution or purchasing a file management solution from an independent software vendor. The architecture of the Windows File Classification Infrastructure allows the use of any supported development environments for Windows Server 2008 R2, including Windows PowerShell and VBScript. Page 62 This architecture allows you to select the level of programming sophistication required to automate your file management tasks. For example, you could write Windows PowerShell scripts to manage files based on the classifications you define for your organization. Improvements in Backup and Recovery. Improvements in Windows Server Backup Windows Server 2008 R2 includes a new version of the Windows Server Backup utility. This new version of Windows Server Backup allows you to: Backup specific files and folders. In Windows Server 2008 RTM you had to backup and entire volume. In Windows Server 2008 R2, you can include or exclude folders or individual files. You can also exclude files based on the file types. Perform incremental backup of system state. Previously, you could only perform a full backup of the system state by using the wbadmin.exe utility. Now you can perform incremental backups of the system state by using Windows Server Backup utility, the wbadmin.exe utility, or from a Windows PowerShell cmdlet. Perform scheduled backups to volumes. You can perform a scheduled backup to existing volumes in Windows Server 2008 R2. In Windows Server 2008, you had to dedicate an entire physical disk to the backup (the target physical disk was partitioned and a new volume was created previously). Perform scheduled backups to network shared folders. You can now perform scheduled backups to a network shared folder, which was not possible in the previous version. Manage backups by using Windows PowerShell. You can manage backup and restore tasks by using Windows PowerShell (including all PowerShell remoting scenarios). This includes the management of on-demand and scheduled backups. Page 63 Improvements in Full Volume Recovery. Comparison of LUN Resynchronization and Traditional Volume Shadow Copy Service Window Server 2008 R2 LUN resynchronization support is an extension of the features provided by the Volume Shadow Copy Service in Windows Server 2008 R2. LUN resynchronization uses the same application programming interfaces (APIs) that are used by the Volume Shadow Copy Service. The following table lists the differences between LUN resynchronization and current features in Volume Shadow Copy Service. Table 3: Comparison of LUN Resynchronization and Traditional Volume Shadow Copy Service LUN Resynchronization Traditional Volume Shadow Copy Service Recovers entire LUN (which may contain multiple volumes). Recovers only a volume. Performed by storage array hardware. Performed by server computer. Typically takes less time than restoring by using traditional Volume Shadow Copy Service. Typically takes more time than restoring by using LUN resynchronization. Comparison of LUN Resynchronization and LUN Swap LUN Swap is a fast volume recovery scenario that has supported since Windows Server 2003 Service Pack 1. In LUN swap, a shadow copy version of a LUN is exchanged with the active The following table lists the differences between LUN resynchronization and LUN Swap. Page 64 Table 4: Comparison of LUN Resynchronization and LUN Swap LUN Resynchronization LUN Swap Source (shadow copy) LUN remains unmodified after the resynchronization completes. Source (shadow copy) LUN becomes the active LUN and is modified. Destination LUN contains the same information as the source LUN, but also any information written during the resynchronization. Contains only the information on the source LUN. Source LUN can be used for recovery again. Must create another shadow copy to perform recovery. Requires the destination LUN exists and is usable. Destination LUN does not have to exist or can be unusable. Source LUN can exist on slower, less expensive storage. Source LUN must have the same performance as the production LUN. Benefits of Performing Full Volume Recovery Using LUN Resynchronization The benefits of LUN resynchronization include the following: Perform recovery of volumes with minimal disruption of service. After the recovery of a volume using LUN resynchronization is initiated, users can continue to access data on the volume while the synchronization is being performed. Although there may be a reduction in performance, users and applications are still able to access their data. Reduce the workload while recovering volumes. Because the hardware storage array is performing the resynchronization, the server hardware resources are only minimally affected. This allows the server to continue processing other workloads with the same performance while the LUN resynchronization process is completing. Integration with existing volume recovery methods. The APIs used to perform LUN resynchronization are the same APIs that are used to perform traditional Volume Shadow Copy Service recovery. This helps ensure that you can the same tools and processes that you are currently using for traditional Volume Shadow Copy Service recovery. Page 65 Compatibility with future improvements. Because LUN resynchronization uses published, supported APIs in Windows Server 2008 R2, future versions of Windows Server will also provide support for LUN resynchronization. Process for Performing Full Volume Recovery Using LUN Resynchronization Before you can perform a full volume recovery using LUN synchronization, you need to have a hardware shadow copy (snapshot) of the LUN. You can make full or differential shadow copies of the LUN. The following is the sequence of events when performing a full volume restore using LUN synchronization: 1. The source and destination LUNs are identified. 2. The LUN resynchronization is initiated between the source (shadow copy) and destination LUNs. 3. During the LUN resynchronization users are able to access the volume being accessed by the following methods: For read operations, volume requests are directed to the source LUN. For write operations, volume requests are directed to the destination LUN. 4. The LUN resynchronization continues by performing a block-level copy from the source (shadow copy) LUN to the destination LUN. 5. The LUN resynchronization completes and all user requests are now performed from the destination LUN. Note: At the end of the LUN resynchronization process, the source LUN is unmodified and the destination LUN contains the same information as the source LUN plus any data that was written to the destination LUN during the LUN resynchronization process. You can find more information about how these steps are performed by viewing the Volume Shadow Copy Service APIs on MSDN® and on the Windows Software Development Kit (SDK) for Windows 7 and Windows Server 2008 R2. Improvements in Data Protection Manager Integration Service Pack 1 for Microsoft System Center Data Protection Manager 2007 provides continuous data protection for Windows application and file servers using seamlessly integrated disk and tape media and includes the following expanded capabilities: Protection of files, configuration, and other information stored on Windows Server 2008 R2. Page 66 Protection of Hyper-V virtualization platforms, including both Windows Server 2008 R2 Hyper-V and the Microsoft Hyper-V Server, has been added to the existing set of protected workloads. Improved Security for DNS Services One common issue with DNS name resolution is that clients can‘t tell the difference between legitimate and illegitimate DNS information and are this vulnerable to spoofing and Man in the Middle attacks. The DNS Security Extensions (DNSSEC) feature in Windows Server 2008 R2 and Windows 7 allows the DNS servers to verify authenticity of a DNS record obtained from a signed zone, and allows clients to establish a trust relationship with the DNS server. The DNS records in a protected DNS zone include a set of public keys that are sent as DNS resource records from the DNS server services on Windows Server 2008 R2 and Windows 7. Through the use of pre-configured Trust Anchors, the DNS server can obtain the public keys of the key pair used to sign the zone and validate the authenticity of the data obtained from the zone. This method prevents interception of DNS queries and returning of illegitimate DNS responses from an untrusted DNS server. Better Together with Windows 7 Windows Server 2008 R2 has many features that are designed to specifically work with client computers running Windows 7, the next version of client operating systems from Microsoft. The features that are only available with running Windows 7 client computers with server computers running Windows Server 2008 R2 include: Simplified remote connectivity for corporate computers by using the DirectAccess feature. Secured remote connectivity for private and public computers by using a combination of the Remote Workspace, Presentation Virtualization, and Remote Desktop Services Gateway features. Improved performance for branch offices by using the Branch Caching feature. Improved security for branch offices by using the read-only DFS feature. More efficient power management by using the new power management Group Policy settings for Windows 7 clients. Page 67 Improved virtualized presentation integration by using the new desktop and application feeds feature. Higher fault tolerance for connectivity between sites by using the Agile VPN feature. Increased protection for removable drives by using the BitLocker™ Drive Encryption (BitLocker) feature to encrypt removable drives. Improved prevention of data loss for mobile users by using the Offline Folders feature. Simplified Remote Connectivity for Corporate Computers One of the common problems facing most organizations is remote connectivity for their mobile users. One of the most common solutions for remote connectivity is for mobile users to connect by using a VPN connection. Depending on the type of VPN, users may install VPN client software on their mobile computer and then establish the VPN connection over public Internet connections. The DirectAccess feature allows Windows 7 client computer to directly connect to intranetbased resources without the complexity of establishing a VPN connection. The remote connection to the intranet is transparently established for the user. From the user‘s perspective, they are unaware they are remotely connecting to intranet resources. Overview of DirectAccess DirectAccess clients use IPv6 to communicate with the enterprise network. DirectAccess provides IPv6 addresses and connectivity to DirectAccess clients over existing IPv4 networks by using IPv4 to IPv6 transition technologies. Some of these technologies includes Teredo, 6to4, IP-HTTPS and ISATAP. Native IPv6 connectivity is also supported if the client is assigned a native IPv6 address. The following figure illustrates an overview of a typical DirectAccess solution. Page 68 Figure 16: Overview of a typical DirectAccess solution The components in a DirectAccess solution are listed in the following table. Page 69 Table 5: Components in a DirectAccess Solution Component Description DirectAccess Client This is a computer running Windows 7 that connects remotely to your intranet-based resources. DirectAccess Server This is a computer running Windows Server 2008 R2 that provides DirectAccess edge services for your organization. In addition to running DirectAccess services, this computer could also run IPv6 transition technologies as well for some deployment models. IPv6 IPv6 is an Internet Protocol designed to solve many of the problems of the current version of IP (known as IPv4) such as address depletion, autoconfiguration, and extensibility. For more information on IPv6, see Internet Protocol Security Internet Protocol security (IPsec) is a framework of open standards for ensuring private, secure communications over IP networks through the use of cryptographic security services. The Internet Engineering Task Force (IETF) IPsec working group defines the IPsec standards. DirectAccess uses IPsec transport mode to secure IP traffic between the DirectAcess client and your network resources by using the authentication and encryption features in IPsec. For more information on IPsec, see Teredo Teredo is an IPv6 transition technology that provides IPv6 connectivity to hosts behind a network address translation (NAT) device. For more information on Teredo, see 6to4 6to4 is an IPv6 transition technology that provides IPv6 connectivity to hosts that have a public IPv4 address. For more information on 6to4, see ISATAP ISATAP is an address assignment and automatic tunneling technology that is used to provide IPv6 connectivity between IPv6/IPv4 hosts across an IPv4 intranet. In DirectAccess, ISATAP is used to allow enterprise resources to use and route IPv6 without requiring infrastructure upgrades. For more information on ISATAP, see IP-HTTPS IP-HTTPS is a new protocol for Windows 7 that allows hosts behind a proxy or firewall to establish connectivity by tunneling IP data inside of an HTTPS tunnel. HTTPS is used instead of HTTP so that proxy servers Page 70 will not attempt to look inside of the data stream and terminate the connection if traffic looks anomalous. HTTPS is not providing security in any way; security is provided by IPsec. Since the data is double-encrypted by default (IPsec and HTTPS), IPHTTPS may not be as performant as other protocols. Additional IP-HTTPS servers can be added and load-balanced if performance is problematic. Microsoft is looking at ways to improve the performance of this protocol in the future. Name Resolution Policy Table Windows 7 introduces a new feature called Name Resolution Policy Table (NRPT) is a new feature in Windows 7 that performs the following functions: Clients can query different DNS servers for different DNS namespaces Optionally, DNS queries for specific namespaces can be secured using IPsec (and other actions can be specified, as well) The NRPT stores a list of namespaces and configuration settings that define the DNS client‘s behavior specific to that namespace. Name resolution requests are matched against the namespaces stored in the NRPT and are processed according to the configuration specified. In DirectAccess, when a name resolution request matches a namespace listed in the NRPT, the NRPT settings determine whether that query will be encrypted (to protect from packet sniffing and other man-in-themiddle attacks) and which DNS servers to send that query to. DirectAccess Connectivity Models DirectAccess supports a number of models for connecting remote users to your intranetbased resources. These models include: Full Intranet Access Selected Server Access End-to-end Access Full Intranet Access The Full Intranet Access model, as illustrated in the following figure, allows DirectAccess clients to connect into all resources inside your intranet. This model provides IPsec-based Page 71 end-to-edge authentication and encryption which terminate at the IPsec gateway or DirectAccess™ server. Figure 17: Full Intranet Access model This model does not require application servers that are running Windows Server 2008 or IPsec-authenticated traffic in the enterprise network. This model most closely resembles current VPN architecture. This model is typically easier to deploy in the short term, but usually needs re-architecting long term. The following table lists the benefits and limitations of the Full Intranet Access Model. Table 6: Benefits and Limitations of the Full Intranet Access Model Benefits Limitations Architecture similar to current VPN deployments. Cannot secure resources based on end-toend policies. Does not require IPsec traffic in the enterprise network. Might place extra load on DirectAccess Server, which can be mitigated by IPsec offload network adapters. Works with any IPv6 capable application servers. Selected Server Access The Selected Server Access model, as illustrated in the following figure, allows remote DirectAccess clients to access selected internal resources only. By leveraging IPsec, the communication between the remote client and the DirectAccess Server can be encrypted, and communication between the client and the application server can be authenticated. This allows you to define policies that restrict certain users or computers from accessing Page 72 particular application servers, or even specifying certain applications that won‘t be able to access intranet resources while accessing the resources remotely. Figure 18: Selected Server Access model The following table lists the benefits and limitations of the Selected Server Access model. Table 7: Benefits and Limitations of the Selected Server Access Model Benefits Limitations Fine grain control over which resources are available. Application servers must be running Windows Server 2008 or later. You can quickly realize the benefits of simplified edge policies and secure resources based on end-to-end policies. You must be familiar with IPsec and prepared to allow this traffic inside the network. End-to-end Access The End-to-end Access model, as illustrated in the following figure, allows remote DirectAccess clients to access directly any intranet-based resources. The connections between the DirectAccess client, the DirectAccess Server, and the intranet-based resources are authenticated and encrypted by using IPsec, This allows you to define policies that restrict certain users or computers from accessing particular application servers, or even specifying certain applications that won‘t be able to access intranet resources while accessing the resources remotely. Note: This model requires all intranet-based resources to support IPv6 and IPsec. Page 73 Figure 19: End to End Access Model The following table lists the benefits and limitations of the End to End Access Model. Table 8: Benefits and Limitations of the End to End Access Model Benefits Limitations Provides end-to-end encryption of data between DirectAccess Client and intranetbased resources. Requires IPv6 on all intranet-based resources. No IPv6 translations services are required, which reduces the workload on DirectAccess Server(s). Requires IPsec on all intranet-based resources DirectAccess Requirements Depending on the DirectAccess model selected, the requirements for deploying DirectAccess may vary. The following tables list the DirectAccess network, infrastructure, software, and hardware requirements. Table 9: DirectAccess Network Requirements Requirement Description IPv6 addressing DirectAccess uses the IPv6 protocol to provide end-to-end connectivity between client computers and enterprise resources. This means that DirectAccess clients will have access only to those servers in your intranet that have a reachable IPv6 address. Those servers can obtain IPv6 connectivity from native IPv6 or an IPv6 Page 74 transition technology. Although IPv6 is a requirement for DirectAccess, IPv6 does not have to be enabled on network infrastructure (such as routers), only on the client and server operating systems. Note: A DirectAccess client can still access an Internet resource using the IPv4 protocol. IPv6 is only required when the DirectAccess client connects to your intranet resources. IPv6 blocking IPv6 and IPv4 protocol 41 (which is used by ISATAP and 6to4 transition technologies) must be allowed to pass through your outward facing firewalls. Internet Protocol Security DirectAccess uses IPsec to provide mutual authentication and encryption between the DirectAccess Client, the DirectAccess Server, and intranet-based resources (depending on the access model). Note: Only Windows Server 2008 and later server operating systems support the termination of IPsec connections over IPv6. For more information, see. Teredo blocking Teredo, which uses IPv4 UDP port 3544, must be allowed to pass through your outward facing firewalls. ICMPv6 In order for IPv6 to work properly, ICMPv6 must be allowed to pass through your outward facing firewalls. NAT-PT devices Network Address Translation – Protocol Translation (NAT-PT) devices can be deployed to provide DirectAccess clients access to you intranet resources that only support IPv4. NAT-PT is generally configured to provide coverage for a particular DNS namespace, and once implemented, will make the necessary translations allowing DirectAccess clients to access any IPv4 resources located within that namespace. ISATAP The ISATAP protocol allows direct client-to-client and client-toserver IPv6 connectivity over an IPv4 infrastructure. When DirectAccess is installed, the ISATAP server registers its name in DNS. In addition, after DirectAccess is installed all Windows-based hosts running Windows Vista® or Windows Server 2008 or later automatically obtain an ISATAP/IPv6 address from the ISATAP Page 75 server. Since IPv6 addresses are preferred over IPv4, this means that when DirectAccess is installed, all Windows Vista, Windows Server 2008, and later operating systems in your domain will to communicate with each other using IPv6. This may have an impact on monitoring and firewall configurations. Table 10: DirectAccess Infrastructure Requirements Requirement Description Active Directory® Domain Services (AD DS) At least one Active Directory domain is required. Workgroup-based networks and computers are not supported. At least one domain controller in the domain containing user accounts must be running Windows Server 2008 R2. Group Policy Group Policy can be used to deploy DirectAccess client policies and is strongly recommended. Public Key Infrastructure A Public Key Infrastructure (PKI) is required to issue the certificates that are required by DirectAcess and IPsec. However, external certificates (or public certificates) are not required. For more information about deploying a PKI, see. IPsec policies DirectAccess uses IPsec policies, so the appropriate infrastructure must exist to manage IPsec policies. For more information, see. IPv6 transition technologies ISATAP, Teredo, 6to4, and IPv6 must be available for use on the DirectAccess server. DNS and ISATAP DirectAccess clients query DNS for the name ‗isatap‘ to locate ISATAP routers. DirectAccess clients also query DNS using the ISATAP protocol. In order to facilitate these requests, all DNS servers must be able to resolve the ISATAP name (‗isatap‘) and at least some DNS servers must be listening on the ISATAP interface. You can enable these capabilities by: Page 76 Ensuring some DNS servers run Windows Server 2008 SP2 or Windows Server 2008 R2 Unblocking ISATAP name resolution on all DNS servers Table 11: DirectAccess Software Requirements Requirement Description DirectAccess Server DirectAccess Server is an optional component of Windows Server 2008 R2 that manages DirectAccess connections, The DirectAccess Server may either terminate or pass IPsec connections. DirectAccess Client DirectAccess Client is an optional component of Windows 7 that allows remote users to connect to DirectAccess Servers. Note: Computers running Windows Vista or earlier operating system versions do not support DirectAccess Table 12: DirectAccess Hardware Requirements Requirement Description DirectAccess Server The hardware requirements for DirectAccess Server are the same as those for Windows Server 2008 R2. However, all DirectAccess servers must have at least two physical network adapters installed. DirectAccess Client The hardware requirements for DirectAccess Server are the same as those for Windows 7. DirectAccess Firewall Placement and Rules Because DirectAccess allows Internet-based clients access to intranet-based resources, placement of firewalls and configuration of firewall rules is important. The following figure illustrates the placement of DirectAccess components in relationship to a typical firewall configuration. Note: The following figure does not represent a design requirement, but rather recommended best practices. Depending on your firewall configuration, placement of DirectAccess components may differ. Page 77 Figure 20: Recommended placement of firewalls for DirectAccess solution The following table lists the recommended DirectAccess firewall rules for the DirectAccess solution illustrated in the previous figure. If the firewall configuration for your organization is different, then adjust the firewall rules accordingly. Page 78 Table 13: Recommended DirectAccess Firewall Rules Firewall Port or Protocol Direction Outer IPv6 Inbound and outbound Outer Encapsulating Security Payload (ESP) on (IP protocol 50) Inbound and outbound Outer Teredo (UDP port 3544) Inbound Outer ISATAP (IP protocol 41) Inbound and outbound Outer Secure HTTP (TCP port 443) Inbound Inner Internet Key Exchange (UDP port 500) Inbound and outbound Inner ESP (IP protocol 50) Inbound and outbound DirectAccess Simultaneous Internet and Intranet Access By default, remote DirectAccess Clients are able to simultaneously access the Internet, your organization‘s intranet, and the local IP subnet. DirectAccess Clients are configured to send all DNS name resolution requests for intranet-based resources to DNS servers in the intranet. DirectAccess Clients send all other DNS name resolution requests to the ISP‘s DNS server(s). This feature is known as split tunneling. You can disable split tunneling through by using Group Policy at Computer Configuration \ Administrative Templates \ Network \ Network connections \ Default value: disabled. You can also use Group Policy to configure Windows Firewall for advanced configuration options such as per-application control of split tunneling. This allows you to configure which applications are allowed to access the intranet-based resources while accessing the intranet remotely. When split tunneling is disabled, all traffic from the DirectAccess Client will be routed to the enterprise network over an IP-HTTPS tunnel. DirectAccess Clients who have had split tunneling disabled are able to access any resources on their local link (such as network printers) but any network traffic that must cross a network router will be forwarded to the DirectAccess Server. Page 79 The IP-HTTPS protocol is always used when split tunneling has been disabled. To reduce load on the DirectAccess Server, packets which are destined for your intranet are encrypted, while packets that are destined outside your intranet are unencrypted. DirectAccess Optional Security Components As an additional level of security protection, you may want to deploy: NAP IPsec enforcement. This prevents unhealthy computers from being able to establish an IPsec connection. NAP IPsec enforcement provides the strongest and most flexible method for maintaining client computer compliance with network health requirements. For more information on NAP IPsec enforcement, see ―Understanding NAP IPsec Enforcement‖ at. Server and domain isolation. Isolates your domain and server resources by limiting. For more information on server and domain isolation, see ―Server and Domain Isolation‖ at. Smartcard enforcement. You can user smartcard authentication to provide the following enforcement: User enforcement. Always require smartcard authentication, regardless of which computer the user logs on to or if the user is connecting locally or remotely, always require Smartcard for login. This feature is enabled by configuring the Smart card required for interactive logon option for each user. Machine enforcement. Always require smartcard authentication, regardless who logs onto the computer or if the computer is connecting locally or remotely. This feature is enabled by configuring the Machine Settings | Local Policies | Security Options |Interactive Logon: Require Smart Card Group Policy. Gateway enforcement. The IPsec gateway requires smartcard authentication before allowing connectivity. This option may be combined with user or machine enforcement to provide a second layer of checking that the user has logged on with a smart card. Alternatively, this option can be used without option user or machine enforcement, which means that users are be able to log onto their computer and access the Internet without a smartcard (assuming split tunneling is not disabled) but would need to insert a smart card to access any intranet-based resources. Page 80 DirectAccess Deployment Scenarios You can deploy DirectAccess solutions to support any number of simultaneous DirectAccess clients. In addition, you can deploy DirectAccess solutions that provide higher-availability and fault tolerance to help avoid any outages. You can improve the scaling and fault tolerance of your DirectAcess clients by using one of the following deployment scenario: Single server Multiple servers with multiple roles Multiple servers with identical roles Use these deployment scenarios as templates for creating your own DirectAccess solution. These deployment scenarios represent best practice recommendations that can be applied to your organization. Single Server In the Single Server deployment scenario, as illustrated in the following figure, all DirectAccess server-side components are running on one computer. Figure 21: Single Server deployment scenario The following table lists the benefits and limitations of the Single Server deployment scenario. Table 14: Benefits and Limitations of the Single Server Deployment Scenario Benefits Limitations Relatively simple deployment scenario, which requires a single computer running Susceptible to a single point of failure. Page 81 DirectAccess Server. Server performance bottlenecks can limit the maximum number of concurrent connections. Multiple Servers with Multiple Roles In the Multiple Servers with Multiple Roles deployment scenario, as illustrated in the following figure, the DirectAccess server-side components are running on more than one computer. This scenario provides improvements in scaling, but does not provide additional fault tolerance or help prevent single point of failure for DirectAccess server-side components. Figure 22: Multiple Servers with Multiple Roles deployment scenario The following table lists the benefits and limitations of the Multiple Servers with Multiple Roles deployment scenario. Table 15: Benefits and Limitations of the Multiple Servers with Multiple Roles Deployment Scenario Benefits Limitations Improves scalability to support larger number of concurrent connections. Susceptible to a single point of failure for each component. Page 82 Requires additional hardware. Requires routing reconfiguration. Multiple Servers with Identical Roles In the Multiple Servers with Identical Roles deployment scenario, as illustrated in the following figure, all DirectAccess server-side components are running on multiple computers. This scenario provides improvements in scaling and fault tolerance. Unlike the other deployment scenarios, this scenario helps eliminate single point of failure for DirectAccess server-side components. Figure 23: Multiple Servers with Identical Roles deployment scenario The following table lists the benefits and limitations of the Multiple Servers with Identical Roles deployment scenario. Table 16: Benefits and Limitations of the Multiple Servers with Identical Roles Deployment Scenario Benefits Limitations Improves scalability to support larger number of concurrent connections. Requires additional hardware. Improves fault-tolerances to help eliminate single point of failure. Requires routing reconfiguration. Page 83 DirectAccess and Failover Clustering You can use Failover Clustering in Windows Server 2008 to improve the availability of DirectAccess Servers. You can use Failover Clustering in conjunction with or in place of the inherent fault tolerance in DirectAccess, such as provided by the Multiple Servers with Identical Roles deployment scenario. The following DirectAccess Server components can be run as workloads in a Failover Cluster: • 6to4 Servers • IPsec DoSP Server • IPsec Gateway For more information on creating a failover cluster, see Sequence for Establishing a DirectAccess Connection The followings steps describe the sequence for establishing a DirectAccess connection between a DirectAccess client running Windows 7, the DirectAccess server, and resources on an intranet: 1. Deploy Windows 7 and DirectAccess Client connectivity policies. 2. Determine connectivity requirements between DirectAccess Client and application and resources in the intranet. 3. Establish the required connections to the DirectAccess Servers. 4. Validate the connection between the DirectAccess Client and the DirectAccess Servers. 5. Forward traffic to intranet resources. Step 1: Deploy Windows 7 and DirectAccess Client Policy Windows 7 needs to be deployed on the mobile computer. In addition, the DirectAccess Client policies need to be deployed. The DirectAccess Client policies can be deployed as a part of the Windows 7 image or in a subsequent deployment. The policies allow you to allow grant access to specific applications or resources to specific user while preventing access to other users. The policies control: The connectivity for an application, resource, or namespace thorough DirectAccess Servers. A schedule that limits the periods of time when remote connectivity is allowed or denied in the policy. Page 84 In addition, the DirectAccess Client needs to do name resolution for the DirectAccess Servers specified in the policy and the resources within your intranet, typically performed by DNS. Step 2: Determine Connectivity Requirements Between Client and Intranet The DirectAccess Client can transparently initiate the network connection between the client and the resources and applications in your organizations intranet. If an application references a computer name within the intranet, the DirectAccess client determines if the server computer must be accessed with, or without, a tunneled connection. After the DirectAccess Client determines the type of connection required, the client establishes the connection directly, through a tunnel, or both as required to access the resource. Step 3: Establish Required Connections The DirectAccess client connects to the DirectAccess Servers based on policy and the current connectivity available. The connection to the DirectAccess Servers is used to connect to your intranet services and resources, including DNS services, Active Directory services, and application-related resources. Step 4: Validate Connection The DirectAccess Server validates all incoming connections by using IPsec authentication in the ―Seamless VPN‖ deployment scenario. After the connection is validated, the appropriate IP addresses are assigned to the DirectAccess Client. The DirectAccess Server is configured to filter out all traffic except Internet Key Exchange (IKE) and Encapsulating Security Payload (ESP) packets. Step 5: Forward Traffic to Intranet After the DirectAccess Client connection is validated, the DirectAccess Server creates a connection between the DirectAccess Client and resources on the intranet. If the address of the resource is published as an address provided by IPv6 Transition services, then IPv6 Transition is required. If your organization has deployed a dual-stack IPv6, then no IPv6 to IPv4 translation is required. Otherwise, traffic between the DirectAccess Client and your intranet resources need to be translated by ISATAP or 6to4. 6to4 allows IPv6 packets to be transmitted over an IPv4 network without the need to configure explicit tunnels. 6to4 does not facilitate interoperation between IPv4-only hosts and IPv6-only hosts, but tunnels IPv6 packets through an IPv4 network, such as the Internet. Page 85 Secured Remote Connectivity for Private and Public Computers Another common problem for remote users is the ability to access intranet-based resources from computers that are not owned by their organization, such as public computers or Internet kiosks. Without a mobile computer provided by their organization, most users are unable to access intranet-based resources. A combination of the Remote Workspace, presentation virtualization, and Remote Desktop Services Gateway features allows users on Windows 7 clients to remotely access the intranetbased resources without requiring any additional software to be installed on the Windows 7 client. This allows the users to remotely access their desktop as though they were working from their computer on the intranet, as illustrated in the following figure. Page 86 Figure 24: Remote user connected to an intranet by using Remote Workspace, presentation virtualization, and Remote Desktop Services Gateway From the user‘s perspective, the desktop on the remote Windows 7 client transforms the look of the user‘s desktop on the intranet, including icons, Start menu items, and installed applications are identical to the user experience on their computer on the intranet. When the remote user closes the remote session, the remote Windows 7 client desktop environment reverts to the previous configuration. Page 87 Improved Performance for Branch Offices One of the largest problems facing branch offices is how to improve the performance of accessing intranet resources in other locations, such as the headquarters or regional data centers. Typically branch offices are connected by wide area networks (WANs) which usually have slower data rates than your intranet. Reducing the network utilization on the WAN network segments provides available network bandwidth for applications and services. The BranchCache feature in Windows Server 2008 R2 and Windows 7 reduces the network utilization on WAN segments that connect branch offices by locally caching frequently used files on computers in the branch office. The type of content that is cached is content returned by Server Message Block (SMB) requests and HTTP requests. The following figure contrasts branch office network utilization with and without the BranchCache feature. Figure 25: The branch office problem BranchCache Modes BranchCache supports the following operational modes: Distributed Mode Hosted caching Page 88 Distributed Mode In distributed mode, content is cached on the branch on client computers running Windows 7. The disadvantage to this solution is that content is cached on client computers, so if the computer containing the cached content is unavailable, the content must be retrieved over the WAN connection, as illustrated in the following figure. The following sequence reflects how the distributed computer downloads the content from the first machine. If a client computer cannot locate a piece of content on the local network, it will return to the server and request a full download. Hosted Caching Mode In the hosted caching mode, content is cached on the branch on client computers running Windows Server 2008 R2. The advantage to this mode is that the server is always available, so the cached content is always available. The unavailability of any client computer running Windows 7 does not affect the availability of the content cache, as illustrated in the following figure. The following sequence reflects how the hosted caching client downloads the content from the first computer. If a client computer cannot locate a piece of content on the local network, the client computer will return to the server and request a full download Page 89 BranchCache Management BranchCache behavior can be configured by using Group Policy. Windows Server 2008 R2 includes a Group Policy administrative template that can be used to administer the BranchCache configuration settings. You can also manage BranchCache by using the NetSH command. For more information configuring BranchCache by using the NetSH command, see ―NetSH Command Index‖ in Windows Branch Cache Deployment Guide. Improved Security for Branch Offices Windows Server 2008 RTM introduced the Read-only Domain Controller feature, which allows a read-only copy of AD DS to be placed in less secured environments, like branch offices. Windows Server 2008 R2 introduces support for read-only copies of information stored in Distributed File System (DFS), as illustrated in the following figure. Figure 26: Read-only DFS in a branch office scenario Page 90 Read-only DFS helps protect your digital assets by allowing branch offices read-only access to information that you replicate to them by using DFS. Because the information is read-only, users are unable to modify the content stored in read-only DFS replicated content and no content changes are replicated to other DFS replica copies in other locations. Improved Efficiency for Power Management Windows 7 includes a number of power management features that allow you to control power utilization in your organization with a finer degree of granularity than in previous operating systems. Windows 7 allows you to take advantages of the latest hardware developments for reducing power consumption in desktop and laptop computers. Windows Server 2008 R2 includes a number of Group Policy settings that allow you to centrally manage the power consumption of computers running Windows 7. Virtualized Desktop Integration Windows 7 introduces the desktop and applications feed feature, which helps integrate desktops and applications virtualized by using Remote Desktop Services with the Windows 7 user interface. This integration makes the user experience for running virtualized applications or desktops the same as running the applications locally. Windows 7 can be configured to subscribe to desktop and application feeds provided by Remote Desktops and RemoteApp programs. These feeds are presented to Windows 7 users using the new RemoteApp and Desktop Connection control panel applet. The RemoteApp and Desktop Web Access control panel applet provides the ability to connect to resources from Windows Vista and Windows XP in addition to Windows 7. The desktop and applications feeds feature includes the following capabilities: Users can subscribe to RemoteApp programs and Remote Desktops by using the RemoteApp and Desktop Connections control panel applet. User experience is seamlessly integrated with Windows 7 because: The RemoteApp programs desktops are added to the Start Menu. A new System Tray icon shows connectivity status to all of the connections to feeds. The administration for RemoteApp, Remote Desktop, and RemoteApp and Desktop Web Access is performed through a unified infrastructure. RemoteApp and Desktop Web Access provide access to RemoteApp and Remote Desktops to previous Windows operating systems by using a Web-based interface. Page 91 Provides supports for managed computers (member computers in an Active Directory domain) and unmanaged computers (standalone computers). User interface always reflects applications and desktops in the Start Menu and in the web-based interface as they are added by the administrator. Access to all desktops and applications requires a single sign-on. Higher Fault Tolerance for Connectivity Between Sites and Locations One of the most common scenarios facing organizations today is connectivity between sites and locations. Many organizations connect their sites and locations by using VPN tunnels over public networks, such as the Internet. One of the problems with existing VPN solutions is they are not resilient to connection failures or device outages. When any outage occurs, the VPN tunnel is terminated and the VPN tunnel must be re-established, resulting in momentary outages in connectivity. The Agile VPN feature in Windows Server 2008 R2 allows a VPN to have multiple network paths between points in the VPN tunnel. In the event of a failure, Agile VPN automatically uses another network path to maintain the existing VPN tunnel, without interruption of connectivity. Protection for Removable Drives In Windows Server 2008 and prior operating systems, BitLocker Drive Encryption (BitLocker) was primarily used to protect the operating system volume. Information stored on other volumes, including removable media, was encrypted by using Encrypted File System (EFS). In Windows 7, you can use BitLocker to encrypt removable drives, such as eSATA hard disks, USB hard disks, USB thumb drivers, or compact flash drives. This allows you to protect information stored on removable media with the same level of protection as the operating system volume. BitLocker requires the use of a Trusted Platform Module (TPM) device or physical key to access information encrypted by BitLocker. You can also require a personal identification number (PIN) in addition to the TPM device or physical key. The keys for BitLocker can also be archived in AD DS, which provide an extra level of protection in the event the physical key is lost or the TPM device fails. This integration between Windows 7 and Windows Server 2008 R2 allows you to protect sensitive information without worrying about users losing their physical key. Page 92 Prevention of Data Loss for Mobile Users The Offline Files feature allows you to designate files and folders stored on network shared folders for use even when the network shared folders are unavailable (offline). For example, a mobile user disconnects a laptop computer from your intranet and works from a remote location. The Offline Files feature has the following operation modes: Online mode. The user is working in online mode when they are connected to the server and most file requests are sent to the server. Offline mode. The user is working in offline mode when they are not connected to the server and all file requests are satisfied from the Offline Files cache stored locally on the computer. In Windows Server 2008 RTM and Windows Vista, the Offline Files feature was configured for online mode by default. In Windows Server 2008 R2 and Windows 7, the Offline Files feature supports transitioning to offline mode when on a slow network by default. This helps reduce the network traffic while connected to your intranet because the users are modifying locally cached copies of the information stored in the Offline Files local cache. However, the information stored in the Offline Files local cache is still protected from loss because the information is synchronized with the network shared folder. Summary Microsoft Windows Server® 2008 R2 gives IT Professionals more control over their server and network infrastructure, and provides an enterprise-class foundation for business workloads. Microsoft enables organizations to deliver rich Web-based experiences efficiently and effectively, by reducing the amount of effort required to administer and support your Web-based applications. The powerful Virtualization technologies in Windows Server 2008 R2 enable you to increase your server consolidation ratios, while reducing the amount of administrative effort required for managing the infrastructure. Through increased automation and improved remote administration, Windows Server 2008 R2 helps organizations save money and time, by reducing travel expenses, decreasing energy consumption, and automating repetitive IT tasks. When combined with Windows 7 client operating system, the Virtual Desktop Infrastructure in Windows Server 2008 enables you to provide your employees with anywhere access to corporate data and resources, while helping to maintain the security of your enterprise systems. Page 93
* Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project | https://manualzz.com/doc/31029838/windows-server-2008-r2-sp1-technical-overview | CC-MAIN-2019-43 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.