text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
I like Arduino devices, but I don’t quite like Arduino IDE. Among all the reasons, one is its
printf() and
sprintf() implementation on floating point support.
In Arduino programming, you often see code like this:
Serial.print("Temperuature = "); Serial.print(temp); Serial.print("c, Humidity = "); Serial.print(humidity); Serial.println("%");
The code is ugly, repeatitive, and feel like written by someone who is learning programming, but you see this kind of code even in the examples that come with popular and well-written libraries. All those 5 lines of code do is to print one line of text like this to Serial Monitor:
Temperature = 32.6c, Humidity = 80.4%
If you are coming from the background of Python programming, you probably know the Zen of Python, it emphasis on ‘Beautiful is better than ugly’ and ‘Readability counts’. I personally think it should apply to all programming languages because after all ‘code is read more often than it is written’.
Beautiful is better than ugly.
Readability counts.
Code is read more often than it is written.
printf() function
If you have experience in C or C++ programming, you probably know the C library function
printf(), so why not using
printf()?
printf("Temperature = %.1fc, Humidity = %.1f%%\n", temp, humidity);
The first argument of the function is a formatter string that contains the text to be written to
stdout. It can optionally contain embedded format tags that are replaced by the values specified in subsequent additional arguments and formatted as requested. The function return an integer that represented the total characters written to the the
stdout if successful, or a -1 is returned if for some reason it is failed to write to
stdout.
The function is available by default in Arduino (
stdio.h is included in
<Arduino.h> automatically), so if you write the code as shown above, it will compiled and run without any problem, but you won’t see any output on the Serial Monitor, this is because Arduino does not use
stdout, all the print (and keyboard input) are done via Serial interface.
sprintf() function
What about
sprintf() which is available in both C and C++?
sprintf() allows you send formatted output to an array of character elements where the resulting string is stored. You can then send the formatted string via
Serial.print() function.
char buffer[50]; sprintf(buffer, "temperature = %.1fc, Humidity = %.1f%%\n", temp, humidity); Serial.print(buffer);
This seems to be a good solution. But if you are rushing to use
sprintf() in your Arduino sketch, not so fast until you see the result from this sketch:
void setup() { Serial.begin(115200); int x = 10; float y = 3.14159; char name[] = "Henry"; char buff[50]; sprintf(buff, "this is an integer %d, and a float %f\n", x, y); Serial.print(buff); sprintf(buff, "Hello %s\n", name); Serial.print(buff); }
If you run the sketch, you will see this result:
this is an integer 10, and a float ? Hello Henry.
This could cost hours of frustration in finding out why the code is not working as it is supposed to be on Arduino. It turns out that Arduino’s
sprintf() does not support floating point formatting.
Why Arduino sprintf() does not support floating point?
So what’s going on? Why the
sprintf() on Arduino behave differently from standard C++ library. To find out why, you have to know where the
sprintf() is coming from. Arduino compiler is based on gcc, or more precisely, avr-gcc for Atmel’s AVR microcontrollers. Together with avr-binutils, and ave-libc form the heart of toolchain for the Atmel AVR microcontrollers. All the printf-like functions in avr-gcc come from
vprintf() function in avr-libc. I found the answer to my question in AVR libc documentation on
vprintf(), it said:
“Since the full implementation of all the mentioned features becomes fairly large, three different flavours of vfprintf() can be selected using linker options. The default vfprintf() implements all the mentioned functionality except floating point conversions… “
it further mentioned that:
“If the full functionality including the floating point conversions is required, the following options should be used:”
-Wl,-u,vfprintf -lprintf_flt -lm
As Arduino compiler options and linker process are hardcoded in its Java code and there is no way for user to select the optional build flags. The problem existed since day 1 of Arduino release and for years people seeking for a solution.
The reason that
Serial.print(float) is able to print the floating point is because Arduino painfully and tediously implemented the
Serial.print() function (the source code can be viewed at ArduinoCore-avr github page, it is almost like a hack) to support the floating point print.
One workaround is to use the
dtostrf() function available in the avr-libc (
dtostrf() is avr-libc specific function, not available in standard gcc:
dtostrf(floatvar, StringLengthIncDecimalPoint, numVarsAfterDecimal, charbuf);
The
dtostrf() converts a float to a string before passing it into the buffer.
float f = 3.14159; char floatString[10]; dtostrf(f,4,2,floatString); sprintf(buffer, "myFloat in string is = %s\n", floatString); Serial.print(buffer);
The
dtostrf() works but it introduce extra line for setup the buffer before printing out the formatted string, it must well just use
Serial.print(). Another workaround looks cleaner but only works for positive number.
sprintf(str, "String value: %d.%02d", (int)f, (int)(f*100)%100);
There is a discussion on Arduino Forum to manually swap out
vprint_std.o (i.e. the standard default version of
vprintf()) from
libc.a and replace it with
vprint_flt.o (the floating point version of
vprintf()). This hack works, it is a little bit terious, but you only need to do it once until you update the Arduino IDE in future, the update process will overide your hack and you will have to re-do it again.
During my search of a solution, I also found PrintEx library on github. The PrintEx offers a clean API and much more features without compromising the performance with reasonable memory footprint, and is the best solution in my opinion.
#include <PrintEx.h> PrintEx myPrint = Serial; void setup(){ Serial.begin(115200); float pi=3.14159; myPrint.printf("pi = %.2f\n", pi); }
Although PrintEx library is almost perfect and solve the problem, but personally I think it is still a workaround. The original problem is trivia to fix, we need all those workarounds, hacking or external library just because the inflexibility of Arduino IDE. So why should we still using Arduino IDE if Arduino is not able to offer a solution for at least the past 8 years?
Fix the sprintf() with PlatformIO
This is when I realised that PlatformIO might be able to solve the problem. I have been using PlatformIO as my Arduino development environment for a couple of years now. PlatformIO is a better IDE than Arduino IDE in terms of project-based library dependency management, configuration and more importantly, allows me to customise my build flags for each project.
If you never use PlatformIO for Arduino programming, the following two videos from Robin Reiter on YouTube provides a quick installation guide for setting up PlatformIO.
PlatformIO put the configuration settings of each project in a file called
platforio.ini, I can add customised
build_flags into my Arduino Nano’s
platformio.ini as:
[env:nanoatmega328] platform = atmelavr board = nanoatmega328 framework = arduino build_flags = -Wl,-u,vfprintf -lprintf_flt -lm
The first three parameters about platform, board and framework were created automatically when I create a project via PlatformIO. The last line about
build_flags is what I need to add manually based the
vprintf information provided by avr-libc. This allows the compiler to replace the default
lprintf_std with
lprintf_flt during the build process so that the compiled code will have floating point support on
vprintf.
I compiled and upload the sketch to my Arduino Nano via PlatformIO, and run the sketch, the correct floating point result shown up on Serial Monitor!
this is an integer 10, and a float 3.14159 Hello Henry
The
vprintf floating point support added about 1500 bytes to the memory usage which might be a big deal when avr libs was developed 20 years ago, but for Arduino adding 1500 bytes extra memory usage out of total 37000+ bytes is not a big deal for many applications, and now I have a fully function
sprintf() instead of the previous half-baked version!
Create a print() function
sprintf() requires some setup and take 2 lines of code to print a formatted string, but it is still better and more readable than the 5 lines that you seen at the beginning of this article. To make it a one liner, I create a function and wrapped all the code with C++ Template Parameter Pack pattern which was introduced since C++11.
template <typename... T> void print(const char *str, T... args) { int len = snprintf(NULL, 0, str, args...); if (len) { char buff[len+1]; snprintf(buff, len+1, str, args...); Serial.print(buff); } }
Now I have a one liner
print() function that is elegant and easy to read:
print("temperature = %.1fc, Humidity = %.1f%%\n", temp, humidity);
What other Arduino-compatible platforms handle floating point
After done all those research and implement my own
ESP8266/ESP32
Both ESP8266 and ESP32 Arduino core implementation added a printf method to its Serial class, so that you can do call
Serial.printf() like the way c
printf() do. What this means is that ESP32 and ESP8266 supports floating point on Arduino out-of-box.
The following code running on ESP32 and ESP8266 will provide the correct floating point results. How nice!
float number = 32.3; char str[30]; void setup() { Serial.begin(115200); // This is ESP32/ESP8266 specific Serial method Serial.printf("floating point = %.2f\n", number); // which is equivalent to this (BTW, this also works) sprintf(str, "floating point = %.3f\n", number); Serial.print(str); } void loop() { }
STM32duino
STM32duino does not supports floating by default, it follows faithfully to the Arduino APIs. However, on the drop down menu of Arduino IDE, you can select the optional c runtime to pull in the printf floating point support during compilation. This is similar to the intention of original AVR implementation and similar to what PlatformIO did, but with an even nicer and user-friendly UI. Why Arduino.cc can’t do that?
Final Word
The avr function
dtostrf() remain a viable workaround because it is part of the Arduino core and implemented across all Arduion-compatible platforms.
The
sprintf() floating point support on Arduino is a well known issue for years, and is actually trivia to fix. Arduino announced the release of alpha version of Arduino Pro IDE in Oct 2019. I have not try the Arduino Pro IDE yet, but I sincerely hope that Arduino.cc learnt something from other platform owners, and gives users more flexibility in project management and build options.
I hope now you will know Arduino a little bit better than before.
10 comments by readers
Thank you for posting this excellent blog article. I learned a huge amount. Many new Arduino developers fall foul of the sprintf/float sinkhole and most developers just write many lines of unsightly code to work around. I agree with you : can do better (and you have!). I used PrintEx but found a bug in PrintEx sprintf (does not write ‘\0’ at end of string sometimes) and I used your elegant C++11 template parameter pack solution to wrap it and fix the problem. I am still using the Arduino and I will give the Pro IDE a go soon…
I found that this works for an esp32 including negative numbers. The following prints out a negative number in two decimals:
Thanks for the tutorial!
For ESP32 (and ESP8266), it supports floating point out of the box. It will be better and simpler to just use
I switched over to PlatformIO – I’ve already used Atom for web pages, anyway. There’s a problem with Beginning Arduino project 10 – “Serial Controlled Mood Lamp”
The Arduino IDE Serial Monitor accumulates your text in a text box until you hit RETURN or click the SEND button, but PlatformIO seems to send characters as soon as they are available. So if you type slowly, or pause, the Arduino board gets only a fraction of your intended command.
Suggestions on how to make the Serial Monitor wait until a complete line has been entered?
It seems that for some reason your device monitor setting for
send_on_enteris enabled, take a look at on how to change it.
Nice effort on Tutorial, just pity you have missed-out the obvious, and a solution…
Due to the tedious work necessary to print More complex data, it is natural need for users to add the printf support themselves. For example:
This should have been first stated and dealt with long before dealing with less common derivations.
Openly, the ESP8266 libraries are far better done then those from Arduino foundation itself, especially when you go into less basic projects, but do have a steeper learning curve…
Yeah… And did you bother to read the full article? At the end it says:.
Thank you for saving my day! As HeatPumper mentioned above, sprintf misses the terminating zero. This is easily fixed in your “Create a print() function”:
Just replace:
by:
Thanks for pointing that out, I took a look at my little function again and made some small changes to use
snprintf()instead of
sprintf()as
snprintf()will always terminate the string with
\0even when it is truncated, so it is safer, and there is no need to manually add
\0.
I just added this in \hardware\arduino\avr\platform.txt to make sprintf work with float:
# These can be overridden in platform.local.txt
compiler.c.extra_flags=
compiler.c.elf.extra_flags=-Wl,-u,vfprintf -lprintf_flt -lm | https://www.e-tinkers.com/2020/01/do-you-know-arduino-sprintf-and-floating-point/ | CC-MAIN-2021-49 | en | refinedweb |
Abstraction of the transformation source module. More...
#include <transform-base.hpp>
Abstraction of the transformation source module.
This module can only accept input data from constructor
Definition at line 288 of file transform-base.hpp.
Definition at line 154 of file transform-base.cpp.
Connect to an intermediate transformation module.
Definition at line 166 of file transform-base.cpp.
References ndn::security::transform::Upstream::appendChain().
Connect to the last transformation module.
This method will trigger the source to pump data into the transformation pipeline.
Definition at line 176 of file transform-base.cpp.
References ndn::security::transform::Upstream::appendChain(), and pump().
Pump all data into next transformation module.
Definition at line 160 of file transform-base.cpp.
Referenced by operator>>().
Get the source module index (should always be 0).
Definition at line 319 of file transform-base.hpp. | https://ndnsim.net/2.6/doxygen/classndn_1_1security_1_1transform_1_1Source.html | CC-MAIN-2021-49 | en | refinedweb |
Type conversion and type casting are the same in C#. It is converting one type of data to another type. In C#, type casting has two forms −
Implicit type conversion − These conversions are performed by C# in a type-safe manner. For example, are conversions from smaller to larger integral types and conversions from derived classes to base classes.
Explicit type conversion − These conversions are done explicitly by users using the pre-defined functions. Explicit conversions require a cast operator.
The following is an example showing how to cast double to int −
using System; namespace Demo { class Program { static void Main(string[] args) { double d = 9322.46; int i; // cast double to int i = (int)d; Console.WriteLine(i); Console.ReadKey(); } } }
9322 | https://www.tutorialspoint.com/What-is-the-difference-between-type-conversion-and-type-casting-in-Chash | CC-MAIN-2021-49 | en | refinedweb |
Project Workflows and Automation¶
Workflows are used to run multiple dependent steps in a graph (DAG) which execute project functions and access project data, parameters, secrets.
MLRun support running workflows on a
local or
kubeflow pipeline engine, the
local engine runs the workflow as a
local process which is simpler for debug and running simple/sequential tasks, the
kubeflow (“kfp”) engine runs as a task over the
cluster and support more advanced operations (conditions, branches, etc.). you can select the engine at runtime (kubeflow
specific directives like conditions and branches are not supported by the
local engine).
Workflowes are saved/registered in the project using the
set_workflow(),
workflows are executed using the
run() method or using the CLI command
mlrun project.
Please refer to the tutorials section for complete examples.
Composing workflows¶
Workflows are written as a python function which make use of function operations (run, build, deploy)
operations and can access project parameters, secrets and artifacts using
get_param(),
get_secret() and
get_artifact_uri().
For workflows to work in Kubeflow you need to add a decorator (
@dsl.pipeline(..)) as shown below.
Example workflow:
from kfp import dsl import mlrun from mlrun.model import HyperParamOptions funcs = {} DATASET = "iris_dataset" in_kfp = True @dsl.pipeline(name="Demo training pipeline", description="Shows how to use mlrun.") def newpipe(): project = mlrun.get_current_project() # build our ingestion function (container image) builder = mlrun.build_function("gen-iris") # run the ingestion function with the new image and params ingest = mlrun.run_function( "gen-iris", name="get-data", params={"format": "pq"}, outputs=[DATASET], ).after(builder) # train with hyper-paremeters train = mlrun.run_function( "train", name="train", params={"sample": -1, "label_column": project.get_param("label", "label"), "test_size": 0.10}, hyperparams={ "model_pkg_class": [ "sklearn.ensemble.RandomForestClassifier", "sklearn.linear_model.LogisticRegression", "sklearn.ensemble.AdaBoostClassifier", ] }, hyper_param_options=HyperParamOptions(selector="max.accuracy"), inputs={"dataset": ingest.outputs[DATASET]}, outputs=["model", "test_set"], ) print(train.outputs) # test and visualize our model mlrun.run_function( "test", name="test", params={"label_column": project.get_param("label", "label")}, inputs={ "models_path": train.outputs["model"], "test_set": train.outputs["test_set"], }, ) # deploy our model as a serverless function, we can pass a list of models to serve serving = mlrun.import_function("hub://v2_model_server", new_name="serving") deploy = mlrun.deploy_function( serving, models=[{"key": f"{DATASET}:v1", "model_path": train.outputs["model"]}], ) # test out new model server (via REST API calls), use imported function tester = mlrun.import_function("hub://v2_model_tester", new_name="live_tester") mlrun.run_function( tester, name="model-tester", params={"addr": deploy.outputs["endpoint"], "model": f"{DATASET}:v1"}, inputs={"table": train.outputs["test_set"]}, )
Saving workflows¶
If we want to use workflows as part of an automated flow we should save them and register them in the project.
We use the
set_workflow() method to register workflows, we specify a workflow name,
the path to the workflow file, and the function
handler name (or it will look for a handler named “pipeline”), and can
set the default
engine (local or kfp).
if we set the
embed flag to True, the workflow code will be embedded in the project file (can be used if we want to
describe the entire project using a single YAML file).
We can define the schema for workflow arguments (data type, default, doc, etc.) by setting the
args_schema with a list
of
EntrypointParam objects.
Example:
# define agrument for the workflow arg = mlrun.model.EntrypointParam( "model_pkg_class", type="str", default="sklearn.linear_model.LogisticRegression", doc="model package/algorithm", ) # register the workflow in the project and save the project project.set_workflow("main", "./myflow.py", handler="newpipe", args_schema=[arg]) project.save() # run the workflow project.run("main", arguments={"model_pkg_class": "sklearn.ensemble.RandomForestClassifier"})
Running workflows¶
We use the
run() method to execute workflows, we specify the workflow using its
name
or
workflow_path (path to the workflow file) or
workflow_handler (the workflow function handler).
We can specify the input
arguments for the workflow and can override the system default
artifact_path.
Workflows are asynchronous by default, we can set the
watch flag to True and the run operation will block until
completion and print out the workflow progress, alternatively you can use
.wait_for_completion() on the run object.
The default workflow engine is
kfp, we can override it by specifying the
engine in the
run() or
set_workflow() methods,
using the
local engine will execute the workflow state machine loaclly (its functions will still run as cluster jobs).
if we set the
local flag to True the workflow will use the
local engine AND the functions will will run as local process,
this mode is used for local debugging of workflows.
When running workflows from a git enabled context it first verifies that there are no uncommitted git changes
(to guarantee that workflows which load from git will not use old code versions), you can suppress that check by setting the
dirty flag to True.
Examples:
# simple run of workflow 'main' with arguments, block until it completes (watch=True) run = project.run("main", arguments={"param1": 6}, watch=True) # run workflow specified with a function handler (my_pipe) run = project.run(workflow_handler=my_pipe) # wait for pipeline completion run.wait_for_completion() # run workflow in local debug mode run = project.run(workflow_handler=my_pipe, local=True, arguments={"param1": 6}) | https://docs.mlrun.org/en/latest/projects/workflows.html | CC-MAIN-2021-49 | en | refinedweb |
domain
Codegen helping you define domain models
See all snapshots
domain appears in
domain-0.1.1.2@sha256:09687ec84c87c077ea3fc2e8f7f42d008105fe30ecd6d90bd093017e2fbd8270,3884
Module documentation for 0.1.1.2
About
Template Haskell codegen removing noise and boilerplate from domain models.
Problem
Imagine a real-life project, where you have to define the types for your problem domain: your domain model. How many types do you think there’ll be? A poll among Haskellers shows that highly likely more than 30. That is 30 places for you to derive or define instances, work around the records problem and the problem of conflicting constructor names. That is a lot of boilerplate and noise, distracting you from your actual goal of modeling the data structures or learning an existing model during maintenance. Also don’t forget about the boilerplate required to generate optics for your model to actually make it accessible.
Mission
In its approach to those problems this project sets the following goals:
- Let the domain model definition be focused on data and nothing else.
- Let it be readable and comfortably editable, avoiding syntactic noise.
- Separate its declaration from the problems of declaration of instances, accessor functions, optics and etc.
- Have the records problem solved.
- Have the problem of conflicting constructor names solved.
- Avoid boilerplate in all the above.
- Avoid complications of the build process.
Solution
This project introduces a clear boundary between the data model declaration and the rest of the code base. It introduces a YAML format designed specifically for the problem of defining types and relations between them and that only. We call it Domain Schema.
Schemas can be loaded at compile time and transformed into Haskell declarations using Template Haskell. Since it’s just Template Haskell, no extra build software is needed to use this library. It is a normal Haskell package.
Schema gets analysed allowing to generate all kinds of instances automatically using a set of prepackaged derivers. An API is provided for creation of custom derivers for extending the library or handling special cases.
Tutorial and Case in Point
We’ll show you how this whole thing works on an example of a model of a service address.
Schema
First we need to define a schema. For that we create the following YAML document:
# Service can be either located on the network or # by a socket file. # # Choice between two or more types can be encoded using # "sum" type composition, which you may also know as # "union" or "variant". That's what we use here. ServiceAddress: sum: network: NetworkAddress local: FilePath # Network address is a combination of transport protocol, # host and port. All those three things at once. # # "product" type composition lets us encode that. # You may also know it as "record" or "tuple". NetworkAddress: product: protocol: TransportProtocol host: Host port: Word16 # Transport protocol is either TCP or UDP. # We encode that using enumeration. TransportProtocol: enum: - tcp - udp # Host can be adressed by either an IP or its name, # so "sum" again. Host: sum: ip: Ip name: Text # IP can be either of version 4 or version 6. # We encode it as a sum over words of the accordingly required # amount of bits. Ip: sum: v4: Word32 v6: Word128 # Since the standard lib lacks a definition # of a 128-bit word, we define a custom one # as a product of two 64-bit words. Word128: product: part1: Word64 part2: Word64
As you can see in the specification above we’re not concerned with typeclass instances or problems of name disambiguation. We’re only concerned with data and relations that it has. This is what we mean by focus. It makes the experience of designing and maintaining a model distraction free.
Those three methods of defining types (product, sum, enum) are all that you need to define a model of any complexity. If you understand them, there’s nothing new to learn.
Codegen
Now, having that schema defined in a file at path
schemas/model.yaml,
we can load it in a Haskell module as follows:
{-# LANGUAGE TemplateHaskell, StandaloneDeriving, DeriveGeneric, DeriveDataTypeable, DeriveLift, FlexibleInstances, MultiParamTypeClasses, DataKinds, TypeFamilies #-} module Model where import Data.Text (Text) import Data.Word (Word16, Word32, Word64) import Domain declare (Just (False, True)) mempty =<< loadSchema "schemas/model.yaml"
And that will cause the compiler to generate the following declarations:
data ServiceAddress = NetworkServiceAddress !NetworkAddress | LocalServiceAddress !FilePath data NetworkAddress = NetworkAddress { networkAddressProtocol :: !TransportProtocol, networkAddressHost :: !Host, networkAddressPort :: !Word16 } data TransportProtocol = TcpTransportProtocol | UdpTransportProtocol data Host = IpHost !Ip | NameHost !Text data Ip = V4Ip !Word32 | V6Ip !Word128 data Word128 = Word128 { word128Part1 :: !Word64, word128Part2 :: !Word64 }
As you can see in the generated code the field names from the schema get translated to record fields or constructors depending on the type composition method.
In this example the record fields are prefixed with type names for disambiguation, but by modifying the options passed to the
declare function it is possible to remove the type name prefix or prepend with underscore, you can also avoid generating record fields altogether (to keep the value-level namespace clean).
The constructor names are also disambiguated by appending the type name to the label from schema. Thus we are introducing a consistent naming convention, while avoiding the boilerplate in the declaration of the model.
Instances
If we introduce the following change to our code:
-declare (Just (False, True)) mempty +declare (Just (False, True)) stdDeriver
We’ll get a ton of instances generated including the obvious
Show,
Eq and even
Hashable for all the declared types. We’ll also get some useful ones, which you wouldn’t derive otherwise.
deriving instance Show ServiceAddress deriving instance Eq ServiceAddress deriving instance Ord ServiceAddress deriving instance GHC.Generics.Generic ServiceAddress deriving instance Data.Data.Data ServiceAddress deriving instance base-4.14.1.0:Data.Typeable.Internal.Typeable ServiceAddress instance hashable-1.3.0.0:Data.Hashable.Class.Hashable ServiceAddress deriving instance template-haskell-2.16.0.0:Language.Haskell.TH.Syntax.Lift ServiceAddress instance GHC.Records.HasField "network" ServiceAddress (Maybe NetworkAddress) where GHC.Records.getField (NetworkServiceAddress a) = Just a GHC.Records.getField _ = Nothing instance GHC.Records.HasField "local" ServiceAddress (Maybe FilePath) where GHC.Records.getField (LocalServiceAddress a) = Just a GHC.Records.getField _ = Nothing instance (a ~ NetworkAddress) => GHC.OverloadedLabels.IsLabel "network" (a -> ServiceAddress) where GHC.OverloadedLabels.fromLabel = NetworkServiceAddress instance (a ~ FilePath) => GHC.OverloadedLabels.IsLabel "local" (a -> ServiceAddress) where GHC.OverloadedLabels.fromLabel = LocalServiceAddress instance (mapper ~ (NetworkAddress -> NetworkAddress)) => GHC.OverloadedLabels.IsLabel "network" (mapper -> ServiceAddress -> ServiceAddress) where GHC.OverloadedLabels.fromLabel = \ fn -> \ a -> case a of NetworkServiceAddress a -> NetworkServiceAddress (fn a) a -> a instance (mapper ~ (FilePath -> FilePath)) => GHC.OverloadedLabels.IsLabel "local" (mapper -> ServiceAddress -> ServiceAddress) where GHC.OverloadedLabels.fromLabel = \ fn -> \ a -> case a of LocalServiceAddress a -> LocalServiceAddress (fn a) a -> a instance (a ~ Maybe NetworkAddress) => GHC.OverloadedLabels.IsLabel "network" (ServiceAddress -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of NetworkServiceAddress a -> Just a _ -> Nothing instance (a ~ Maybe FilePath) => GHC.OverloadedLabels.IsLabel "local" (ServiceAddress -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of LocalServiceAddress a -> Just a _ -> Nothing deriving instance Show NetworkAddress deriving instance Eq NetworkAddress deriving instance Ord NetworkAddress deriving instance GHC.Generics.Generic NetworkAddress deriving instance Data.Data.Data NetworkAddress deriving instance base-4.14.1.0:Data.Typeable.Internal.Typeable NetworkAddress instance hashable-1.3.0.0:Data.Hashable.Class.Hashable NetworkAddress deriving instance template-haskell-2.16.0.0:Language.Haskell.TH.Syntax.Lift NetworkAddress instance GHC.Records.HasField "protocol" NetworkAddress TransportProtocol where GHC.Records.getField (NetworkAddress a _ _) = a instance GHC.Records.HasField "host" NetworkAddress Host where GHC.Records.getField (NetworkAddress _ a _) = a instance GHC.Records.HasField "port" NetworkAddress Word16 where GHC.Records.getField (NetworkAddress _ _ a) = a instance (mapper ~ (TransportProtocol -> TransportProtocol)) => GHC.OverloadedLabels.IsLabel "protocol" (mapper -> NetworkAddress -> NetworkAddress) where GHC.OverloadedLabels.fromLabel = \ fn (NetworkAddress a b c) -> ((NetworkAddress (fn a)) b) c instance (mapper ~ (Host -> Host)) => GHC.OverloadedLabels.IsLabel "host" (mapper -> NetworkAddress -> NetworkAddress) where GHC.OverloadedLabels.fromLabel = \ fn (NetworkAddress a b c) -> ((NetworkAddress a) (fn b)) c instance (mapper ~ (Word16 -> Word16)) => GHC.OverloadedLabels.IsLabel "port" (mapper -> NetworkAddress -> NetworkAddress) where GHC.OverloadedLabels.fromLabel = \ fn (NetworkAddress a b c) -> ((NetworkAddress a) b) (fn c) instance (a ~ TransportProtocol) => GHC.OverloadedLabels.IsLabel "protocol" (NetworkAddress -> a) where GHC.OverloadedLabels.fromLabel = \ (NetworkAddress a _ _) -> a instance (a ~ Host) => GHC.OverloadedLabels.IsLabel "host" (NetworkAddress -> a) where GHC.OverloadedLabels.fromLabel = \ (NetworkAddress _ b _) -> b instance (a ~ Word16) => GHC.OverloadedLabels.IsLabel "port" (NetworkAddress -> a) where GHC.OverloadedLabels.fromLabel = \ (NetworkAddress _ _ c) -> c deriving instance Enum TransportProtocol deriving instance Bounded TransportProtocol deriving instance Show TransportProtocol deriving instance Eq TransportProtocol deriving instance Ord TransportProtocol deriving instance GHC.Generics.Generic TransportProtocol deriving instance Data.Data.Data TransportProtocol deriving instance base-4.14.1.0:Data.Typeable.Internal.Typeable TransportProtocol instance hashable-1.3.0.0:Data.Hashable.Class.Hashable TransportProtocol deriving instance template-haskell-2.16.0.0:Language.Haskell.TH.Syntax.Lift TransportProtocol instance GHC.Records.HasField "tcp" TransportProtocol Bool where GHC.Records.getField TcpTransportProtocol = True GHC.Records.getField _ = False instance GHC.Records.HasField "udp" TransportProtocol Bool where GHC.Records.getField UdpTransportProtocol = True GHC.Records.getField _ = False instance GHC.OverloadedLabels.IsLabel "tcp" TransportProtocol where GHC.OverloadedLabels.fromLabel = TcpTransportProtocol instance GHC.OverloadedLabels.IsLabel "udp" TransportProtocol where GHC.OverloadedLabels.fromLabel = UdpTransportProtocol instance (a ~ Bool) => GHC.OverloadedLabels.IsLabel "tcp" (TransportProtocol -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of TcpTransportProtocol -> True _ -> False instance (a ~ Bool) => GHC.OverloadedLabels.IsLabel "udp" (TransportProtocol -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of UdpTransportProtocol -> True _ -> False deriving instance Show Host deriving instance Eq Host deriving instance Ord Host deriving instance GHC.Generics.Generic Host deriving instance Data.Data.Data Host deriving instance base-4.14.1.0:Data.Typeable.Internal.Typeable Host instance hashable-1.3.0.0:Data.Hashable.Class.Hashable Host deriving instance template-haskell-2.16.0.0:Language.Haskell.TH.Syntax.Lift Host instance GHC.Records.HasField "ip" Host (Maybe Ip) where GHC.Records.getField (IpHost a) = Just a GHC.Records.getField _ = Nothing instance GHC.Records.HasField "name" Host (Maybe Text) where GHC.Records.getField (NameHost a) = Just a GHC.Records.getField _ = Nothing instance (a ~ Ip) => GHC.OverloadedLabels.IsLabel "ip" (a -> Host) where GHC.OverloadedLabels.fromLabel = IpHost instance (a ~ Text) => GHC.OverloadedLabels.IsLabel "name" (a -> Host) where GHC.OverloadedLabels.fromLabel = NameHost instance (mapper ~ (Ip -> Ip)) => GHC.OverloadedLabels.IsLabel "ip" (mapper -> Host -> Host) where GHC.OverloadedLabels.fromLabel = \ fn -> \ a -> case a of IpHost a -> IpHost (fn a) a -> a instance (mapper ~ (Text -> Text)) => GHC.OverloadedLabels.IsLabel "name" (mapper -> Host -> Host) where GHC.OverloadedLabels.fromLabel = \ fn -> \ a -> case a of NameHost a -> NameHost (fn a) a -> a instance (a ~ Maybe Ip) => GHC.OverloadedLabels.IsLabel "ip" (Host -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of IpHost a -> Just a _ -> Nothing instance (a ~ Maybe Text) => GHC.OverloadedLabels.IsLabel "name" (Host -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of NameHost a -> Just a _ -> Nothing deriving instance Show Ip deriving instance Eq Ip deriving instance Ord Ip deriving instance GHC.Generics.Generic Ip deriving instance Data.Data.Data Ip deriving instance base-4.14.1.0:Data.Typeable.Internal.Typeable Ip instance hashable-1.3.0.0:Data.Hashable.Class.Hashable Ip deriving instance template-haskell-2.16.0.0:Language.Haskell.TH.Syntax.Lift Ip instance GHC.Records.HasField "v4" Ip (Maybe Word32) where GHC.Records.getField (V4Ip a) = Just a GHC.Records.getField _ = Nothing instance GHC.Records.HasField "v6" Ip (Maybe Word128) where GHC.Records.getField (V6Ip a) = Just a GHC.Records.getField _ = Nothing instance (a ~ Word32) => GHC.OverloadedLabels.IsLabel "v4" (a -> Ip) where GHC.OverloadedLabels.fromLabel = V4Ip instance (a ~ Word128) => GHC.OverloadedLabels.IsLabel "v6" (a -> Ip) where GHC.OverloadedLabels.fromLabel = V6Ip instance (mapper ~ (Word32 -> Word32)) => GHC.OverloadedLabels.IsLabel "v4" (mapper -> Ip -> Ip) where GHC.OverloadedLabels.fromLabel = \ fn -> \ a -> case a of V4Ip a -> V4Ip (fn a) a -> a instance (mapper ~ (Word128 -> Word128)) => GHC.OverloadedLabels.IsLabel "v6" (mapper -> Ip -> Ip) where GHC.OverloadedLabels.fromLabel = \ fn -> \ a -> case a of V6Ip a -> V6Ip (fn a) a -> a instance (a ~ Maybe Word32) => GHC.OverloadedLabels.IsLabel "v4" (Ip -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of V4Ip a -> Just a _ -> Nothing instance (a ~ Maybe Word128) => GHC.OverloadedLabels.IsLabel "v6" (Ip -> a) where GHC.OverloadedLabels.fromLabel = \ a -> case a of V6Ip a -> Just a _ -> Nothing deriving instance Show Word128 deriving instance Eq Word128 deriving instance Ord Word128 deriving instance GHC.Generics.Generic Word128 deriving instance Data.Data.Data Word128 deriving instance base-4.14.1.0:Data.Typeable.Internal.Typeable Word128 instance hashable-1.3.0.0:Data.Hashable.Class.Hashable Word128 deriving instance template-haskell-2.16.0.0:Language.Haskell.TH.Syntax.Lift Word128 instance GHC.Records.HasField "part1" Word128 Word64 where GHC.Records.getField (Word128 a _) = a instance GHC.Records.HasField "part2" Word128 Word64 where GHC.Records.getField (Word128 _ a) = a instance (mapper ~ (Word64 -> Word64)) => GHC.OverloadedLabels.IsLabel "part1" (mapper -> Word128 -> Word128) where GHC.OverloadedLabels.fromLabel = \ fn (Word128 a b) -> (Word128 (fn a)) b instance (mapper ~ (Word64 -> Word64)) => GHC.OverloadedLabels.IsLabel "part2" (mapper -> Word128 -> Word128) where GHC.OverloadedLabels.fromLabel = \ fn (Word128 a b) -> (Word128 a) (fn b) instance (a ~ Word64) => GHC.OverloadedLabels.IsLabel "part1" (Word128 -> a) where GHC.OverloadedLabels.fromLabel = \ (Word128 a _) -> a instance (a ~ Word64) => GHC.OverloadedLabels.IsLabel "part2" (Word128 -> a) where GHC.OverloadedLabels.fromLabel = \ (Word128 _ b) -> b
Labels
Among the generated instances you’ll find instances for the
IsLabel class. It is a class powering Haskell’s
OverloadedLabels extension. The instances we define for it let us reduce the boilerplate in the way we address our model. Here’s how.
We can access the members of records:
getNetworkAddressPort :: NetworkAddress -> Word16 getNetworkAddressPort = #port
Yep. Finally. Address your fields without crazy prefixes or dealing with disambiguation otherwise.
Labels will be unprefixed regardless of what you choose to do about record fields. You can also name them whatever you like. Literally, even
type and
data make up valid labels, and unless you choose to generate unprefixed record fields, you can freely use them.
We get accessors to the members of sums as well:
getHostIp :: Host -> Maybe Ip getHostIp = #ip
Yep. Sum types can have accessors if you look at them from a certain perspective.
Accessors to enums - why not?
isTransportProtocolTcp :: TransportProtocol -> Bool isTransportProtocolTcp = #tcp
We get shortcuts to enums:
tcpTransportProtocol :: TransportProtocol tcpTransportProtocol = #tcp
We can instantiate sums:
ipHost :: Ip -> Host ipHost = #ip
We can map over both record fields and sum variants:
mapNetworkAddressHost :: (Host -> Host) -> NetworkAddress -> NetworkAddress mapNetworkAddressHost = #host
mapHostIp :: (Ip -> Ip) -> Host -> Host mapHostIp = #ip
There’s a few things worth noticing here. Unfortunately the type inferencer will be unable to automatically detect the type of the mapping lambda parameter, so it needs to have an unambiguous type. This means that often times you’ll have to provide an explicit type for it. But there’s a solution.
There is a “domain-optics” library which provides an integration with the “optics” library. By including the derivers from it in the parameters to the
declare macro, you’ll be able to map as follows without type inference issues:
mapNetworkAddressHost :: (Host -> Host) -> NetworkAddress -> NetworkAddress mapNetworkAddressHost = over #host
You can read more about the “optics” library integration in the Optics section.
If we can map, then we can also set:
setNetworkAddressHost :: Host -> NetworkAddress -> NetworkAddress setNetworkAddressHost host = #host (const host)
Optics
Extensional “domain-optics” library provides integration with “optics”. By using the derivers from it we can get optics using labels as well.
Coming back to our example here’s all we’ll have to do to enable our model with optics:
{-# LANGUAGE TemplateHaskell, StandaloneDeriving, DeriveGeneric, DeriveDataTypeable, DeriveLift, FlexibleInstances, MultiParamTypeClasses, DataKinds, TypeFamilies, UndecidableInstances #-} module Model where import Data.Text (Text) import Data.Word (Word16, Word32, Word64) import Domain import DomainOptics declare (Just (False, True)) (stdDeriver <> labelOpticDeriver) =<< loadSchema "schemas/model.yaml"
Here are some of the optics that will become available to us:
networkAddressHostOptic :: Lens' NetworkAddress Host networkAddressHostOptic = #host
hostIpOptic :: Prism' Host Ip hostIpOptic = #ip
tcpTransportProtocolOptic :: Prism' TransportProtocol () tcpTransportProtocolOptic = #tcp
As you may have noticed, we avoid the “underscore-uppercase” naming convention for prisms. With labels there’s no longer any need for it.
We recommend using “optics” instead of direct
IsLabel instances, because functions like
view,
over,
set,
review make your intent clearer to the reader in many cases and in some cases provide better type inference. | https://www.stackage.org/package/domain | CC-MAIN-2021-49 | en | refinedweb |
For other versions, see the Versioned plugin docs.
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in Github. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Elasticsearch provides near real-time search and analytics for all types of data. The Elasticsearch output plugin can store both time series datasets (such as logs, events, and metrics) and non-time series data in Elasticsearch.
You can learn more about Elasticsearch on the website landing page or in the Elasticsearch documentation.
Compatibility Note
When connected to Elasticsearch 7.x, modern versions of this plugin
don’t use the document-type when inserting documents, unless the user
explicitly sets
document_type.
If you are using an earlier version of Logstash and wish to connect to Elasticsearch 7.x, first upgrade Logstash to version 6.8 to ensure it picks up changes to the Elasticsearch index template.
If you are using a custom
template,
ensure your template uses the
_doc document-type before
connecting to Elasticsearch 7.x.
You can run Elasticsearch on your own hardware or use our hosted Elasticsearch Service that is available on AWS, GCP, and Azure. Try the Elasticsearch Service for free.
This plugin will persist events to Elasticsearch in the shape produced by your pipeline, and cannot be used to re-shape the event structure into a shape that complies with ECS. To produce events that fully comply with ECS, you will need to populate ECS-defined fields throughout your pipeline definition.
However, the Elasticsearch Index Templates it manages can be configured to
be ECS-compatible by setting
ecs_compatibility.
By having an ECS-compatible template in place, we can ensure that Elasticsearch
is prepared to create and index fields in a way that is compatible with ECS,
and will correctly reject events with fields that conflict and cannot be coerced.
The Elasticsearch output plugin can store both time series datasets (such as logs, events, and metrics) and non-time series data in Elasticsearch.
The data stream options are recommended for indexing time series datasets (such as logs, metrics, and events) into Elasticsearch:
Example: Basic default configuration
output { elasticsearch { hosts => "hostname" data_stream => "true" } }
This example shows the minimal settings for processing data streams. Events
with
data_stream.*` fields are routed to the appropriate data streams. If the
fields are missing, routing defaults to
logs-generic-logstash.
Example: Customize data stream name
output { elasticsearch { hosts => "hostname" data_stream => "true" data_stream_type => "metrics" data_stream_dataset => "foo" data_stream_namespace => "bar" } }
You cannot use dynamic variable substitution when
ilm_enabled is
true and
when using
ilm_rollover_alias.
If you’re sending events to the same Elasticsearch cluster, but you’re targeting different indices you can:
- use different Elasticsearch outputs, each one with a different value for the
indexparameter
- use one Elasticsearch output and use the dynamic variable substitution for the
indexparameter
Each Elasticsearch output is a new client connected to the cluster:
- it has to initialize the client and connect to Elasticsearch (restart time is longer if you have more clients)
- it has an associated connection pool
In order to minimize the number of open connections to Elasticsearch, maximize the bulk size and reduce the number of "small" bulk requests (which could easily fill up the queue), it is usually more efficient to have a single Elasticsearch output.
Example:
output { elasticsearch { index => "%{[some_field][sub_field]}-%{+YYYY.MM.dd}" } }
What to do in case there is no field in the event containing the destination index prefix?
You can use the
mutate filter and conditionals to add a
[@metadata] field
to set the destination index for each event. The
[@metadata] fields will not
be sent to Elasticsearch.
Example:
filter { if [log_type] in [ "test", "staging" ] { mutate { add_field => { "[@metadata][target_index]" => "test-%{+YYYY.MM}" } } } else if [log_type] == "production" { mutate { add_field => { "[@metadata][target_index]" => "prod-%{+YYYY.MM.dd}" } } } else { mutate { add_field => { "[@metadata][target_index]" => "unknown-%{+YYYY}" } } } } output { elasticsearch { index => "%{[@metadata][target_index]}" } } (DLQ) for more information about processing events in the DLQ.
The Index Lifecycle Management feature requires plugin version
9.3.1 or higher. setting detects whether the Elasticsearch instance
supports ILM, and uses" } } to the Elasticsearch Bulk API as a single request. However, if a batch exceeds 20MB we break it up into multiple bulk requests. If a single document exceeds 20MB it compression, and handles compressed responses from Elasticsearch.
To enable request compression, use the
http_compression
setting on this plugin.
Authentication to a secure Elasticsearch cluster is possible using one of the
user/
cloud_auth or
api_key options.
Authorization to a secure Elasticsearch cluster requires
read permission at
index level and
monitoring permissions at cluster level. The
monitoring
permission at cluster level is necessary to perform periodic connectivity
checks.
This plugin supports the following configuration options plus the Common Options described later.
Also see Common Options for a list of options supported by all output plugins.
doc.
Authenticate using Elasticsearch API key. Note that this option also requires
enabling the
ssl option.
Format is
id:api_key where
id and
api_key are as returned by the
Elasticsearch Create API key API.
HTTP Path to perform the _bulk requests to this defaults to a concatenation of the path parameter and "_bulk"
The .cer or .pem file to validate the server’s certificate.
Cloud authentication string ("<username>:<password>" format) is an alternative
for the
user/
password pair.
For more details, check out the Logstash-to-Cloud documentation.
Cloud ID, from the Elastic Cloud web console. If set
hosts should not be used.
For more details, check out the Logstash-to-Cloud documentation.
- Value can be any of:
true,
falseand
auto
- Default is
falsein Logstash 7.x and
autostarting in Logstash 8.0.
Defines whether data will be indexed into an Elasticsearch data stream.
The other
data_stream_* settings will be used only if this setting is enabled.
Logstash handles the output as a data stream when the supplied configuration
is compatible with data streams and this value is set to
auto.
Automatically routes events by deriving the data stream name using specific event
fields with the
%{[data_stream][type]}-%{[data_stream][dataset]}-%{[data_stream][namespace]} format.
If enabled, the
data_stream.* event fields will take precedence over the
data_stream_type,
data_stream_dataset, and
data_stream_namespace settings,
but will fall back to them if any of the fields are missing from the event.
The data stream dataset used to construct the data stream at index time.
The data stream namespace used to construct the data stream at index time.
Automatically adds and syncs the
data_stream.* event fields if they are missing from the
event. This ensures that fields match the name of the data stream that is receiving events.
If existing
data_stream.* event fields do not match the data stream name
and
data_stream_auto_routing is disabled, the event fields will be
overwritten with a warning.
The data stream type used to construct the data stream at index time.
Currently, only
logs,
metrics and
synthetics are supported.
Enable
doc_as_upsert for update mode.
Create a new document with source if
document_id doesn’t exist in Elasticsearch.
The document ID for the index. Useful for overwriting existing entries in Elasticsearch with the same ID.
This option is deprecated due to the removal of types in Elasticsearch 6.0. It will be removed in the next major version of Logstash.
This value is ignored and has no effect for Elasticsearch clusters
8.x.
This sets the document type to write events to. Generally you should try to write only
similar events to the same type. String expansion
%{foo} works here.
If you don’t set a value for this option:
- for elasticsearch clusters 8.x: no value will be used;
- for elasticsearch clusters 7.x: the value of _doc will be used;
- for elasticsearch clusters 6.x: the value of doc will be used;
- for elasticsearch clusters 5.x and below: the event’s type field will be used, if the field is not present the value of doc will be used.
- Value type is string
Supported values are:
disabled: does not provide ECS-compatible templates
v1: provides defaults that are compatible with v1 of the Elastic Common Schema
Default value depends on which version of Logstash is running:
- When Logstash provides a
pipeline.ecs_compatibilitysetting, its value is used as the default
- Otherwise, the default value is
disabled.
Controls this plugin’s compatibility with the Elastic Common Schema (ECS), including the installation of ECS-compatible index templates. The value of this setting affects the default values of:).
Examples:
`"127.0.0.1"` `["127.0.0.1:9200","127.0.0.2:9200"]` `[""]` `[""]` `[""]` (If using a proxy on a subpath)
Exclude dedicated master nodes from the
hosts list to
prevent Logstash from sending bulk requests to the master nodes. This parameter
should reference only data or client nodes in Elasticsearch.
Any special characters present in the URLs here MUST be URL escaped! This means
# should be put in as
%23 for instance.
Enable gzip compression on requests.
This setting allows you to reduce this plugin’s outbound network traffic by compressing each bulk request to Elasticsearch.
This output plugin reads compressed responses from Elasticsearch regardless of the value of this setting.
- Value can be any of:
true,
false,
auto
- Default value is
auto
The default setting of
auto will automatically enable
Index Lifecycle Management,.
Updating the pattern will require the index template to be rewritten.
The pattern must finish with a dash and a number that will be automatically incremented when indices rollover.
The pattern is a 6-digit string padded by zeros, regardless of prior index name. Example: 000001. See Rollover path parameters API docs for details.
Modify this setting to use a custom Index Lifecycle Management policy, rather than the default. If this value is not set, the default policy will be automatically installed into Elasticsearch
If this setting is specified, the policy must already exist in Elasticsearch cluster.
- Value type is string
Default value depends on whether
ecs_compatibilityis enabled:
- ECS Compatibility disabled:
logstash
- ECS Compatibility enabled:
ecs-logstash.
ilm_rollover_alias does NOT support dynamic variable substitution as
index does.
- Value type is string
Default value depends on whether
ecs_compatibilityis enabled:
- ECS Compatibility disabled:
"logstash-%{+yyyy.MM.dd}"
- ECS Compatibility enabled:
"ecs-logstash-%{+yyyy.MM.dd}"}.
Logstash uses
Joda
formats for the index pattern from event timestamp. =>
"%{[@metadata][pipeline]}". The pipeline parameter won’t be set if the value
resolves to empty string (""). setting accepts only URI arguments to prevent leaking credentials.
An empty string is treated as if proxy was not set. This is useful when using
environment variables e.g.
proxy => '${LS_PROXY:}'..
A routing override to be applied to all processed events.
This can be dynamic using the
%{foo} syntax.
Set script name for scripted update mode
Example:
output { elasticsearch { script => "ctx._source.message = params.event.get('message')" } }
Set the language of the used script.
When using indexed (stored) scripts on Elasticsearch 6.
- Value type is string
Default value depends on whether
ecs_compatibilityis enabled:
- ECS Compatibility disabled:
logstash
- ECS Compatibility enabled:
ecs-logstash version to use for indexing. Use sprintf syntax like
%{my_version} to use
a field value here. See the
versioning support
blog for more information.
- Value can be any of:
internal,
external,
external_gt,
external_gte,
force
- There is no default value for this setting.
The version_type to use for indexing. See the versioning support blog and Version types in the Elasticsearch documentation." } }
Variable substitution in the
id field only supports environment variables
and does not support the use of values from the secret store. | https://www.elastic.co/guide/en/logstash/7.16/plugins-outputs-elasticsearch.html | CC-MAIN-2021-49 | en | refinedweb |
Multiprocessing in Python is a built-in package that allows the system to run multiple processes simultaneously. It will enable the breaking of applications into smaller threads that can run independently. The operating system can then allocate all these threads or processes to the processor to run them parallelly, thus improving the overall performance and efficiency.
Why Use Multiprocessing In Python?
Performing multiple operations for a single processor becomes challenging. As the number of processes keeps increasing, the processor will have to halt the current process and move to the next, to keep them going. Thus, it will have to interrupt each task, thereby hampering the performance.
You can think of it as an employee in an organization tasked to perform jobs in multiple departments. If the employee has to manage the sales, accounts, and even the backend, he will have to stop sales when he is into accounts and vice versa.
Suppose there are different employees, each to perform a specific task. It becomes simpler, right? That’s why multiprocessing in Python becomes essential. The smaller task threads act like different employees, making it easier to handle and manage various processes. A multiprocessing system can be represented as:
- A system with more than a single central processor
- A multi-core processor, i.e., a single computing unit with multiple independent core processing units
In multiprocessing, the system can divide and assign tasks to different processors.
What Is the Multiprocessing Module?
The Python multiprocessing module provides multiple classes that allow us to build parallel programs to implement multiprocessing in Python. It offers an easy-to-use API for dividing processes between many processors, thereby fully leveraging multiprocessing. It overcomes the limitations of Global Interpreter Lock (GIL) by using sub-processes instead of threads. The primary classes of the Python multiprocessing module are:
- Process
- Queue
- Lock
Let’s use an example to better understand the use of the multiprocessing module in Python.
Example - Using the Process Class to Implement Multiprocessing in Python
# importing Python multiprocessing module
import multiprocessing
def prnt_cu(n):
print("Cube: {}".format(n * n * n))
def prnt_squ(n):
print("Square: {}".format(n * n))
if __name__ == "__main__":
# creating multiple processes
proc1 = multiprocessing.Process(target=prnt_squ, args=(5, ))
proc2 = multiprocessing.Process(target=prnt_cu, args=(5, ))
# Initiating process 1
proc1.start()
# Initiating process 2
proc2.start()
# Waiting until proc1 finishes
proc1.join()
# Waiting until proc2 finishes
proc2.join()
# Processes finished
print("Both Processes Completed!")
Output
Now, it’s time to understand the above code and see how the multiprocessing module and process class help build parallel programs.
- You first used the “import multiprocessing” command to import the module.
- Next, you created the Process class objects: proc1 and proc2. The arguments passed in these objects were:
- After the object construction, you must use the start() method to start the processes.
- Lastly, you used the join() method to stop the current program’s execution until it executes the processes. Thus, the program will first run proc1 and proc2. It will then move back to the following statements of the running program.
What Are Pipes Used For In Multiprocessing In Python?
While using multiprocessing in Python, Pipes acts as the communication channel. Pipes are helpful when you want to initiate communication between multiple processes. They return two connection objects, one for each end of the Pipe, and use the send() & recv() methods to communicate. Let’s look at an example for a clear understanding. In the below code, you will use a Pipe to send some info from the child to the parent connection.
import multiprocessing
from multiprocessing import Process, Pipe
def exm_function(c):
c.send(['Hi! This is child info'])
c.close()
if __name__ == '__main__':
par_c, chi_c = Pipe()
mp1 = multiprocessing.Process(target=exm_function, args=(chi_c,))
mp1.start()
print (par_c.recv() )
mp1.join()
Output
What Are the Queues Used For In Multiprocessing In Python?
The Queue in Python is a data structure based on the FIFO (First-In-First-Out) concept. Like the Pipe, even a queue helps in communication between different processes in multiprocessing in Python. It provides the put() and get() methods to add and receive data from the queue. Here’s an example to show the use of queue for multiprocessing in Python. This code will create a function to check if a number is even or odd and insert it in the queue. You will then initiate the process and print the numbers.
import multiprocessing
def even_no(num, n):
for i in num:
if i % 2 == 0:
n.put(i)
if __name__ == "__main__":
n = multiprocessing.Queue()
p = multiprocessing.Process(target=even_no, args=(range(10), n))
p.start()
p.join()
while n:
print(n.get())
Output
What Are The Locks Used For In Multiprocessing In Python?
The lock is used for locking the processes while using multiprocessing in Python. With its acquire() and release() methods, you can lock and resume processes. Thus, it allows you to execute specific tasks based on priority while stopping the other processes. The below code uses the lock mechanism on an ATM-like system.
import multiprocessing
# Withdrawal function
def wthdrw(bal, lock):
for _ in range(10000):
lock.acquire()
bal.value = bal.value - 1
lock.release()
# Deposit function
def dpst(bal, lock):
for _ in range(10000):
lock.acquire()
bal.value = bal.value + 1
lock.release()
def transact():
# initial balance
bal = multiprocessing.Value('i', 100)
# creating lock object
lock = multiprocessing.Lock()
# creating processes
proc1 = multiprocessing.Process(target=wthdrw, args=(bal,lock))
proc2 = multiprocessing.Process(target=dpst, args=(bal,lock))
# starting processes
proc1.start()
proc2.start()
# waiting for processes to finish
proc1.join()
proc2.join()
# printing final balance
print("Final balance = {}".format(bal.value))
if __name__ == "__main__":
for _ in range(10):
# performing transaction process
transact()
Output
Looking forward to making a move to the programming field? Take up the Python Training Course and begin your career as a professional Python programmer
Conclusion
In this article, you learned about what is multiprocessing in Python and how to use it. The most practical use of multiprocessing is sharing CPU resources and ATM operations, as you have seen in the last example. Because of the ease it provides to manage multiple processes, the concept of multiprocessing in Python will undoubtedly get a lot of traction. It will be a wise move to get a firm understanding and hands-on practice.
If you are a newbie and this is too hard to digest, you can begin learning the basics first. You can refer to Simplilearn’s Python Tutorial for Beginners to get acquainted with the basic Python concepts. Once you are clear with the basics, you can opt for our Online Python Certification Course to excel in Python development.
Do you have any questions for us? Leave them in the comments section of this article. Our experts will get back to you on the same, ASAP.
Happy learning! | https://www.simplilearn.com/tutorials/python-tutorial/multiprocessing-in-python?source=frs_recommended_resource_clicked | CC-MAIN-2021-49 | en | refinedweb |
Product Information
Check out the improvements in SAP Fiori launchpad content administration and operations with SP01 of SAP Fiori front-end server 2020
Recently, Support Package 01 for the SAP Fiori front-end server 2020 has been released and it comes with various improvements to the launchpad administration tools, like the content manager and application manager, plus a new support tool and brand new task list to help you monitor your system health much more easily. In this blog, I would like to give you a glimpse into these new tools and features, so you can start using them right now.
1. Improvements in the SAP Fiori launchpad content manager
Let us take a look at the content manager enhancements first, as this is a widely used tools that has been enhanced with some frequently requested functionality.
To simplify the role administration and assignment of SAP Fiori launchpad content to roles, the Roles tab has been enhanced with various functionality:
- A new Copy button allows the content admin to copy roles (1) for example to create custom roles from SAP delivered roles quickly provided the user has the required permissions in PFCG.
- You can now add, remove and mass-remove catalogs, groups, and spaces to/from the selected role (2) directly in the content manager.
- There are now new views to display the assigned groups, spaces, and tile/target mapping combinations for the selected role (3). When clicking one of the four Show… buttons, the lower part of the screen will change to display either the catalogs, groups, space(s), or tile-target mapping combinations assigned to the selected role.
On the other side, there is also an additional button to display the usage in the role for a selected tile/target mapping combination on the Tiles/Target Mappings tab.
Besides that, there are also lots of new links within content manager that allow you to jump to other tools like the SAP Fiori launchpad application manager or the Manage Launchpad Spaces app much more easily. You can find these links in the lower part of the screen, e.g. on the roles tab, you can display all catalogs assigned to a role and then directly open one of those catalogs in the launchpad designer or application manager – or display the space assigned to a role and open it directly from here in the Manage Launchpad Spaces app.
Finally, you can launch the new SAP Fiori launchpad content aggregator tool from anywhere in SAP Fiori launchpad content manager by clicking Goto > Launchpad Content Aggregator.
2. The new Launchpad Content Aggregator
The launchpad content aggregator is a tool that helps analyzing SAP Fiori launchpad content. It allows administrators to create an aggregated view of all SAP Fiori launchpad content assigned to a set of roles and of the services required to run the applications.
You can access the launchpad content aggregator directly from the content manager via GoTo > Launchpad Content Aggregator. Then you just select the roles that you want to analyze and choose whether you want to see service information for OData and ICF services as well, then execute.
The tool will give you a list of all catalogs with their assigned tiles/target mappings and lots of additional properties like for example the Fiori ID, the target mapping parameters, or the App Type. All properties displayed in the launchpad content manager are also part of this list. If you select to show service information as well, for each tile and target mapping, the required ICF service URL or OData service name and namespace will be displayed with their activation status.
To facilitate further analysis of this large amount of data, it can easily be exported to an excel spreadsheet with a click on the Export button.
3. App Support Tool
- App-specific configuration issues
- General authorization errors
- Gateway errors from the SAP Gateway error logs
- General ABAP runtime errors
In addition, users can download logs and forward them to the administrator. Permissions to access the different pieces of error information can be assigned in a fine granular way to users and administrators. For more information, see blog post about the App Support for the SAP Fiori launchpad by Tobias Moessle.
4. Health check task lists
There is a new task list available to perform some checks related to SAP Fiori launchpad setup and operations. The task list /UI2/FLP_HEALTH_CHECKS provides administrators with the following information:
- Are the required OData and ICF services for the launchpad active?
- Are the system aliases consistently configured?
To learn more about the new tools and features, please also check the What’s new section in the documentation.
Hope that was useful! Stay tuned,
Sibylle
Thanks for sharing the information. | https://blogs.sap.com/2021/03/11/check-out-the-improvements-in-sap-fiori-launchpad-content-administration-and-operations-with-sp01-of-sap-fiori-front-end-server-2020/ | CC-MAIN-2021-49 | en | refinedweb |
The problem Destination City Leetcode Solution provides us with some relations between cities. The input is given as line separated pair of cities. Each line in input denotes a direct road from the starting point to the endpoint. It is given in the problem, that the cities do not form a circular route. It is also stated that the input has a destination city. A destination city is defined as a city that does not have any outgoing road. So as usual before diving deep into the solution, let’s take a look at a few examples.
paths = [["London","New York"],["New York","Lima"],["Lima","Sao Paulo"]]
Sao Paulo
Explanation: So if we go by the input, London has a direct road to New York. New York has a direct road to Lima. In the end, Lima has a direct road to Sao Paulo. So, the destination city must be Sao Paulo because it has no outgoing roads.
paths = [["A","Z"]]
Z
Explanation: We have a single direct road starting from A to Z. Thus Z is our destination city.
Approach to Destination City Leetcode Solution
The problem Destination City Leetcode Solution asked us to find the destination. The input has provided us with some direct roads among the cities. It is given that a destination city does not have an outgoing road. The problem can be easily solved using a hashmap. In the hashmap, we keep track of outgoing roads. We traverse the paths vector and increment the number of outgoing roads of the cities. Then afterward we check if there is any city among the paths vector, that does not have an outgoing road. We return that city as the answer.
Code for Destination City Leetcode Solution
C++ code
#include <bits/stdc++.h> using namespace std; string destCity(vector<vector<string>>& paths) { unordered_map<string, int> outdegree; for(auto x: paths){ outdegree[x[0]]++; } for(auto x: paths) if(outdegree[x[0]] == 0) return x[0]; else if(outdegree[x[1]] == 0) return x[1]; return paths[0][0]; } int main(){ vector<vector<string>> paths = {{"London","New York"},{"New York","Lima"},{"Lima","Sao Paulo"}}; string output = destCity(paths); cout<<output; }
Sao Paulo
Java code
import java.util.*; import java.lang.*; import java.io.*; class Main { public static String destCity(List<List<String>> paths) { HashMap<String, Integer> outdegree = new HashMap<String, Integer>(); for(List<String> x: paths) if(outdegree.containsKey(x.get(0))) outdegree.put(x.get(0), outdegree.get(x.get(1))+1); else outdegree.put(x.get(0), 1); for(List<String> x: paths) if(!outdegree.containsKey(x.get(0))) return x.get(0); else if(!outdegree.containsKey(x.get(1))) return x.get(1); return paths.get(0).get(0); } public static void main (String[] args) throws java.lang.Exception{ List<List<String>> paths = new ArrayList<List<String>>(); paths.add(new ArrayList(Arrays.asList("London","New York"))); paths.add(new ArrayList(Arrays.asList("New York","Lima"))); paths.add(new ArrayList(Arrays.asList("Lima","Sao Paulo"))); System.out.print(destCity(paths)); } }
Sao Paulo
Complexity Analysis
Time Complexity
O(N), since we used Hashmap, the time complexity is reduced to linear.
Space Complexity
O(N), space is required to store the number of outgoing roads in the hashmap. Thus the space complexity is also linear. | https://www.tutorialcup.com/leetcode-solutions/destination-city-leetcode-solution.htm | CC-MAIN-2021-49 | en | refinedweb |
DataSource.FetchAppointments Event
Allows you to load appointments only for the specified date range.
Namespace: DevExpress.Xpf.Scheduling
Assembly: DevExpress.Xpf.Scheduling.v21.2.dll
Declaration
public event FetchDataEventHandler FetchAppointments
Public Event FetchAppointments As FetchDataEventHandler
Event Data
The FetchAppointments event's data class is FetchDataEventArgs. The following properties provide information specific to this event:
The event data class exposes the following methods:
Remarks
Specify AppointmentMappings.QueryStart and AppointmentMappings.QueryEnd mappings to handle the FetchAppointments event.
These mappings allow you to calculate the correct interval that is used in a SELECT query when you handle the FetchAppointments event. The use of the AppointmentMappings.Start and AppointmentMappings.End properties is not recommended in this scenario because such an interval may not include appointment patterns and the corresponding appointment exceptions.
The example below illustrates how to fetch appointments from a DbContext source. The FetchMode property is set to Bound.
public class SchedulingDataContext : DbContext { public SchedulingDataContext() : base(CreateConnection(), true) { } static DbConnection CreateConnection() { //... } public DbSet<AppointmentEntity> AppointmentEntities { get; set; } //... } private void dataSource_FetchAppointments(object sender, DevExpress.Xpf.Scheduling.FetchDataEventArgs e) { using (var dbContext = new SchedulingDataContext()) { e.Result = dbContext.AppointmentEntities .Where(x => x.QueryStart <= e.Interval.End && x.QueryEnd >= e.Interval.Start) .ToArray(); } }
The FetchRange property specifies the time interval for which to fetch the appointments. FetchAppointments loads items for the SchedulerControl.VisibleIntervals extended up to the FetchRange.
Refer to the Load Data on Demand topic for more information. | https://docs.devexpress.com/WPF/DevExpress.Xpf.Scheduling.DataSource.FetchAppointments | CC-MAIN-2021-49 | en | refinedweb |
#include <mysql/components/service.h>
Go to the source code of this file.
A handle type for a iterator to a Component.
A handle type for a iterator to metadata of some Component.
Service for listing all metadata for a Component specified by the iterator.
Service to query specified metadata key directly for the specified Component by iterator to it.
Service for listing all Components by iterator.
Service for providing Components from a specified scheme of URN.
All scheme loading Services are the same although they have separate names (aliased to the main type) to allow a single component implement several scheme loaders, to not break the recommendation to keep implementation names the same as the component name, and to be able to create wrappers and other solutions that require to have multiple implementations of a single type.
Service for managing the list of loaded Components. | https://dev.mysql.com/doc/dev/mysql-server/latest/include_2mysql_2components_2services_2dynamic__loader_8h.html | CC-MAIN-2021-49 | en | refinedweb |
SQL.Open only creates the DB object, but dies not open any connections to the database. If you want to test your connections you have to execute a query to force opening a connection. The common way for this is to call Ping() on your DB object.
See and
Quoting from the doc of
sql.Open():
Open may just validate its arguments without creating a connection to the database. To verify that the data source name is valid, call Ping.
As stated,
Open() may not open any physical connection to the database server, but it will validate its arguments. That being said if arguments are valid, it may return
nil error even if the database server is not reachable, or even if the host denoted by
dataSourceName does not exist.
To answer your other question:
What is the point of check for errors after this function if it does not return errors?
You have to check returned errors because it can return errors. For example if the specified
driverName is invalid, a non-nil error will be returned (see below).
To test if the database server is reachable, use
DB.Ping(). But you can only use this if the returned error is
nil, else the returned
DB might also be
nil (and thus calling the
Ping()method on it may result in run-time panic):
if db, err := sql.Open("nonexistingdriver", "somesource"); err != nil { fmt.Println("Error creating DB:", err) fmt.Println("To verify, db is:", db) } else { err = db.Ping() if err != nil { fmt.Println("db.Ping failed:", err) } }
Output (try it on the Go Playground):
Error creating DB: sql: unknown driver "nonexistingdriver" (forgotten import?) To verify, db is: <nil>
sql.open("postgres", "postgres://postgres:postgres/xxxx")连接数据库出错的时候,也不会报错, 很奇怪,那这种错误是怎么处理的呢?
package main import ( "database/sql" "fmt" _ "github.com/lib/pq" "log" ) var db *sql.DB func main() { defer func() { fmt.Println(recover()) }() var ss string var err error // var err error if db != nil { db.Close() } else { db, err = sql.Open("postgres", "postgres://postgres:postgres@127.0.0.1/xinyi?sslmode=disable") if err != nil { log.Println("Can't connect to postgresql database") } else { err = db.Ping() if err != nil { fmt.Println("db.Ping failed:", err) } } err = db.QueryRow("select value from configs where key =$1", "line1_batch").Scan(&ss) if err != nil { log.Println("query error") } fmt.Println(ss) } }
-----------------------------------------------------------------------------------------------------
SQL Drivers
Go’s standard library was not built to include any specific database drivers. Here is a list of available third party SQL drivers .
Setup
First we will need to import the packages that our program will use.
import (
“database/sql”
_ “github.com/lib/pq”
)
Here, we import the “database/sql” library which provides a generic interface for working with SQL databases. The second import, _”github.com/lib/pq”, is the actual postgresql driver. The underscore before the library means that we import pq without side effects. Basically, it means Go will only import the library for its initialization. For pq, the initialization registers pq as a driver for the SQL interface.
Open
Next we will need to open the database. It is important to note that calling “Open” does not open a connection to the database. The return from “Open” is a DB type and an error. The DB type represents a pool of connections which the sql package manages for you.
db, err := sql.Open(“postgres”,”user=Arnold dbname=TotalRecall sslmode=disable”)
“Open” returns an error which validates the arguments of a database open
if err != nil {
log.Fatal(“Error: The data source arguments are not valid”)
}
Ping
Since the error returned from “Open” does not check if the datasource is valid calling Ping on the database is required
err = db.Ping()
if err != nil {
log.Fatal(“Error: Could not establish a connection with the database”)
}
Prepare
Once the DB has been set up, we can start safely preparing query statements. “Prepare” does not execute the statement.
queryStmt, err := db.Prepare(“SELECT name FROM users WHERE id=$1”)
if err != nil {
log.Fatal(err)
}
QueryRow
We can now “QueryRow” off of the prepared statement and store the returned row’s first column into the “name string”. “QueryRow” only queries for one row.
var name string
err = queryStmt.QueryRow(15).Scan(&name)
In addition, a common error check is for “No Rows”. Some programs handle “No Rows” differently from other scanning errors. Errors like this are specific to the library, not Go in general.
if err == sql.ErrNoRows {
log.Fatal(“No Results Found”)
}
if err != nil {
log.Fatal(err)
}
You can also skip explicitly preparing your Query statements.
var lastName string
err = db.QueryRow(“SELECT last_name FROM users WHERE id=$1”, 15).Scan(&lastName)
if err == sql.ErrNoRows {
log.Fatal(“No Results Found”)
}
if err != nil {
log.Fatal(err)
}
Query
We can also handle a Query that returns multiple rows and stores the result into a “names” slice. In the code below you will see “rows.Next”, which moves the cursor to the next result row. If there is no next row or error preparing the next row, a false will be returned.
var names []string
rows, err := queryStmt.Query(15)
defer rows.Close()
for rows.Next() {
var name string
if err := rows.Scan(&name); err != nil {
log.Fatal(err)
}
names = append(names, name)
}
This next check is for any errors encountered during the iteration.
err = rows.Err()
if err != nil {
log.Fatal(err)
}
Conclusion
Golang’s standard sql package is extremely simple, yet powerful. This post covers the basics of the sql package. If you would like to learn more, visit the official docs at:. Feel free to leave any comments or questions.
有疑问加站长微信联系(非本文作者)
感谢作者:oxspirt
查看原文:golang 查询数据库操作 | https://studygolang.com/articles/10166 | CC-MAIN-2021-49 | en | refinedweb |
boost/beast/core/buffers_prefix.hpp
// // Copyright (c) 2016-2019 Vinnie Falco (vinnie dot falco at gmail dot com) // // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at) // // Official repository: // #ifndef BOOST_BEAST_BUFFERS_PREFIX_HPP #define BOOST_BEAST_BUFFERS_PREFIX_HPP #include <boost/beast/core/detail/config.hpp> #include <boost/beast/core/buffer_traits.hpp> #include <boost/optional/optional.hpp> // for in_place_init_t #include <algorithm> #include <cstdint> #include <type_traits> #if BOOST_WORKAROUND(BOOST_MSVC, < 1910) #include <boost/type_traits/is_convertible.hpp> #endif namespace boost { namespace beast { /** A buffer sequence adaptor that shortens the sequence size. The class adapts a buffer sequence to efficiently represent a shorter subset of the original list of buffers starting with the first byte of the original sequence. @tparam BufferSequence The buffer sequence to adapt. */ template<class BufferSequence> class buffers_prefix_view { using iter_type = buffers_iterator_type<BufferSequence>; BufferSequence bs_; std::size_t size_ = 0; std::size_t remain_ = 0; iter_type end_{}; void setup(std::size_t size); buffers_prefix_view( buffers_prefix_view const& other, std::size_t dist); public: /** The type for each element in the list of buffers. If the type `BufferSequence` meets the requirements of <em>MutableBufferSequence</em>, then `value_type` is `net::mutable_buffer`. Otherwise, `value_type` is `net::const_buffer`. @see buffers_type */ #if BOOST_BEAST_DOXYGEN using value_type = __see_below__; #elif BOOST_WORKAROUND(BOOST_MSVC, < 1910) using value_type = typename std::conditional< boost::is_convertible<typename std::iterator_traits<iter_type>::value_type, net::mutable_buffer>::value, net::mutable_buffer, net::const_buffer>::type; #else using value_type = buffers_type<BufferSequence>; #endif #if BOOST_BEAST_DOXYGEN /// A bidirectional iterator type that may be used to read elements. using const_iterator = __implementation_defined__; #else class const_iterator; #endif /// Copy Constructor buffers_prefix_view(buffers_prefix_view const&); /// Copy Assignment buffers_prefix_view& operator=(buffers_prefix_view const&); /** Construct a buffer sequence prefix. @param size The maximum number of bytes in the prefix. If this is larger than the size of passed buffers, the resulting sequence will represent the entire input sequence. @param buffers The buffer sequence to adapt. A copy of the sequence will be made, but ownership of the underlying memory is not transferred. The copy is maintained for the lifetime of the view. */ buffers_prefix_view( std::size_t size, BufferSequence const& buffers); /** Construct a buffer sequence prefix in-place. @param size The maximum number of bytes in the prefix. If this is larger than the size of passed buffers, the resulting sequence will represent the entire input sequence. @param args Arguments forwarded to the contained buffer's constructor. */ template<class... Args> buffers_prefix_view( std::size_t size, boost::in_place_init_t, Args&&... args); /// Returns an iterator to the first buffer in the sequence const_iterator begin() const; /// Returns an iterator to one past the last buffer in the sequence const_iterator end() const; #if ! BOOST_BEAST_DOXYGEN std::size_t buffer_bytes_impl() const noexcept { return size_; } #endif }; //------------------------------------------------------------------------------ /** Returns a prefix of a constant or mutable buffer sequence. The returned buffer sequence points to the same memory as the passed buffer sequence, but with a size that is equal to or smaller. No memory allocations are performed; the resulting sequence is calculated as a lazy range. @param size The maximum size of the returned buffer sequence in bytes. If this is greater than or equal to the size of the passed buffer sequence, the result will have the same size as the original buffer sequence. @param buffers An object whose type meets the requirements of <em>BufferSequence</em>. The returned value will maintain a copy of the passed buffers for its lifetime; however, ownership of the underlying memory is not transferred. @return A constant buffer sequence that represents the prefix of the original buffer sequence. If the original buffer sequence also meets the requirements of <em>MutableBufferSequence</em>, then the returned value will also be a mutable buffer sequence. */ template<class BufferSequence> buffers_prefix_view<BufferSequence> buffers_prefix( std::size_t size, BufferSequence const& buffers) { static_assert( net::is_const_buffer_sequence<BufferSequence>::value, "BufferSequence type requirements not met"); return buffers_prefix_view<BufferSequence>(size, buffers); } /** Returns the first buffer in a buffer sequence This returns the first buffer in the buffer sequence. If the buffer sequence is an empty range, the returned buffer will have a zero buffer size. @param buffers The buffer sequence. If the sequence is mutable, the returned buffer sequence will also be mutable. Otherwise, the returned buffer sequence will be constant. */ template<class BufferSequence> buffers_type<BufferSequence> buffers_front(BufferSequence const& buffers) { auto const first = net::buffer_sequence_begin(buffers); if(first == net::buffer_sequence_end(buffers)) return {}; return *first; } } // beast } // boost #include <boost/beast/core/impl/buffers_prefix.hpp> #endif | https://www.boost.org/doc/libs/develop/boost/beast/core/buffers_prefix.hpp | CC-MAIN-2021-49 | en | refinedweb |
Pushing the JNI Boundaries: Java Meets Assembly
Pushing the JNI Boundaries: Java Meets Assembly
This lesson in assembly and Java will teach you how to use the Java Native Interface to work directly with an assembler.
Join the DZone community and get the full member experience.Join For Free
The Java Native Interface (JNI) is used by developers to call native code compiled from other languages. Most online resources, including the Javadocs, showcase examples based on either C or C++. Nevertheless, is it possible to go a level lower and call code compiled from assembly without any intermediary C or C++ layer?
This article will walk you through a simple example. No prior knowledge of the JNI is necessary and only basic understanding of assembly fundamentals will be enough for you to survive this journey. The original source code is available on GitHub alongside a minimal build script.
Example Outline
Consider the following Java class:
public class JNIArraySum { public static native long computeNativeArraySum(int[] array, int arrayLength); }
The objective is to write an assembly implementation of this native method declaration that sums all elements within an integer array and returns the result as a long. An equivalent plain Java implementation would resemble the following:
public long computeArraySum(int[] array) { long sum = 0L; for (int value : array) sum += value; return sum; }
Naming Conventions
When looking for a method within a native library, the JVM follows a well-defined naming convention, described in the documentation. In our case, we need to concatenate the prefix
Java, the class name, and the method name, all separated by underscores:
Java_JNIArraySum_computeNativeArraySum.
Let's slowly warm up and start writing a bit of assembly! This mangled method name has to be visible from other translation units, which can be achieved by declaring a symbol as follows:
global Java_JNIArraySum_computeNativeArraySum
The above uses NASM/Yasm syntax, but other assemblers may rely on different keywords, for instance
public in FASM. On macOS, prepend an additional underscore to match the operating system's calling convention:
global _Java_JNIArraySum_computeNativeArraySum
Arguments Passed by Java
The JVM passes several arguments to the native function when called:
- A pointer to
JNIEnv, which we will come back to later.
- A reference to the calling Java class or object.
- The parameters defined in the Java method's declaration (
int[] arrayand
int arrayLength).
Depending on the targeted hardware architecture, these arguments can be held in registers, on the stack or even in other memory structures. In the x84-64 world, the first argument (
JNIEnv) is put in the rdi register, the second one (calling the Java object) is put in rsi, the third one (
int[] array) is put in rdx, and finally, the fourth one (
int arrayLength) is put in rcx.
With this in mind, we can start writing the
Java_JNIArraySum_computeNativeArraySum function:
Java_JNIArraySum_computeNativeArraySum: push rdx ; save Java array pointer for later use push rdi ; save JNIEnv pointer for later use push rcx ; save array length for later use
Nothing fancy happening here: Some of the parameters are simply pushed onto the stack for later use. Don't forget to prefix the function name with an extra underscore if you're on macOS.
Data Type Mappings
Java data types need to be mapped to native data types as these may not have the same representation. The JNI provides a number of different functions to do so, all listed in the docs. The example at hand operates on a Java integer array that can be mapped to a native one using the
GetIntArrayElements function. Once all necessary computations are performed on the native array, the JVM must be informed that those resources are no longer needed. This can be done with the
ReleaseIntArrayElements function, which will free any memory buffers that were used for the native data type mapping.
These JNI functions seem promising, but how does one call them from assembly?
Calling JNI Functions
Remember the
JNIEnv pointer passed in the rdi register? This pointer is itself a pointer to the JNI function table, which contains pointers to the individual interface functions. Lost with all these pointers? Things can be summed up thanks to this diagram:
Let's follow the first arrow using assembly and store the table pointer in the rax register:
mov rax, [rdi] ; get location of JNI function table
The index of each JNI function can be found in the documentation. To map an index to an actual position in memory, multiplying the index by the size of a pointer in the targeted architecture and adding the resulting offset to the address of the table will do the trick. In x84-64, each pointer is 8 bytes (64 bits). Therefore, to call
GetIntArrayElements with index 187:
mov rsi, rdx ; set array parameter for GetIntArrayElements call [rax + 187 * 8]
The first instruction copies the Java integer array pointer to the register used to store the first argument of function calls, rsi. The second instruction maps this Java integer array to a native array and puts the resulting memory location in the rax register.
Summing Elements
We now have all the data structures required to compute the sum. The following code snippet loops through the array using rax as the current address, rcx as the upper bound and rbx as the accumulator for the sum:
pop rcx ; retrieve array length lea rcx, [rax + 4 * rcx] ; compute loop end address (after last array element) mov r8, rax ; copy native array pointer for later use xor rbx, rbx ; initialise sum accumulator add_element: movsx r9, dword [rax] ; get current element add rbx, r9 ; add to sum add rax, 4 ; move array pointer to next element cmp rax, rcx ; has all array been processed? jne add_element
Note that the number 4 above corresponds to the number of bytes per integer.
The native array can now be released by calling the
ReleaseIntArrayElements JNI function with index 195 :
pop rdi ; retrieve JNIEnv pop rsi ; retrieve Java array pointer push rbx ; store sum result mov rax, [rdi] ; get location of JNI function table mov rdx, r8 ; set elems parameter for ReleaseIntArrayElements call [rax + 195 * 8]
To finish the computation, in addition to the final
ret instruction, the result must be stored in rax:
pop rax ; retrieve sum result ret
Compiling and Running
We're done with all the assembly code! Feel free to pull down the full source if you've missed something. To compile it using NASM and targeting a Linux x86-64 system, a command similar to the following can be run:
nasm -f elf64 -o ArraySum.o ArraySum.asm
To compile with Yasm, invoke
yasm instead of
nasm. On macOS, replace
elf64 by
macho64.
The produced object files must then be linked, for instance using good old GCC or Clang:
gcc -shared -z noexecstack -o libArraySum.so ArraySum.o clang -shared -o libArraySum.so ArraySum.o
Let's add a bit of sugar to the initial Java code in order to wire things together and print the result:
import java.io.File; public class JNIArraySum { private static final int[] ARRAY_TO_SUM = { 2, 41, 92, 9, 52, 27, 20, 0, 22, 35, 3, 57, 33, 4, 40, 44, 59, 31, 71, 5 }; public static void main(String[] args) { File file = new File("libArraySum.so"); System.load(file.getAbsolutePath()); long sum = computeNativeArraySum(ARRAY_TO_SUM, ARRAY_TO_SUM.length); System.out.println("The result of the sum is: " + sum); // expected result: 647 } public static native long computeNativeArraySum(int[] array, int arrayLength); }
Finally to compile the Java code and run it:
javac JNIArraySum.java java JNIArraySum
Hurray, the program prints the expected result when run!
Conclusive Notes
Hopefully, you will now be able to write Java code that can interoperate with low-level assembly. Obviously, not many people in their right mind would want to sum an array this way, especially without any vectorization, but beyond its educational value, this example can be a basis for further experimentation! These techniques can also easily be extended to match different assembler or operating system requirements. Long live assembly and Java!
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/pushing-the-jni-boundaries-java-meets-assembly?utm_source=securitydailynews.com | CC-MAIN-2020-34 | en | refinedweb |
22-03-2017
Hi,
Run Modes is the most interesting feature in AEM. This allows you to tune your AEM instance for a specific purpose; for e.g., author/publish, QA, development, intranet or others.
Why Run Modes?
Uniquely identify an environment and instances
Unique configurations based on environment
OSGI Component Creation for a specific environment
Bundle Creation for a specific environment
There are two types of run modes:
Secondary Run Modes are:
How to check the run mode of a running AEM instances?
Go to Felix Console.
Go to Status tab in Navigation and click on sling settings option.
Here you can see the Run Modes
You can directly go to
USE CASE OF RUN MODE IN THE PROJECT
Problem Statement:.
package com.aem.sgaem.project.services; import org.apache.felix.scr.annotations.Activate; import org.apache.felix.scr.annotations.Component; import org.apache.felix.scr.annotations.ConfigurationPolicy; import org.apache.felix.scr.annotations.Service; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @Component(label = "SGAEM - OSGi Configuration for Publish Run Mode", immediate = true, policy = ConfigurationPolicy.REQUIRE) @Service(ComponentPublish.class) public class ComponentPublish { private static final Logger LOGGER = LoggerFactory.getLogger(ComponentPublish.class); @Activate protected void activate() { LOGGER.debug("Run Modes Activated"); } }
��.
22-03-2017
Hi,
This is good information for the AEM platform, so I'm moving the post to the platform topic.
This topic is specific to the Communities capability.
- JK
05-04-2017
Good one Saurabh to put all together related to Run modes at one shot.
06-04-2017
Thanks a lot for sharing the information with the community.
Looking foreword to many more such content here.
~kautuk
Thanks for sharing the information
07-04-2017
Thanks.
05-09-2017
hi Saurabh,
Thanks for this knowledgeable article can you please add how to read the properties from sling:osgiConfig nodes. I have tried it using the help of ConfigurationAdmin is this only way to read the properties or do we have some other way as well.
Thanks
Sahil Garg
04-01-2018
Nice one but it would be wise to see how you could address cloud configuration with run mode such like S&P or facebook or google etc. | https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/run-modes-with-use-cases-in-aem-6-2/m-p/228169 | CC-MAIN-2020-34 | en | refinedweb |
How To Use a Profiler To Get Better Performance
The site has had many articles about improving the performance of your app, but never discussed the basic methodology on which all optimizations should be based. Today’s article will go over a scientific approach to optimizing that makes use of a tool known as a profiler and demonstrate using an AS3 application just why it’s so important to usage such a tool.
A profiler is a tool that gathers statistics about the performance cost of each function in your app and presents it to you in a useful way. Usually, you’ll get a long list of functions with the top function taking the most time to complete and the bottom function taking the least. You see at a glance which functions are worth your time to optimize and which are not. This information is often surprising, even to programmers with many years of experience optimizing for performance. As an experiment, take a look at the following simple AS3 app and see if you can guess the performance problem.
package { import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.events.Event; import flash.text.TextField; import flash.text.TextFieldAutoSize; import flash.utils.getTimer; public class ProfileMe extends Sprite { private static const SIZE:int = 5000; private var logger:TextField = new TextField(); private var vec:Vector.<Number> = new Vector.<Number>(SIZE); public function ProfileMe() { addEventListener(Event.ENTER_FRAME, onEnterFrame); stage.align = StageAlign.TOP_LEFT; stage.scaleMode = StageScaleMode.NO_SCALE; logger.text = "Running test..."; logger.y = 100; logger.autoSize = TextFieldAutoSize.LEFT; addChild(logger); } private function onEnterFrame(ev:Event): void { logger.text = ""; var beforeTime:int; var afterTime:int; var totalTime:int; row("Operation", "Time"); beforeTime = getTimer(); buildVector(); afterTime = getTimer(); totalTime += afterTime - beforeTime; row("buildVector", (afterTime-beforeTime)); beforeTime = getTimer(); vec.sort(vecCompare); afterTime = getTimer(); totalTime += afterTime - beforeTime; row("sort", (afterTime-beforeTime)); row("total", totalTime); } private function buildVector(): void { var SIZE:int = ProfileMe.SIZE; var vec:Vector.<Number> = this.vec; for (var i:int; i < SIZE; ++i) { vec[i] = Math.abs(i) * Math.ceil(i) * Math.cos(i) * Math.exp(i) * Math.floor(i) * Math.round(i) * Math.sin(i) * Math.sqrt(i); } } private function vecCompare(a:Number, b:Number): int { if (a < b) { return -1; } else if (a > b) { return 1; } return 0; } private function row(...cols): void { logger.appendText(cols.join(",")+"\n"); } } }
In step one, the app builds a
Vector of
Number out of a lot of
Math calls. In step two, the app calls
Vector.sort to sort the list. I ran this test on the following environment:
- Flex SDK (MXMLC) 4.5.1.21328, compiling in release mode (no debugging or verbose stack traces)
- Release version of Flash Player 11.1.102.63
- 2.4 Ghz Intel Core i5
- Mac OS X 10.7.3
And got these results
In a debug version of Flash Player, which is required to run the profiler, I got:
So clearly the
Math calls are faster than the
Vector sorting. In this simple app it was easy to add
getTimer calls around the only two functions. But what if your app consists of thousands or tens of thousands of lines of code? Clearly, it’s impractical to add so many
getTimer calls, even if you limit yourself to what you guess are the expensive portions of your app.
Enter the profiler. There are many available for AS3, usually as part of an IDE like Flash Builder, FlashDevelop, or FDT. Instead, we’ll be using TheMiner (formerly FlashPreloadProfiler) which is built in pure AS3 code rather than as an external tool. To set it up, let’s add a few lines of code to the above app:
DEBUG::profile { if (Capabilities.isDebugger) { addChild(new TheMiner()); } }
DEBUG::profile is simply a
Boolean compile-time constant that lets us turn off the profiler with a compiler setting. Even if it’s enabled, it requires a debug version of the Flash Player to run, so we don’t try to run if
Capabilities.isDebugger is
false.
Next, we simply download the TheMiner SWC and add it to the application. If you’re compiling with the command line tool MXMLC or COMPC, your new command will look like this:
mxmlc --library-path+=TheMiner_en_v1_3_10.swc ProfileMe.as
Now when we run the app we see a UI for the profiler at the top:
Clicking on the “Performance Profiler” button, we see:
Here we immediately see the source of the problem in top listed function:
Notice how the sorting functions (the first two) dwarf the building functions (the second two). Together, they’re taking over 98% of the total run time! It would be a waste of our time to worry about the building functions, so let’s optimize the sorting ones. To do that, we’ll use skyboy‘s fastSort function instead of plain old
Vector.sort. It’s a simple one line change from:
vec.sort(vecCompare);
To:
fastSort(vec, vecCompare);
With this in place, I now get these results in a release player:
And in a debug player:
So in release we’ve optimized the total application from 79 milliseconds to 24, nearly a 3x improvement. If we had spent our time optimizing out all of the
Math calls with something like a lookup table, we could have only possibly gotten a 1 millisecond savings, which would be about 1% faster.
In conclusion, a profiler is definitely a tool that you want to use while performance tuning your app. It helps you quickly and easily identify the performance problems and, perhaps even more importantly, the performance problems you don’t have. Don’t waste time optimizing (and often uglifying) your code if you don’t have to. Instead, try out a profiler like TheMiner and speed up your app without taking shots in the dark.
Questions? Comments? Spot a bug or typo? Post a comment!
#1 by jpauclair on March 12th, 2012 · | Quote
Wow… awesome!
#2 by Simon on March 12th, 2012 · | Quote
Should that say “So clearly the Math calls are faster than the Vector sorting.” instead of “So clearly the Math calls are slower than the Vector sorting.”
#3 by jackson on March 12th, 2012 · | Quote
Thanks for spotting that. I’ve updated the article.
#4 by Henke37 on March 12th, 2012 · | Quote
Uhm, the tables point at a “compute” row, this is a little confusing given that the method is called buildVector.
#5 by jackson on March 12th, 2012 · | Quote
That was the time to compute the values of the
Vector, but I can see how “buildVector” would be a clearer name so I’ve updated the article. Thanks for the tip.
#6 by Bob on March 12th, 2012 · | Quote
Isn’t Adobe coming out with some super new Profiler soon? Goggles perhaps?
#7 by jackson on March 12th, 2012 · | Quote
Yes, they gave a talk about it at MAX.
#8 by Martin on March 12th, 2012 · | Quote
Thanks for that.
One thing, I don´t get:
I just dived into skyboys sorting-functions, to get a deeper understanding of sorting arrays and vecs.
You use it like this :
But the function is awaititing other params:
Hm.. I don´t get it. You can sort on fields in skyboys fucntion, but you cannot pass your own sorting function.
#9 by jackson on March 12th, 2012 · | Quote
You’re right! It looks like it’s getting transformed to a
uintof 0, but the
Vectoris still sorted. Since the sort was trivial in the first place, the net result is the same: a trivially-sorted
Vectorin much less time. The actual guts of the optimization isn’t so much the point as using a profiler to find and optimize a chunk of your program. Still, it’s misleading in the article so thank you for pointing it out. :)
#10 by Martin on March 13th, 2012 · | Quote
Ah ok. Here we got it:
if (!(rest[0] is Number)) rest[0] = 0;
A bit offtopic, but it was making me crazy.
Thanks.
Martin
#11 by skyboy on March 14th, 2012 · | Quote
I intend to add sort functions back — previous implementation was scrapped due to poor (imo) implementation — but I’ll be investigating methods more thoroughly in an attempt to best Array since Array’s native implementation has no overhead for calling Function objects vs. instance methods (gah! cheats.).
#12 by jackson on March 14th, 2012 · | Quote
Cool, that will certainly come in handy.
#13 by skyboy on March 19th, 2012 · | Quote
I have added them back in now, and so passing in the sort function makes a substantial difference (~5x) vs. passing in Array.NUMERIC; though in the version used by this article, String sorting was invoked.
#14 by jpauclair on March 12th, 2012 · | Quote
Hey again,
Looks there are some unexpected behaviour…
The buildVector function create a list that is mostly made of negative infinity and positive infinity.
for some reason, the vec.sort() REAAALy don’t like that.
on my comp, with this “buildVector”, vec.sort() is 10x slower than fastSort.
BUT!
if we replace it with a simple Math.random(), fastSort takes twice the time it was before, and vec.sort become 4 time faster than fastSort!!
The other thing is that fastSort is using void pointer everywhere.
So when using native type like int, uint and Number, there are HUGE allocation of memory (void to Number conversion)
So If we take a normal vector with valid values…
The result for me on a 50 000 length vector is more something like this:
Vector.sort(Compare) : 100ms + 400Ko allocation
fastSort : 400ms + 5Mo / loop allocation
And Now the nice thing!
I updated the fastSort code to use native int, uint and Number.
Result?
NEW fastSort: 20ms + ZERO allocation
Here is the new Code:
;)
#15 by jackson on March 12th, 2012 · | Quote
That’s a really nice optimization to
fastSortand if I did this article over again I would definitely use your version. That said, the actual magnitude of the optimization is a bit peripheral to this article. The main point I was trying to make was the importance of using a profiler to find and track down performance issues in order to spend your optimization time wisely. Of course it’s nice that the optimization can be super effective, but the idea is that even a tiny (2%) optimization to the
Vectorsorting would be more effective than a super (100%) optimization to
buildVector.
#16 by skyboy on March 14th, 2012 · | Quote
I’ve just pushed the related update I had in progress (though another feature; specifying the start index, isn’t correctly implemented everywhere. tomorrow). I’ve consistently seen my method outperform Array::sortOn (by as much as 3x) across various machines with random data*.
For the sortOn tests:
* With reverse-sorted data, Array::sortOn outperforms fastSort by 50%; for sorted data, fastSort outperforms Array::sortOn by skipping the sorting phase if the presort (for NaN) finds they are all in order (excluding NaN, which is shuffled off to the end of the Array and not counted in the sorted-or-not).
#17 by skyboy on March 15th, 2012 · | Quote
In my testing, I found that accessing an Array from a variable typed as * is 5-20% faster than a variable typed as Array; and a variable typed Object is 10-30% faster than a variable typed Array.
I can’t explain this behavior.
#18 by jackson on March 15th, 2012 · | Quote
Me neither, especially since I’m seeing the opposite on my Core i5 Mac:
Array: 302
*: 354
Object: 369
What environment did you test in?
#19 by skyboy on March 15th, 2012 · | Quote
32 bit x86 XP SP3; FP 11.1.102.55 release standalone on an intel celeron (northwood 0.13 micrometer; 8 KB L1 cache, 128 KB L2 cache)
#20 by AlexG on March 19th, 2012 · | Quote
How do you think which profiler is the best? And specifically, which is better, the Flash Builder profiler or theMiner ?
#21 by jackson on March 19th, 2012 · | Quote
Calling one profiler “the best” is really tough as each has its pros and cons. In the end they all use the
flash.samplerpackage so they’re operating on the same set of data. I used TheMiner in the article because it’s really simple to add to a project and there is a free version for non-commercial use as well as its predecessor (FlashPreloadProfiler) for free commercial use (I think), so it’s directly usable by all my readers. This is in contrast to Flash Builder and FDT, which have fine profilers but charge a fee to use their products after the trial version expires.
I think it’s a good idea to try out all of the profilers (e.g. Flash Builder, FDT, FlashDevelop, TheMiner) and decide which you like best. But keep in mind that Adobe’s next-gen profiler is on the way… | https://jacksondunstan.com/articles/1763 | CC-MAIN-2020-34 | en | refinedweb |
Django 3.1 will be released in early August 2020 and comes with a number of major new features and many minor improvements including asynchronous views and middleware support, asynchronous tests, JSONField for all supported database backends (not just PostgreSQL), an updated admin page,
SECURE_REFERRER_POLICY, and much more. The official release notes are the canonical reference, but this post will cover some of the highlights as well as tips for upgrading.
How to Upgrade
You really should strive to be on the latest version of both Django and Python. This will result in faster, more supported, and feature-laden code. To upgrade, create a new virtual environment, install Django 3.1, and run your test suite (you have one, yes?) by adding the
-Wa flag to show full deprecation warnings.
(env) $ python -Wa manage.py test
If you don't have tests in your project yet, you can still use
python -Wa manage.py runserver to see some warnings. Refer to the official upgrade list for more advice.
Django 3.1 requires Python 3.6, 3.7, or 3.8. You can read more about supported Python versions on the Django prerequisites page. If you're curious about Django's version and release policy, the download page shows the supported versions timeline and future LTS (long-term support) releases.
For further context, the Django Chat podcast co-hosted by Django Fellow Carlton Gibson has episodes on Django Versions and Django Security Releases.
Asynchronous Views, Middleware, Tests
With version 3.0, Django began its async journey in earnest with ASGI support. In 3.1, Django now supports a fully asychronous request path with views, middleware, and tests/test client.
A basic example, provided in the docs, is to make a request that waits for half a second and then returns a response.
# views.py async def my_view(request): await asyncio.sleep(0.5) return HttpResponse('Hello, async world!')
This works whether you are running WSGI or ASGI mode, which was first added in 3.0.
A more relevant example would be to asynchronously fetch a URL in Django using HTTPX, the next-generation version of the popular requests library.
Cross-db JSONField
JSONField support for models and forms has now been extended to all Django-supported backends (MariaDB, MySQL, Oracle, and SQLite), not just PostgreSQL. This provides full support for JSONField queries.
# models.py from django.db import models class SoccerInfo(models.Model): data = models.JSONField() SoccerInfo.objects.create(data={ 'team': 'Liverpool FC', 'coaches': ['Klopp', 'Krawietz'], 'players': {'forwards': ['Salah', 'Firmino', 'Mané']}, }) SoccerInfo.objects.filter( data__team='Liverpool', data__coaches__contains='Klopp', data__players__has_key='forwards', ).delete()
Admin Layout
The admin now has a sidebar on the lefthand side for easier naviation on large screens. The historical breadcrumbs pattern is still available. If you'd like to disable the new sidebar, set AdminSite.enable_nav_sidebar to
False.
Here is the older admin view:
And here is the new admin view with sidebar:
pathlib
Django has switched from using os.path to the more modern and concise pathlib. If you create a new project using the
startproject command, the automatically generated
settings.py file now defaults to
pathlib.
Here is the Django 3.0 version:
# settings.py import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } }
And here is the newer Django 3.1 version:
# settings.py from pathlib import Path BASE_DIR = Path(__file__).resolve(strict=True).parent.parent DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': BASE_DIR / 'db.sqlite3', } }
SECURE_REFERRER_POLICY
The SECURE_REFERRER_POLICY now defaults to 'same-origin', which is more secure. This is one of many steps the Django team makes around security. You can--and should--run the Django deployment checklist to ensure your project is secure before pushing to production:
python manage.py check --deploy.
Conclusion
There are many more new features, extensively documented in the official release notes, which are well-worth reading in full as well as the accompanying Django 3.1 documentation.
Every major new Django release is a team effort and 3.1 is no exception. The Django Fellows, Carlton Gibson and Mariusz Feliask, are resonsible for an enormous amount of work to ensure the release process occurs smoothly and on time. Particular thanks are due to Andrew Godwin for his work on async features and Sage Abdullah for adding cross-db JSONFields.
If you'd like to support the continued development of Django, please do so on the official fundraising page or by becoming a GitHub Sponsor.
Posted on by:
William S. Vincent
I teach Django at LearnDjango.com. Django Software Foundation Board Member. Author of three books, co-host of Django Chat podcast, and co-author of Django News newsletter.
Discussion
Great content.
Thanks for sharing.
Thanks William | https://practicaldev-herokuapp-com.global.ssl.fastly.net/learndjango/what-s-new-in-django-3-1-320f | CC-MAIN-2020-34 | en | refinedweb |
Analytics for Go
Our Go library lets you record analytics data from your Go code. The requests hit our servers, and then we route your data to any analytics service you enable on your destinations page.
This library is open-source, so you can check it out on Github.
All of Segment’s server-side libraries are built for high-performance, so you can use them in your web server controller code. This library uses a tunable buffer to batch messages, optimized for throughput and reduced network activity.
Getting Started
Install the Package
Install
analytics-go using
go get:
go get gopkg.in/segmentio/analytics-go.v3
Then import it and initialize an instance with your source’s Write Key. Of course, you’ll want to replace
YOUR_WRITE_KEY with your actual Write Key which you can find in Segment under your source settings.
package main import "gopkg.in/segmentio/analytics-go.v3" func main() { client := analytics.New("YOUR_WRITE_KEY") defer client.Close() // Use the client. }
That will create a
client that you can use to send data to Segment for your source.
The default initialization settings are production-ready and queue 20 messages before sending a batch request, and a 5 second interval.:
client.Enqueue(analytics.Identify{ UserId: "019mr8mf4r", Traits: analytics.NewTraits(). SetName("Michael Bolton"). SetEmail("mbolton@example.com"). Set("plan", "Enterprise"). Set("friends", 42), })
This call is identifying Michael by his unique User ID (the one you know him by in your database) and label him with
name,
plan and:
client.Enqueue(analytics.Track{ UserId: "f4ca124298", Event: "Signed Up", Properties: analytics.NewProperties(). Set("plan", "Enterprise"), })
This example
track call tells us that your user just triggered the Signed Up event choosing the “Enterprise” plan.
track event properties can be anything you want to record. In this case, plan type. set up in combination with the Go library, page calls are already tracked for you by default. However, if you want to record your own page views manually and aren’t using our client-side library, read on!
Example
page call:
client.Enqueue(analytics.Page{ UserId: "f4ca124298", Name: "Go Library", Properties: analytics.NewProperties(). SetURL(""), })
The
page call has the following fields:
Find details on the
page:
client.Enqueue(analytics.Group{ UserId: "019mr8mf4r", GroupId: "56", Traits: map[string]interface{}{ "name": "Initech", "description": "Accounting Software", }, }):
client.Enqueue(analytics.Alias{ PreviousId: "anonymousUser", UserId: "019mr8mf4r", })
The
alias call has the following fields:
Here’s a full example of how we might use the
alias call:
// the anonymous user does actions ... client.Enqueue(analytics.Track{ Event: "Anonymous Event", UserId: anonymousUser, }) // the anonymous user signs up and is aliased client.Enqueue(analytics.Alias{ PreviousId: anonymousUser, UserId: "019mr8mf4r", }) // the identified user is identified client.Enqueue(analytics.Identify{ UserId: "019mr8mf4r", Traits: map[string]interface{}{ "name": "Michael Bolton", "email": "mbolton@example.com", "plan": "Enterprise", "friends": 42, }, }) // the identified user does actions ... client.Enqueue(analytics.Track{ Event: "Item Viewed", UserId: "019mr8mf4r", Properties: map[string]interface{}{ "item": "lamp", }, })
For more details about
alias, including the
alias call payload, check out our Spec.
Development Settings
You can use the
BatchSize field of your configuration to 1 during development to make the library flush every time a message is submitted, so that you can be sure your calls are working properly.
func main() { client, _ := analytics.NewWithConfig("YOUR_WRITE_KEY", analytics.Config{ BatchSize: 1, }) }
Logging
The
Verbose field of your configuration controls the level of logging, while the
Logger field provides a hook to capture the log output:
func main() { client, _ := analytics.NewWithConfig("YOUR_WRITE_KEY", analytics.Config{ Verbose: true, Logger: analytics.StdLogger(log.New(os.Stderr, "segment ", log.LstdFlags)), }) }
Selecting Destinations
The
alias,
group,
identify,
page and
track calls can all be passed an object of
context.integrations that lets you turn certain integrations on or off. By default all destinations are enabled.
Here’s an example
track call with the
context.integrations object shown.
client.Enqueue(analytics.Track{ Event: "Membership Upgraded", UserId: "019mr8mf4r", Integrations: map[string]interface{}{ "All": false, "Mixpanel": true, }, })
In this case, we’re specifying that we want this
Track to only go to Vero.
All: false says that no destination should be enabled unless otherwise specified.
Vero: true turns on Vero,, Kissmetrics, etc..
Context
You can send Context fields in two ways with the Go library.
Firstly, you can set a global context field that will be set on all messages from the client.
client, _ := analytics.NewWithConfig("h97jamjwbh", analytics.Config{ DefaultContext: &analytics.Context{ App: analytics.AppInfo{ Name: "myapp", Version: "myappversion", }, }, })
Secondly, you can set a context field on specific events.
client.Enqueue(analytics.Identify{ UserId: "019mr8mf4r", Traits: analytics.NewTraits(). Set("friends", 42), Context: &analytics.Context{ Extra: map[string]interface{}{ "active": true, }, }, })
Note that any custom fields must be set under the
Extra field. They will automatically be inlined into the serialized
context structure. For instance, the identify call above would be serialized to:
{ "type": "identify", "userId": "019mr8mf4r", "traits": { "friends": 42, }, "context": { "active": true, "library": { "name": "analytics-go", "version": "3.0.0" } } }
Batching
Our libraries are built to support high performance environments. That means it is safe to use analytics-go on a web server that’s serving hundreds of requests per second.
Every method you call does not result in an HTTP request, but is queued in memory instead. Messages are flushed in batch in the background, which allows for much faster operation. If batch messages are not arriving in your debugger and no error is being thrown you may want to slow the speed of your scipt down. This is because we run a message batching loop in a go-routine so if the script runs too fast it won’t execute on the network calls before it exits the loop.
By default, our library will flush:
- every 20 messages (control with
FlushAt)
- if 5 seconds has passed since the last flush (control with
FlushAfter)
There is a maximum of
500KB per batch request and
32KB per call.
Sometimes you might not want batching (eg. when debugging, or in short-lived programs). You can turn off batching by setting the
FlushAt argument to
1, and your requests will always be sent right away.
Options
If you hate defaults you can configure analytics-go has a lot of configuration options. You can read more in the Godocs.
Version 2 (Deprecated)
If you’re looking for documentation for the v2 version of the library, click here.
Migrating from v2
v3 is a rewrite of our v2 version of the Go Library. We recommend using v3 as it supports many new features, has significant design improvements and is better tested.
v3 is currently in the
v3.0 branch to minimize breaking changes for customers not using a package manager. You can refer to the documentation for your package manager to see how to use the
v3.0 branch.
e.g. with govendor, you would run the command:
govendor fetch github.com/segmentio/analytics-go@v3.0
Alternatively, you can also use
gopkg.in. First run
go get gopkg.in/segmentio/analytics-go.v3 and replace your imports with
import "gopkg.in/segmentio/analytics-go.v3".
To help with migrating your code, we recommend checking out a simple example that we’ve written in v2 and v3 so you can easily see the differences.
The first difference you’ll notice is that
Client is now an interface. It has a single method -
Enqueue that can accept messages of all types.
track := analytics.Track{ Event: "Download", UserId: "123456", Properties: map[string]interface{}{ "application": "Segment Desktop", "version": "1.1.0", "platform": "osx", }, } // in v2, you would call the `Track` method with a `Track` struct. client.Track(&track) // in v3, you would call the `Enqueue` method with a `Track` struct. // Note that a pointer is not used here. client.Enqueue(track)
Secondly, you’ll notice that there are new types such as
analytics.Properties and
analytics.Traits. These can be used to replace
map[string]interface{}. They provide type safety and reduce the chance of accidentally sending fields that are named incorrectly.
For instance, the following two examples are functionally equivalent in v3.
client.Enqueue(analytics.Track{ UserId: "f4ca124298", Event: "Signed Up", Properties: analytics.NewProperties(). SetCategory("Enterprise"). Set("application", "Segment Desktop"), })
client.Enqueue(analytics.Track{ UserId: "f4ca124298", Event: "Signed Up", Properties: map[string]interface{}{ "category": "Segment Desktop", "application": "Segment Desktop", }, })
Lastly, you’ll notice that configuration is provided during initialization and cannot be changed after initialization. The various configuration options are documented in the GoDocs.
These are examples of applying same configuration options in v2 and v3.
// Example in v2: client := analytics.New("h97jamjwbh") client.Interval = 30 * time.Second client.Verbose = true client.Size = 100 // Example in v3: client, _ := analytics.NewWithConfig("h97jamjwbh", analytics.Config{ Interval: 30 * time.Second, BatchSize: 100, Verbose: true, })
What’s new in v3
v3 is a rewrite of our v2 version of the Go Library with many new features!
- New type safe API to set properties, traits and context fields. This is less error prone than using the
map[string]interface{}type (though you can still do so).
client.Enqueue(analytics.Track{ UserId: "f4ca124298", Event: "Signed Up", Properties: analytics.NewProperties(). SetCategory("Enterprise"), SetCoupon("synapse"), SetDiscount(10), })
Dynamically split batches into appropriately sized chunks to meet our API size limits. Previously you would have to calculate the batch size depending on this size of your data to figure out the appropriate size.
Improved logging abstraction. Previously we relied solely on the standard library
log.Loggertype which cannot distinguish between error and non-error logs. v3 has it’s own
Loggerinterface that can be used to capture errors for your own reporting purposes. An adapter for the standard library logger is also included.
Ability to configure the retry policy based on the number of attempts.
client, _ := analytics.NewWithConfig("h97jamjwbh", analytics.Config{ RetryAfter: func(attempt int) time.Duration { return time.Duration(attempt * 10) }, })
- Configurable default context on all messages.
client, _ := analytics.NewWithConfig("h97jamjwbh", analytics.Config{ DefaultContext: &analytics.Context{ App: analytics.AppInfo{ Name: "myapp", Version: "myappversion", }, }, })
Troubleshooting
If you’re having trouble we have a few tips that help common problems.
No events in my debugger
Double check that you’ve followed all the steps in the Quickstart.
Make sure that you’re calling one of our API methods once the library is successfully installed—
identify,
track, etc.: 12 Jun 2020
Need support?
Questions? Problems? Need more info? Contact us, and we can help! | https://segment.com/docs/connections/sources/catalog/libraries/server/go/?utm_source=newsletter&utm_medium=email&utm_campaign=gonl | CC-MAIN-2020-34 | en | refinedweb |
Custom Thread pool implementation in Java
Carvia Tech | July 25, 2020 | 3 min read | 3,132 views
Thread pool executor requires a Queue for holding tasks and a collection of Worker Threads that will pick up tasks from the work queue start running them. Let us try to write our own simple Thread Pool Executor implementation. It is a typical Producer Consumer Problem statement.
Below Java Program provides basic implementation for a working proof of concept for a threadpool executor.
import java.util.concurrent.BlockingQueue; import java.util.concurrent.LinkedBlockingQueue; public class CustomThreadPoolExecutor { private final BlockingQueue<Runnable> workerQueue; private final Thread[] workerThreads; public CustomThreadPoolExecutor(int numThreads) { workerQueue = new LinkedBlockingQueue<>(); workerThreads = new Thread[numThreads]; int i = 0; for (Thread t : workerThreads) { t = new Worker("Custom Pool Thread " + ++i); t.start(); } } public void addTask(Runnable r) { try { workerQueue.put(r); } catch (InterruptedException e) { e.printStackTrace(); } } class Worker extends Thread { public Worker(String name) { super(name); } public void run() { while (true) { try { workerQueue.take().run(); } catch (InterruptedException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } } } } public static void main(String[] args) { CustomThreadPoolExecutor threadPoolExecutor = new CustomThreadPoolExecutor(10); threadPoolExecutor.addTask(() -> System.out.println("First print task")); threadPoolExecutor.addTask(() -> System.out.println("Second print task")); } }
The above program will create pool of 10 worker threads and initialize them all.
Explanation
We are using two classes from standard java library in this implementation:
- LinkedBlockingQueue
An optionally-bounded blocking queue based on linked nodes. This queue orders elements FIFO (first-in-first-out). It is thread-safe in nature and acts as a temporary storage of runnable tasks that are due for execution.
- Thread
All the threads get initilized and started at the creation of ThreadPoolExecutor. All threads listen on the shared workqueue for incoming tasks in never ending loop.
What we missed
An production grade implementation would have lot more sophistication than the basic implementation provided here, for example:
Capability to tune number of items that can fit into queue.
Capability to configure idle and max worker threads, incase of no work items, number of active threads will reduce to idle threads.
Lazy initialization
Proper exception handling is not implemented in this article
Production usage advise
Always use Executor Framework provided in
java.util.concurrent package for executing tasks in your production code. It is simple to use, provides interface-based task execution facility and time tested.
ExecutorService exec = Executors.newSingleThreadExecutor(); exec.execute(runnable); exec.shutdown();
Why do we need ThreadPool executor?
There are many reasons one would like to use threadpool executor:
Creating and destroying threads is a IO extensive operation, which has impact on performance and memory consumption of an application. So its ideal to create threads once and reuse them later on.
We do not want to run out of threads when heavy load arrives on an application. Threadpool holds tasks in a queue, so if lot of tasks arrives in a very short amount of time, queue will hold the tasks until a worker thread becomes available for the processing. This approach prevents resource exhaustion in production environment.
If due to some reasons, thread gets killed, ThreadPoolExecutor will recreate the thread and put it back to the pool.
Top articles in this category:
- Java 8 Parallel Stream with ThreadPool
- How will you implement your custom threadsafe Semaphore in Java
- Difference between Implementing Runnable and Extending Thread
- What is ThreadLocal in Java, where will you use this class
- What is Deadlock in Java? How to troubleshoot and how to avoid deadlock
- What is difference between sleep() and wait() method in Java?
- What will happen if we don't synchronize getters/accessors of a shared mutable object in multi-threaded applications | https://www.javacodemonk.com/implement-custom-thread-pool-in-java-without-executor-framework-ca10e61d | CC-MAIN-2020-34 | en | refinedweb |
Statistics › Random ›
Mother Of All
Random number generator class using George Marsaglia's mother of all algorithm.Controller: CodeCogs
Interface
C++
Class MotherOfAllThis class provides a long period random number generator using George Marsaglia's mother of all algorithm. It produces uniformly distributed pseudo-random 32-bit values with period of about
Algorithm:The arrays mother1 and mother2 store carry values in their first element and random 16-bit numbers in elements 1 to 8. These random numbers are moved to elements 2 to 9 and a new carry and number are generated and placed in elements 0 and 1. A 32-bit random number is obtained by combining the output of the two generators and returned in rnd. The arrays mother1 and mother2 are filled with random 16-bit values on the first call to Next(). To give you an idea of the running time for each of the functions, here are the results for generating 100,000,000 random numbers on a 750MHz microprocessor :
- genReal 16 seconds
- genInt 16 seconds
References:
- George Marsaglia's original post of the mother of all generator algorithm,
- MOTHER/motherofall.h> using namespace std; int main() { Stats::Random::MotherOfAll A(time(0) / MOTHERDIV); Stats::Random::MotherOfAll B(0.345);. | https://codecogs.com/library/statistics/random/motherofall.php | CC-MAIN-2021-31 | en | refinedweb |
What’s this again ?
My god, what is Feature Scaling? a new barbaric Anglicism (sorry as i’m french 😉 )? a new marketing side effect? Or is there something really relevant behind it? To be honest, Feature Scaling is a necessary or even essential step in upgrading the characteristics of our Machine Learning model. Why ? quite simply because behind each algorithm are hidden mathematical formulas. And these mathematical formulas do not appreciate the variations in scale of values between each characteristic. And this is especially true with regard to gradient descent!
If you do nothing you will observe slowness in learning and reduced performance.
Let’s take an example. Imagine that you are working on modeling around real estate data. You will have characteristics of the type: price, surface, number of rooms, etc. Of course, the value scales of these data are totally different depending on the characteristics. However, you will have to process them using the same algorithm. This is where the bottom hurts! your algorithm will indeed have to mix prices of [0… 100,000] €, surfaces of [0… 300] m2, numbers of rooms ranging from [1 .. 10] rooms. Scaling therefore consists of bringing this data to the same level.
Fortunately Scikit-Learn will once again chew up our work, but before using this or that technique we must understand how each one works.
Preparation of tests
First of all, we will create random data sets as well as some graph functions that will help us better understand the effects of the different techniques used (above).
Here is the Python code:
import pandas as pd import numpy as np from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import RobustScaler from sklearn.preprocessing import MaxAbsScaler import matplotlib import matplotlib.pyplot as plt import seaborn as sns def plotDistribGraph(pdf): fig, a = plt.subplots(ncols=1, figsize=(16, 5)) a.set_title("Distributions") for col in pdf.columns: sns.kdeplot(pdf[col], ax=a) plt.show() def plotGraph(pdf, pscaled_df): fig, (a, b) = plt.subplots(ncols=2, figsize=(16, 5)) a.set_title("Avant mise à l'echelle") for col in pdf.columns: sns.kdeplot(pdf[col], ax=a) b.set_title("Apres mise à l'echelle") for col in pdf.columns: sns.kdeplot(pscaled_df[col], ax=b) plt.show() def plotGraphAll(pdf, pscaled1, pscaled2, pscaled3): fig, (a, b, c, d) = plt.subplots(ncols=4, figsize=(16, 5)) a.set_title("Avant mise à l'echelle") for col in pdf.columns: sns.kdeplot(pdf[col], ax=a) b.set_title("RobustScaler") for col in pscaled1.columns: sns.kdeplot(pscaled1[col], ax=b) c.set_title("MinMaxScaler") for col in pscaled2.columns: sns.kdeplot(pscaled2[col], ax=c) d.set_title("StandardScaler") for col in pscaled3.columns: sns.kdeplot(pscaled3[col], ax=d) plt.show() np.random.seed(1) NBROWS = 5000 df = pd.DataFrame({ 'A': np.random.normal(0, 2, NBROWS), 'B': np.random.normal(5, 3, NBROWS), 'C': np.random.normal(-5, 5, NBROWS), 'D': np.random.chisquare(8, NBROWS), 'E': np.random.beta(8, 2, NBROWS) * 40, 'F': np.random.normal(5, 3, NBROWS) }
In this code, apart from the trace functions we create 6 datasets in a single DataFrame ( Pandas ).
Let’s take a look at what our datasets look like:
plotDistribGraph(df)
These datasets are based on Gaussian (A, B, C and F), X2 (D) and beta (E) distributions ( thanks to the Numpy np.random functions ).
This code is reusable on purpose so that you can vary the datasets and test the techniques presented.
The techniques
Basically Scikit-Learn ( sklearn.preprocessing ) provides several scaling techniques, we’ll go over 4 of them:
- MaxAbsScaler
- MinMaxScaler
- StandardScaler
- RobustScaler
MaxAbsScaler ()
This scaling technique is useful when the distribution of values is sparse and you have a lot of outiers. Indeed the other techniques will tend to erase the impact of the outliers which is sometimes annoying. It is therefore interesting:
- Because robust to very small standard deviations
- It preserves null entries on a scattered data distribution
scaler = MaxAbsScaler() keepCols = ['A', 'B', 'C'] scaled_df = scaler.fit_transform(df[keepCols]) scaled_df = pd.DataFrame(scaled_df, columns=keepCols) plotGraph(df[keepCols], scaled_df)
Pour résumer : cette technique se contente de rassembler les valeurs sur une plage de [-1, 1].
To summarize: this technique just collects the values over a range of [-1, 1].
MinMaxScaler ()
This technique transforms the characteristics (xi) by adapting each one over a given range (by default [-1 .. 1]). It is possible to change this range via the parameters feature_range = (min, max). To keep it simple, here is the transformation formula for each characteristic:
Let’s see it at work:
scaler = MinMaxScaler() keepCols = ['A', 'B', 'C'] scaled_df = scaler.fit_transform(df[keepCols]) scaled_df = pd.DataFrame(scaled_df, columns=keepCols) plotGraph(df[keepCols], scaled_df)
If this technique is probably the best known, it works especially well for cases where the distribution is not Gaussian or when the [itg-glossary href = ” / “Glossary-id =” 15640 ″] Standard Deviation [/ itg-glossary] is low. However and unlike the MaxAbsScaler () technique, MinMaxScaler () is sensitive to outliers. In this case, we quickly switch to another technique: RobustScaler ().
RobustScaler ()
The RobustScaler () technique uses the same scaling principle as MinMaxScaler (). However, it uses the interquartile range instead of the min-max, which makes it more reliable with respect to outliers. Here is the formula for reworking the characteristics:
Q1 (x): 1st quantile / 25%
Q3 (x): 3rd quantile / 75%
Let’s see it at work:
scaler = RobustScaler() keepCols = ['A', 'B', 'E'] scaled_df = scaler.fit_transform(df[keepCols]) scaled_df = pd.DataFrame(scaled_df, columns=keepCols) plotGraph(df[keepCols], scaled_df)
StandardScaler ()
We will finish our little tour (not exhaustive) of scaling techniques with probably the least risky: StandardScaler ().
This technique assumes that data is normally distributed. The function will recalculate each characteristic (Cf. formula below) so that the data is centered around 0 and with a [itg-glossary href = “” glossary-id = “15640 ″] Standard deviation [/ itg-glossary] of 1.
stdev (x): “Standard Deviation” in English means [itg-glossary href = “” glossary-id = “15640 ″] Standard Deviation [/ itg -glossary]
Let’s see it at work:
scaler = StandardScaler() keepCols = ['A', 'B', 'C'] scaled_df = scaler.fit_transform(df[keepCols]) scaled_df = pd.DataFrame(scaled_df, columns=keepCols) plotGraph(df[keepCols], scaled_df)
Conclusion
Let’s simply summarize the Feature Scaling techniques that we have just encountered:
- MaxAbsScaler: to be used when the data is not in normal distribution. Takes into account outliers.
- MinMaxScaler: calibrates the data over a range of values.
- StandardScaler: recalibrates the data for normal distributions.
- RobustScaler: same as Min-Max but uses interquartile range instead of Min and Max values. | http://aishelf.org/feature-scaling/ | CC-MAIN-2021-31 | en | refinedweb |
The Deque is similar to the double-ended queue that helps to add or remove elements from each end of the data structure. In this tutorial, we will discover & understand completely what is Deque interface, declaration, working of java deque, how to create a deque in java using classes that implement it, and its methods with example programs.
This tutorial on Deque Interface in Java includes the following topics
- Java DeQue
- Deque Interface Declaration
- Working of Deque
- Classes that implement Deque
- Creating a Deque
- Java Deque Example
- Methods of Deque Interface in Java
- Deque Implementation In Java
- Deque as Stack Data Structure
- Implementation of Deque in ArrayDeque Class
Java DeQue
Deque in Java is an interface present in java.util package. The Deque interface in Java was added by Java 6 version. It extends the Queue interface and declares the behavior of a double-ended queue. In Deque we can add or remove elements from both sides of the queue. The Deque can function as standard, first-in, first-out queues or as last-in, first-out stacks. The Deque doesn’t allow you to insert the null element.
Deque Interface Declaration
public interface Deque<E> extends Queue<E>
Do Check:
Working of Deque
Well in a normal queue, we will add elements from the rear and remove them from the front. But in a deque, we should insert and remove elements from both the rear and front.
Classes that implement Deque
If you want to use the Deque functionalities then you must add these two classes that implement Deque Interface. They are as follows:
- LinkedList
- ArrayDeque
Creating a Deque
Prior to using a deque in java, we should create an instance of one of the classes that implement the java deque interface. Let’s have a look at the example of creating a deque instance by creating a LinkedList interface or by creating an ArrayDeque instance:
// LinkedList implementation of Deque Deque deque = new LinkedList(); // Array implementation of Deque Deque deque = new ArrayDeque();
Java Deque Example
import java.util.*; class DequeExample{ public static void main(String args[]){ Deque dq = new LinkedList(); //adding the element in the deque dq.add("Ajay"); dq.add("Vijay"); dq.add("Rahul"); dq.addFirst("Amit"); dq.addLast("Sumit"); System.out.println("Deque elements are: " +dq); //remove last element System.out.println("remove last: " +dq.removeLast()); /*return the element at the head of a deque but not removed. It returns null if the deque is empty.*/ System.out.println("peek(): " +dq.peek()); //returns and remove the head element of the deque. System.out.println("poll(): " +dq.poll()); /* returns and remove the first element of the deque. It returns null if the deque is empty.*/ System.out.println("pollFirst(): " +dq.pollFirst()); //dispalying deque element System.out.println("After all operation deque elements are: " +dq); } }
Output:
Methods of Deque Interface in Java
1. add(E e): This method is used to insert a specified element to the tail.
2. addFirst(E e): This method is used to insert a specified element to the head.
3. addLast(E e): This method is used to insert the specified element to the tail.
4. E getFirst(): This method returns the first element in the deque.
5. E getLast(): This method returns the last element in the deque.
6. offer(E e): This method adds an element to the tail of the deque and returns a boolean value.
7. offerFirst(E e): This method adds an element to the head of the queue and returns a boolean value if the insertion was successful.
8. offerLast(E e): This method adds an element to the tail of the queue and returns a boolean value if the insertion was successful.
9. removeFirst(): It removes the element at the head of the deque.
10. removeLast(): It removes the element at the tail of the deque.
11. push(E e): This method adds the specified element at the head of the queue
12. pop(): It removes the element from the head and returns it.
13. poll(): returns and remove the head element of the deque.
14. pollFirst(): returns and remove the first element of the deque. It returns null if the deque is empty.
15. pollLast(): returns and removes the last element of the deque. It returns null if the deque is empty.
16. peek(): return the element at the head of a deque but not removed. It returns null if the deque is empty.
17. peekFirst(): return the first element at the head of a deque but not removed. It returns null if the deque is empty.
18. peekLast(): return the last element at the head of a deque but not removed. It returns null if the deque is empty.
Deque Implementation In Java:
Deque as Stack Data Structure
The implementation of a stack can be provided by the Stack Class of the Java Collections framework.
Yet, it recommends using Deque as a stack instead of the Stack class. It happens due to the methods of Stack are synchronized.
Following are the methods the Deque interface grants to implement stack:
- push() – Adds the specified element at the head of the queue
- pop() – Removes the element from the head and returns it.
- peek() – Return the element at the head of a deque but not removed. It returns null if the deque is empty.
Implementation of Deque in ArrayDeque Class Example
import java.util.Deque; import java.util.ArrayDeque; class Main {:
| https://btechgeeks.com/deque-in-java-with-example/ | CC-MAIN-2021-31 | en | refinedweb |
Algorithms are one of the most common themes in coding interviews. In order to gain an advantage in interviews, it is important to be very familiar with the top algorithms and their implementations.
In today’s tutorial, we will be exploring graph algorithms. We’ll begin with an introduction to graph theory and graph algorithms. Next, we will learn how to implement a graph. Finally, we will examine common graph problems you can expect to see in a coding interview.
Today, we will learn:
This curates path takes you through all that you need to know to crack your Python interviews with confidence.
Ace the Python Coding Interview
An algorithm is a mathematical process to solve a problem using a well-defined or optimal number of steps. It is simply the basic technique used to get a specific job done.
A graph is an abstract notation used to represent the connection between all pairs of objects. Graphs are widely-used mathematical structures visualized by two basic components: nodes and edges.
Graph algorithms are used to solve the problems of representing graphs as networks like airline flights, how the Internet is connected, or social network connectivity on Facebook. They are also popular in NLP and machine learning to form networks.
Some of the top graph algorithms include:
While graphs form an integral part of discrete mathematics, they also have practical uses in computer science and programming, including the following:
A graph, denoted by G, is represented by a set of vertices (V) or nodes linked at edges (E). The number of edges you have depends on the vertices. The edges may be directed or undirected.
In a directed graph, the nodes are linked in one direction. The edges here show a one-way relationship.
In an undirected graph, the edges are bi-directional, showing a two-way relationship.
Example: A good use-case of an undirected graph is Facebook friend suggestions algorithm. The user (node) has an edge running to a friend A (another node) who is in turn connected (or has an edge running) to friend B. Friend B is then suggested to the user.
There are many other complex types of graphs that fall into different subsets. A directed graphs, for example, has strongly connected components when every vertex is reachable from every other vertex.
A vertex is a point where multiple lines meet. It is also called a node.
An edge is a mathematical term used for a line that connects two vertices. Many edges can be formed from a single vertex. However, without a vertex, an edge cannot be formed. There must be a starting and ending vertex for each edge.
A path in a graph is a sequence of vertices v1, v2, …, vk, with the property that there are edges between and . We say that the path goes from to .
The sequence 6,4,5,1,26,4,5,1,2 defines a path from node 6 to node 2.
Similarly, other paths can be created by traversing the edges of the graph. A path is simple, if its vertices are all different.
Walks are paths, but they don’t require a sequence of distinct vertices.
A graph is connected if for every pair of vertices and , there is a path from to .
A cycle is a path v1, v2, …, vk for which the following are true:
A tree is a connected graph that does not contain a cycle.
In a graph, if an edge is drawn from the vertex to itself, it is called a loop. In the illustration, V is a vertex whose edge, (V, V), is forming a loop.
Before we move on to solving problems using graph algorithms, it is important to first know how to represent graphs in code. Graphs can be represented as an adjacency matrix or adjacency list.
An adjacency matrix is a square matrix labeled by graph vertices and is used to represent a finite graph. The entries of the matrix indicate whether the vertex pair is adjacent or not in the graph.
In the adjacency matrix representation, you will need to iterate through all the nodes to identify a node’s neighbors.
a b c d e a 1 1 - - - b - - 1 - - c - - - 1 - d - 1 1 - -
An adjacency list is used to represent a finite graph. The adjacency list representation allows you to iterate through the neighbors of a node easily. Each index in the list represents the vertex, and each node that is linked with that index represents its neighboring vertices.
1 a -> { a b } 2 b -> { c } 3 c -> { d } 4 d -> { b c }
For the base graph class below, we will be using the Adjacency List implementation as it performs faster for the algorithm solutions later in this article.
The requirements of our graph implementation are fairly straightforward. We would need two data members: the total number of vertices in the graph and a list to store adjacent vertices. We also need a method to add edges or a set of edges.
class AdjNode: """ A class to represent the adjacency list of the node """ def __init__(self, data): """ Constructor :param data : vertex """ self.vertex = data self.next = None class Graph: """ Graph Class ADT """ def __init__(self, vertices): """ Constructor :param vertices : Total vertices in a graph """ self.V = vertices self.graph = [None] * self.V # Function to add an edge in an undirected graph def add_edge(self, source, destination): """ add edge :param source: Source Vertex :param destination: Destination Vertex """ # Adding the node to the source node node = AdjNode(destination) node.next = self.graph[source] self.graph[source] = node # Adding the source node to the destination if undirected graph # Intentionally commented the lines #node = AdjNode(source) #node.next = self.graph[destination] #self.graph[destination] = node def print_graph(self): """ A function to print a graph """ for i in range(self.V): print("Adjacency list of vertex {}\n head".format(i), end="") temp = self.graph[i] while temp: print(" -> {}".format(temp.vertex), end="") temp = temp.next print(" \n") # Main program if __name__ == "__main__": V = 5 # Total vertices g = Graph(V) g.add_edge(0, 1) g.add_edge(0, 4) g.add_edge(1, 2) g.add_edge(1, 3) g.add_edge(1, 4) g.add_edge(2, 3) g.add_edge(3, 4) g.print_graph()
In the above example, we see the Python graph class. We’ve laid down the foundation of our graph class. The variable V contains an integer specifying the total number of vertices.
Prepare for Python interviews without scrubbing through videos or documentation. Educative’s text-based courses are easy to skim and feature live coding environments, making learning quick and efficient.
Ace the Python Coding Interview
Given a graph represented as an adjacency list and a starting vertex, your code should output a string containing the vertices of the graph listed in the correct order of traversal. As you traverse the graph from the starting vertex, you are to print each node’s right child first, then the left.
To solve this problem, the previously implemented Graph class is already prepended.
Input: A graph represented as an adjacency list and a starting vertex
Output: A string containing the vertices of the graph listed in the correct order of traversal
Sample Output:
result = "02143" or result = "01234"
Take a look and design a step-by-step algorithm before jumping on to the implementation. Try to solve it on your own first. If you get stuck, you can always refer to the solution provided in the solution section.
def bfs(graph, source): """ Function to print a BFS of graph :param graph: The graph :param source: starting vertex :return: """ # Write your code here! pass
def bfs(my_graph, source): """ Function to print a BFS of graph :param graph: The graph :param source: starting vertex :return: """ # Mark all the vertices as not visited visited = [False] * (len(my_graph.graph)) # Create a queue for BFS queue = [] # Result string result = "" # Mark the source node as # visited and enqueue it queue.append(source) visited[source] = True while queue: # Dequeue a vertex from # queue and print it source = queue.pop(0) result += str(source) # Get all adjacent vertices of the # dequeued vertex source. If a adjacent # has not been visited, then mark it # visited and enqueue it while my_graph.graph[source] is not None: data = my_graph.graph[source].vertex if not visited[data]: queue.append(data) visited[data] = True my_graph.graph[source] = my_graph.graph[source].next return result # Main to test the above program if __name__ == "__main__": V = 5 g = Graph(V) g.add_edge(0, 1) g.add_edge(0, 2) g.add_edge(1, 3) g.add_edge(1, 4) print(bfs(g, 0))
We begin from a selected node and traverse the graph by layers. All neighbor nodes are explored. Then, we traverse we the next level. We traverse the graph horizontally, aka by each layer.
A graph may contain cycles. To avoid processing the same node again, we can use a boolean array that marks visited arrays. You can use a queue to store the node and mark it as visited. The queue should follow the First In First Out (FIFO) queuing method.
In this problem, you have to implement the depth-first traversal. To solve this problem, the previously implemented graph class is already provided.
Input: A graph represented as an adjacency list and a starting vertex
Output: A string containing the vertices of the graph listed in the correct order of traversal
Sample Output:
result = "01342" or result = "02143"
Take a look and design a step-by-step algorithm before jumping on to the implementation. Try to solve it on your own first. If you get stuck, you can always refer to the solution provided in the solution section.
def dfs(my_graph, source): """ Function to print a DFS of graph :param graph: The graph :param source: starting vertex :return: returns the traversal in a string """ # Mark all the vertices as not visited visited = [False] * (len(my_graph.graph)) # Create a stack for DFS stack = [] # Result string result = "" # Push the source stack.append(source) while stack: # Pop a vertex from stack source = stack.pop() if not visited[source]: result += str(source) visited[source] = True # Get all adjacent vertices of the popped vertex source. # If a adjacent has not been visited, then push it while my_graph.graph[source] is not None: data = my_graph.graph[source].vertex if not visited[data]: stack.append(data) my_graph.graph[source] = my_graph.graph[source].next return result # Main to test the above program if __name__ == "__main__": V = 5 g = Graph(V) g.add_edge(0, 1) g.add_edge(0, 2) g.add_edge(1, 3) g.add_edge(1, 4) print(dfs(g, 0))
The depth-first graph algorithm uses the idea of backtracking. Here, ‘backtrack’ means to move forward as long as there are no more nodes in the current path, then to move backward on the same path to find nodes to traverse.
In this problem, you must implement the
remove_edge function which takes a source and a destination as arguments. If an edge exists between the two, it should be deleted.
Input: A graph, a source (integer), and a destination (integer)
Output: A BFS traversal of the graph with the edge between the source and the destination removed
First, take a close look at this problem and design a step-by-step algorithm before jumping to the implementation. Try it yourself before checking the solution!
def remove_edge(graph, source, destination): """ A function to remove an edge :param graph: A graph :param source: Source Vertex :param destination: Destination Vertex """ # Write your code here! pass
This challenge is very similar to the deletion in the linked list, if you are familiar with it.
Our vertices are stored in a linked list. First, we access the
source linked list. If the head node of the source linked list holds the key to be deleted, we shift the head one step forward and return the graph.
If the key to be deleted is in the middle of the linked list, we keep track of the previous node and connect the previous node with the next node when the destination encounters.
Below are other interview questions that you can try your hand at solving:
Congratulations on making it to the end. You should know understand graphs in Python and understand what to prepare for graph-related coding interview questions.
If you’d like to learn more about algorithms in Python, check out Educative’s learning path Ace the Python Coding Interview. In these modules, you’ll brush up on data structures, algorithms, and important syntax by practicing hundreds of real interview questions
By the end, you’ll even be able to confidently answer multithreading and concurrency questions.
Happy learning!
Join a community of 500,000 monthly readers. A free, bi-monthly email with a roundup of Educative's top articles and coding tips. | https://www.educative.io/blog/graph-algorithms-tutorial | CC-MAIN-2021-31 | en | refinedweb |
Show Table of Contents JMS Component and uses Spring's JMS support for declarative transactions, using Spring's
JmsTemplatefor sending and a
MessageListenerContainerfor consuming. All the options from the JMS component also apply for this component.
To use this component, make sure you have the
activemq.jaror
activemq-core.jaron your classpath along with any Apache Camel dependencies such as
camel-core.jar,
camel-spring.jarand
camel-jms.jar.
Transacted and caching>
Note
Notice the init and destroy methods on the pooled connection factory. This is important to ensure the connection pool is properly started and shutdown.
The
PooledConnectionFactorywill then create a connection pool with up to 8 connections in use at the same time. Each connection can be shared by many sessions. There is an option named
maxActiveyou can use to configure the maximum number of sessions per connection; the default value is
500. From ActiveMQ 5.7 onwards the option has been renamed to better reflect its purpose, being named as
maxActiveSessionPerConnection. Notice the
concurrentConsumersis set to a higher value than
maxConnectionsis. This is okay, as each consumer is using a session, and as a session can share the same connection, we are in the safe. In this example we can have 8 * 500 = 4000 active sessions at the same time.
Invoking MessageListener POJOs in a as follows:
public class MyListener implements MessageListener { public void onMessage(Message jmsMessage) { // ... } }
Then use it in your route as follows
from(""). bean(MyListener.class);
That is, you can reuse any of the Apache Camel Components and easily integrate them into your JMS
MessageListenerPO:
<camelContext xmlns=""> <route> <from uri=""/> <to uri="activemq:queue:foo"/> </route> <route> <!-- use consumer.exclusive ActiveMQ destination option, notice we have to prefix with destination. --> <from uri="activemq:foo?destination.consumer.exclusive=true&estination JMS component released with the ActiveMQ project.
<dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-camel</artifactId> <version>5.6.0</version> </dependency> | https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.2/html/apache_camel_component_reference/idu-activemq | CC-MAIN-2021-31 | en | refinedweb |
Ember FastBootEmber FastBoot
An Ember CLI addon that allows you to render and serve Ember.js apps on the server. Using FastBoot, you can serve rendered HTML to browsers and other clients without requiring them to download JavaScript assets.
Currently, the set of Ember applications supported is extremely limited. As we fix more issues, we expect that set to grow rapidly. See Known Limitations below for a full-list.
The bottom line is that you should not (yet) expect to install this add-on in your production app and have FastBoot work.
Introduction VideoIntroduction Video
InstallationInstallation
FastBoot requires Ember 2.3 or higher. It is also preferable that your app is running
ember-cli 2.12.0 and higher.
From within your Ember CLI application, run the following command:
ember install ember-cli-fastboot
RunningRunning
If your app is running
ember-cli 2.12.0-beta.1+ you can run as follows:
ember serve
- Visit your app at
You may be shocked to learn that minified code runs faster in Node than non-minified code, so you will probably want to run the production environment build for anything "serious."
ember serve --environment production
You can also specify the port (default is 4200):
ember serve --port 8088
See
ember help for more.
Disabling FastBoot with
ember serve
Optionally you can even disable the fastboot serving at runtime using the
fastboot query parameter. Example to turn off fastboot serving,
visit your app at. If you want to turn on fastboot serving again, simply visit at or.
You can even disable serving fastboot with
ember serve using an environment flag:
FASTBOOT_DISABLED=true ember serve. If you have disabled building fastboot assets using the same flag as described here, remember to also disable serving fastboot assets when using
ember serve.
FastBoot ConfigurationFastBoot Configuration
When running locally using
ember serve you can pass options into FastBoot instance via
config/fastboot.js file. The configuration file is applicable only for applications, addons are not supported.
module {let myGlobal = environment === 'production' ? processenvMY_GLOBAL : 'testing';returnsandboxGlobals:myGlobal;;}
There are several options available, see FastBoot's README for more information, but be aware that
distPath is provided internally by
ember-cli-fastboot, hence it can not be modified by this file.
FastBoot App Server ConfigurationFastBoot App Server Configuration
When using FastBoot App Server for production environment you have to manually pass options from
config/fastboot.js file.
const FastBootAppServer = ;const config = processenvNODE_ENV;let server =distPath: 'dist'...config;serverstart;
Using Node/npm DependenciesUsing Node/npm Dependencies
Whitelisting PackagesWhitelisting Packages
When your app is running in FastBoot, it may need to use Node packages to replace features that are available only in the browser.
For security reasons, your Ember app running in FastBoot can only access packages that you have explicitly whitelisted.
To allow your app to require a package, add it to the
fastbootDependencies array in your app's
package.json:
"name": "my-sweet-app""version": "0.4.2""devDependencies":// ..."dependencies":// ..."fastbootDependencies":"rsvp""path"
The
fastbootDependencies in the above example means the only node
modules your Ember app can use are
rsvp and
path.
If the package you are using is not built-in to Node, you must also
specify the package and a version in the
package.json
dependencies
hash. Built-in modules (
path,
fs, etc.) only need to be added to
fastbootDependencies.
Using DependenciesUsing Dependencies
From your Ember.js app, you can run
FastBoot.require() to require a
package. This is identical to the CommonJS
require except it checks
all requests against the whitelist first.
let path = FastBoot;let filePath = path;.
FastBoot ServiceFastBoot Service
FastBoot registers the
fastboot service. This service allows you to
check if you are running within FastBoot by checking
fastboot.isFastBoot. There is also a request object under
fastboot.request which exposes details about the current request being
handled by FastBoot
Delaying the server responseDelaying the server response
By default, FastBoot waits for the
beforeModel,
model, and
afterModel hooks to resolve before sending the response back to the
client. If you have asynchrony that runs outside of those contexts, your
response may not reflect the state that you want.
To solve this, the
fastboot service has
deferRendering method that accepts
a promise. It will chain all promises passed to it, and the FastBoot server will
wait until all of these promises resolve before sending the response to
the client. These promises must be chained before the rendering is
complete after the model hooks. For example, if a component that is
rendered into the page makes an async call for data, registering a
promise to be resolved in its
init hook would allow the component to
defer the rendering of the page.
The following example demonstrates how the
deferRendering method can be
used to ensure posts data has been loaded asynchronously by a component before
rendering the entire page. Note how the call should be wrapped in a
fastboot.isFastBoot
check since the method will throw an exception outside of that context:
;Component;
CookiesCookies
You can access cookies for the current request via
fastboot.request
in the
fastboot service.
Route;
The service's
cookies property is an object containing the request's
cookies as key/value pairs.
HeadersHeaders
You can access the headers for the current request via
fastboot.request
in the
fastboot service. The
headers object implements part of the
Fetch API's Headers
class, the
functions available are
has,
get, and
getAll.
Route;
HostHost
You can access the host of the request that the current FastBoot server
is responding to via
fastboot.request in the
fastboot service. The
host property will return the host (
example.com or
localhost:3000).
Route;
To retrieve the host of the current request, you must specify a list of
hosts that you expect in your
config/environment.js:
module {var ENV =modulePrefix: 'host'environment: environmentbaseURL: '/'locationType: 'auto'EmberENV:// ...APP:// ...fastboot:hostWhitelist: 'example.com' 'subdomain.example.com' /^localhost:\d+$/;// ...};
The
hostWhitelist can be a string or RegExp to match multiple hosts.
Care should be taken when using a RegExp, as the host function relies on
the
Host HTTP header, which can be forged. You could potentially allow
a malicious request if your RegExp is too permissive when using the
host
when making subsequent requests.
Retrieving
host will error on 2 conditions:
- you do not have a
hostWhitelistdefined
- the
Hostheader does not match an entry in your
hostWhitelist
Query ParametersQuery Parameters
You can access query parameters for the current request via
fastboot.request
in the
fastboot service.
Route;
The service's
queryParams property is an object containing the request's
query parameters as key/value pairs.
PathPath
You can access the path (
/ or
/some-path) of the request that the
current FastBoot server is responding to via
fastboot.request in the
fastboot service.
Route;
ProtocolProtocol
You can access the protocol (
http: or
https:) of the request that the
current FastBoot server is responding to via
fastboot.request in the
fastboot service.
Route;
The ShoeboxThe Shoebox
You can pass application state from the FastBoot rendered application to the browser rendered application using a feature called the "Shoebox". This allows you to leverage server API calls made by the FastBoot rendered application on the browser rendered application. Thus preventing you from duplicating work that the FastBoot application is performing. This should result in a performance benefit for your browser application, as it does not need to issue server API calls whose results are available from the Shoebox.
The contents of the Shoebox are written to the HTML as strings within
<script> tags by the server rendered application, which are then
consumed by the browser rendered application.
This looks like:
....
You can add items into the shoebox with
shoebox.put, and you can retrieve
items from the shoebox using
shoebox.retrieve. In the example below we use
an object,
shoeboxStore, that acts as our store of objects that reside in
the shoebox. We can then add/remove items from the
shoeboxStore in the
FastBoot rendered application as we see fit. Then in the browser rendered
application, it will grab the
shoeboxStore from the shoebox and retrieve
the record necessary for rendering this route.
Route;
Think out of the ShoeboxThink out of the Shoebox
Shoebox gives you great capabilities, but using it in the real app is pretty rough. Have you ever thought that such kind of logic should be done behind the scenes? In a large codebase, defining
fastboot.isFastBoot conditionals can be a daunting task. Furthermore, it generates a lot of boilerplate code, which obscures the solution. Sooner or later coupling with
shoebox will spread over all routes.
Solution: Application AdapterSolution: Application Adapter
One way to abstract the shoebox data storage mechanics is to move the logic into the Application Adapter as shown below.
export default class ApplicationAdapter extends JSONAPIAdapter.extend( // ...snip... cacheKeyFor([, model, id]) { return (model.modelName && id) ? `${model.modelName}-${id}` : 'default-store'; } async findRecord() { const key = this.cacheKeyFor(arguments); if (this.fastboot.isFastBoot) { let result = await super.findRecord(...arguments); // must deep-copy for clean serialization. result = JSON.parse(JSON.stringify(result)); this.fastboot.shoebox.put(key, result); return result; } let result = this.fastboot.shoebox.retrieve(key); if (!result) { result = await super.findRecord(...arguments); } // must deep-copy for clean serialization. return JSON.parse(JSON.stringify(result)); } }
With this strategy, any time an ember-data
findRecord request happens while in
Fastboot mode, the record will be put into the shoebox cache and returned. When
subsequent calls are made for that record in the hydrated application, it will
first check the shoebox data.
Solution: Use an Addon (ember-storefront)Solution: Use an Addon (ember-storefront)
Additionally, there is an addon called ember-data-storefront that can help to alleviate this pain, thanks to its Fastboot mixin:.
After installing the addon and applying the mixin, your routes can look like this:
app/routes/my-route.js:
;
And they still take advantage of caching in the
shoebox. No more redundant AJAX for already acquired data. Installation details are available in the addon documentation.
RehydrationRehydration
What is Rehydration?
The rehydration feature means that the Glimmer VM can take a DOM tree created using Server Side Rendering (SSR) and use it as the starting point for the append pass.
See details here:
In order to utilize rehydration in Ember.js applications we need to ensure that both server side renderers (like fastboot) properly encode the DOM they send to the browser with the serialization format (introduced in the commit above) AND that the browser instantiated Ember.js application knows to use the rehydration builder to consume that DOM.
Rehydration is 100% opt-in, if you do not specify the environment flag your application will behave as it did before!
We can opt-in to the rehydration filter by setting the following environment flag:
EXPERIMENTAL_RENDER_MODE_SERIALIZE=true
This flag is read by Ember CLI Fastboot's dependency; fastboot to alert it to produce DOM with the glimmer-vm's serialization element builder. This addon (Ember CLI Fastboot) then uses a utility function from glimmer-vm that allows it to know whether or not the DOM it received in the browser side was generated by the serialization builder. If it was, it tells the Ember.js Application to use the rehydration builder and your application will be using rehydration.
Rehydration is only compatible with fastboot > 1.1.4-beta.1, and Ember.js > 3.2.
Build Hooks for FastBootBuild Hooks for FastBoot
Disabling incompatible dependenciesDisabling incompatible dependencies
There are two places where the inclusion of incompatible JavaScript libraries could occur:
app.import in the application's
ember-cli-build.js
If your Ember application is importing an incompatible Javascript library,you can use
app.import with the
using API.
app;
app.import in an addon's
included hook
You can include the incompatible Javascript libraries by wrapping them with a
FastBoot variable check. In the browser,
FastBoot global variable is not defined.
var map = map;{var browserVendorLib = ...;browserVendorLib = ;return defaultTree browserVendorLib;}{// this file will be loaded in FastBoot but will not be eval'dapp;}
Note:
ember-cli-fastboot will no longer provide the
EMBER_CLI_FASTBOOT environment variable to differentiate browser and fastboot builds with rc builds and FastBoot 1.0 and above.
Loading additional assets in FastBoot environmentLoading additional assets in FastBoot environment
Often addons require to load libraries that are specific to the FastBoot environment):
{/*** manifest is an object containing:* {* vendorFiles: [<path of the vendor file to load>, ...],* appFiles: [<path of the app file to load>, ...],* htmlFile: '<path of the base page that should be served by FastBoot>'* }*/// This will load the foo.js before vendor.js is loaded in sandboxmanifestvendorFiles;// This will load bar.js after app.js is loaded in the sandboxmanifestappFiles;// remember to return the updated manifest, otherwise your build will fail.return manifest;}
Note:
process.env.EMBER_CLI_FASTBOOT will be removed in RC builds and FastBoot 1.0.
Therefore, if you are relying on this environment variable to import something in the fastboot environment, you should instead use
updateFastBootManifest hook.
Conditionally include assets in FastBoot assetConditionally include assets in FastBoot asset:
{let fastbootHtmlBarsTree;// check the ember version and conditionally patch the DOM apiif thisfastbootHtmlBarsTree = this;return tree ? tree fastbootHtmlBarsTree : fastbootHtmlBarsTree;return tree;}
The
tree is the additional fastboot asset that gets generated and contains the fastboot overrides.
Providing additional configProviding additional config
By default
ember-cli-fastboot reads the app's config and provides it in the FastBoot sandbox as a JSON object. For the app in browser, it respects
storeConfigInMeta and either reads it from the config meta tag or inlines it as JSON object in the
app-name/config/environment AMD module.
Addons like ember-engines may split the app in different bundles that are loaded asynchronously. Since each bundle is loaded asynchronously, it can have its own configuration as well. In order to allow FastBoot to provide this config in the sandbox, it exposes a
fastbootConfigTree build hook.
Addons wishing to use this hook simply need to return a unique identifier for the configuration with the configuration.
{return'<engine-name>':'foo': 'bar'}
The above configuration will be available in Node via the
FastBoot.config() function. Therefore, in order to get the above config, the addon/app can call
FastBoot.config('<engine-name>').
Known LimitationsKnown Limitations
While FastBoot is under active development, there are several major restrictions you should be aware of. Only the most brave should even consider deploying this to production.
No
didInsertElement
Since
didInsertElement hooks are designed to let your component
directly manipulate the DOM, and that doesn't make sense on the server
where there is no DOM, we do not invoke either
didInsertElement or
willInsertElement hooks. The only component lifecycle hooks called in
FastBoot are
init,
didReceiveAttrs,
didUpdateAttrs,
willRender,
didRender, and
willDestroy.
No jQueryNo jQuery
Running most of jQuery requires a full DOM. Most of jQuery will just not be supported when running in FastBoot mode. One exception is network code for fetching models, which we intended to support, but doesn't work at present.
Prototype extensionsPrototype extensions
Prototype extensions do not currently work across node "realms." Fastboot applications operate in two realms, a normal node environment and a virtual machine. Passing objects that originated from the normal realm will not contain the extension methods inside of the sandbox environment. For this reason, it's encouraged to disable prototype extensions.
TroubleshootingTroubleshooting
Because your app is now running in Node.js, not the browser, you'll need a new set of tools to diagnose problems when things go wrong. Here are some tips and tricks we use for debugging our own apps.
Verbose LoggingVerbose Logging
Enable verbose logging by running the FastBoot server with the following environment variables set:
DEBUG=ember-cli-fastboot:* ember serve
PRs adding or improving logging facilities are very welcome.
Developer ToolsDeveloper Tools
Thanks to recent improvements in NodeJS it is now possible to get a debugging environment that you can connect to with Chrome DevTools (version 55+). You can find more information on the new debugging method on Node's official documentation but here is a quick-start guide:
First let's start up the FastBoot server with Node in debug mode. One thing about debug mode: it makes everything much slower.
node --inspect-brk ./node_modules/.bin/ember serve
This starts the FastBoot server in debug mode. Note that the
--inspect-brk flag will cause your
app to start paused to give you a chance to open the debugger.
Once you see the output
Debugger listening on ws://127.0.0.1:<port>/<guid>, open Chrome
and visit chrome://inspect. Once it loads you should see an Ember target
with a link "inspect" underneath. Click inspect and it should pop up a Chrome inspector
window and you can click.
Note Regarding Node VersionsNote Regarding Node Versions
The above method only started working for the v8.x track of Node after version v8.4.0, which has a fix to this issue. If you are using any versions between v8.0 and v8.4 we would recommend upgrading to at least v8.4.0
For any versions prior to 6.4 the previous version of this documentation is still valid. Please follow those instructions here
TestsTests
Run the automated tests by running
npm test.
Note that the integration tests create new Ember applications via
ember new and thus have to run an
npm install, which can take several
minutes, particularly on slow connections.
To speed up test runs you can run
npm run test:precook to "precook" a
node_modules directory that will be reused across test runs.
Debugging Integration TestsDebugging Integration Tests
Run the tests with the
DEBUG environment variable set to
fastboot-test to see verbose debugging output.
DEBUG=fastboot-test npm test
QuestionsQuestions
Reach out to us in Ember community slack in the
#-fastboot channel. | https://preview.npmjs.com/package/ember-cli-fastboot | CC-MAIN-2021-31 | en | refinedweb |
Accordions are useful to show large data neatly by hiding and expanding the data. React Native doesn’t have its own accordion component. In this blog post, let’s check how to add an accordion component in react native.
In this react native example, we use Galio UI library which provides various useful components for react native. It’s a lightweight library and very simple to use.
Galio library uses react native vector icons library. Hence make sure to install react native vector icons properly in your project. You can follow installation instructions from here.
Now, install the Galio library using the command given below.
npm install galio-framework
In order to create an accordion component properly, we need to use two components of the Galio UI library named Block and Accordion.
Block is the main component and is needed to create any other components using the library. You can create an accordion component simply as given below.
<Block style={{ height: 200 }}> <Accordion dataArray={data} /> </Block>
The data passed down to the Accordion should have keys title, content and icon.
const data = [ { title: "First Chapter", content: "Lorem ipsum dolor sit amet", icon: { name: 'keyboard-arrow-up', family: 'material', size: 16, } }, { title: "2nd Chapter", content: "Lorem ipsum dolor sit amet" }, { title: "3rd Chapter", content: "Lorem ipsum dolor sit amet" } ]; Copy to clipboardErrorCopied
Following is the complete react native accordion example.
import React, {Component} from 'react'; import {View, StyleSheet} from 'react-native'; import {Accordion, Block} from 'galio-framework'; export default class App extends Component { constructor(props) { super(props); this.state = { data: [ { title: 'First Lesson', content: 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum', }, {title: 'Second Lesson', content: 'Lorem ipsum dolor sit amet'}, {title: 'Third Lesson', content: 'Lorem ipsum dolor sit amet'}, {title: 'Fourth Lesson', content: 'Lorem ipsum dolor sit amet'}, {title: 'Fifth Lesson', content: 'Lorem ipsum dolor sit amet'}, ], }; } render() { return ( <View style={styles.container}> <Block style={styles.block}> <Accordion dataArray={this.state.data} opened={null} /> </Block> </View> ); } } const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', alignItems: 'center', }, block: { height: 300, justifyContent: 'center', alignItems: 'center', }, });
And here’s the output of the example.
| https://reactnativeforyou.com/how-to-add-accordion-component-in-react-native/ | CC-MAIN-2021-31 | en | refinedweb |
In our previous article, Rapid application development on Microsoft Azure, we demonstrated how easy it is to create a Web App on Microsoft’s Azure. While we did show how to enable logging locally for a Web App, that is not really practical or useful for a real app.
For real-world applications, we need a logger that gets out of the way. Logging is not the true value of our application. Rather, we need a logger that is fast, to free up the maximum computing cycles for our business logic. And we need to collect the output of that logger (and others).
Pino is fast, very fast
Pino is an obvious choice to increase our application’s performance. Or at least to minimise the impact of logging on our app’s performance.
Pino was created by Matteo Collina of NearForm. Pino logs only in JSON via a binary stream (a file or stdout). Pino outperforms the other existing loggers by up to 6x.
If you are used to Bunyan, Bole, or Winston, in 99% of cases you should just be able to drop in Pino.
Pino plays nicely with Hapi (& Express)
Pino plays nicely with a lot of packages. Keeping in line with our article series we will be using Hapi. Hooking up Pino with Hapi is pretty straight forward using the hapi-pino npm module:
register(require('hapi-pino'), (err) => { if (err) { console.error(err) process.exit(1) } // Start the server server.start((err) => { if (err) { console.error(err) process.exit(1) } }) })
This creates logs on all our routes of the format:
{"pid":11518,"hostname":"local-server","level":30,"time":1499413842512,"msg":"server started","created":1499413842353,"started":1499413842502,"host":"localhost","port":1337,"protocol":"http","id":"local-server:11518:j4tkad9t","uri":"","address":"127.0.0.1","v":1}
There are two additional ways of logging available to pass down extra information:
// the logger is available in server.app server.app.logger.warn('Pino is registered')
// and through Hapi standard logging system server.log(['subsystem'], 'standard logging way for accessing it')
To see more examples of pino output:
- clone our pino branch
- run
npm install
- run
npm run watch
- in another tab run
curl localhost:1337
Caveat Emptor: Pino & Hapi & TypeScript
Pino also provides server.logger().info(‘some info’) as a form of logging with Hapi. However, if you have a TypeScript project then you are restricted to the type definitions supplied by @types/hapi, and they do not allow this form of logging yet.
Log aggregation with Event Hubs
Now we have a logger working, but a feed on a local machine is not that useful. We need to be able to get this information into a dashboard.
Here we need to take a step back and consider our architecture. We are deploying an app in the cloud, i.e., on a VM or container. Presumably, there are multiple instances of your app, and their other instances of, e.g., databases. There will also be other services if you are doing a micro-services deploy. You might also be collecting client-side or IoT data. Connecting all of these to a dashboard in parallel might overwhelm it. It makes more sense to connect to a log aggregation service. Event Hubs is the obvious candidate since this is part of our series on Azure.
Microsoft describes Event Hubs as:. With the ability to provide publish-subscribe capabilities with low latency and at massive scale, Event Hubs serves as the “on ramp” for Big Data.
To collect our pino logs, we will provide a Web App and an Event Hub. Then pipe the logs from the Web App to the Event Hub:
Event Hubs can be used even if part of your architecture is on another cloud or own infrastructure. We will make use of this during development below to test the connection from our local machine to an Event Hub.
That said, it is still necessary for us to connect a dashboard by subscribing it to our Event Hub. We will do so in the next article.
Note that when using Event Hub there is a delay of 10 minutes when viewing data on its dashboard. Keep this in mind as you build your own application.
Provision an Event Hub
In our previous article we demonstrated how to provision a Web App. Assuming you have an Azure and a deployment user, then this simple bash script will get you started:
#! /bin/bash GROUP_NAME=nearGroup LOCATION=westeurope PLAN_NAME=nearPlan SKU=B1 APP_NAME=pino007 CREATE_GROUP="az group create -n $GROUP_NAME --location $LOCATION" echo $CREATE_GROUP $CREATE_GROUP CREATE_PLAN="az appservice plan create -n $PLAN_NAME -g $GROUP_NAME --sku $SKU" echo $CREATE_PLAN $CREATE_PLAN CREATE_WEBAPP="az webapp create -n $APP_NAME -g $GROUP_NAME -p $PLAN_NAME" echo $CREATE_WEBAPP $CREATE_WEBAPP SET_WEBAPP_SOURCE="az webapp deployment source config-local-git -n $APP_NAME -g $GROUP_NAME --query url --output tsv" echo $SET_WEBAPP_SOURCE $SET_WEBAPP_SOURCE
Please remember to replace
APP_NAME with a unique name of your choice. Note that while we could deploy a Web App for free, Event Hubs requires at least the Basic 1 (BI) plan.
In case you have not done it before set your Azure git repository as a target.
git remote add azure
Provisioning Event Hubs is possible using the Azure CLI. But, that is kind of tricky since we need to use templates. To simplify things for this example, we will use the web interface.
Under create a (unique) namespace. We used
pinoEventHubNS (which will be accessible via,
pinoEventHubNS.servicebus.windows.net hence why it needs to be unique.):
Then create
eventHub called
pinoEventHub:
Here you can see the Event Hub
pinoEventHub linked under the namespace
pinoEventHubNS:
Then create Shared Access Policy called
sendPinoEvent:
Congratulations you have provisioned a simple Web App infrastructure, including a log ingestor. More can be done to fine tune Event Hubs provisioning, but for now this suffices.
Send your first event
To be able to send your first event to an Event Hub, all we need to do is create a Shared Access Signature (SAS).
First, you will need the data from Shared Access Policy under
CONNECTION STRINGPRIMARY KEY, eg,
Endpoint=sb://pinoeventhubns.servicebus.windows.net/; SharedAccessKeyName=sendPinoEvent; SharedAccessKey=; EntityPath=pinoeventhub
Using Microsoft’s example as basis we created a simple function:
functioncreateSignature (uri, ttl, sapk) { const signature = encodeURIComponent(uri) +'\n'+ ttl const signatureUTF8 = utf8.encode(signature) const hash = crypto.createHmac('sha256', sapk) .update(signatureUTF8) .digest('base64') return encodeURIComponent(hash) }
Where the
ttl is the expiry date of the SAS in Unix time. And the
uri is simply.
Armed with your SAS, you can now send an event:
curl -sl -w "%{http_code}" -H 'Authorization: SharedAccessSignature sr=http%3A%2F%2Fpinoeventhubns.servicebus.windows.net%2Fpinoeventhub&sig=&se=&skn=sendPinoEvent' -H 'Content-Type:application/atom+xml;type=entry;charset=utf-8' --data '{ "event": "hello world" }'
Which should return
201, denoting a successful
Piping stdout to an Event Hub
We can now successfully post to an Event Hub, now all we need is to send all the Pino logs to such a posting mechanism. Since Pino logs to stdout, all we need is something like
node yourapp.js | eventhub.
What we have is a stream of output from pino. First, we can break those into discrete events using split2. Then,
POST these lines in batches (for better performance) to an Event Hub with a writable stream in object mode using https.
Something similar to:
const writable = new Writable({ objectMode: true, writev: function (lines, done) { const events = lines .map(line => { return `{"UserProperties":${line}}` }) .join(',') const req = https.request(options, function () { done() }) req.write(`[${events}]`) req.end() }, write: function (line, enc, done) { const req = https.request(options, function () { done() }) req.write(line) req.end() }, }) pump(split2(), writable)
We have captured this in an npm module called pino-eventhub. You have already installed this as part of your download, and we can test this on your local machine using:
node build/index.js | ./node_modules/.bin/pino-eventhub -s pinoeventhubns -e pinoeventhub -n sendPinoEvent -a -x
To see it working:
- run
curl localhost:1337
- open
- click on pinoEventHubNS (or your equivalent Event Hub namespace)
- wait 10 minutes, and you will see a blip under incoming messages
The alternative: AMQP
Alternatively, AMQP could be used to send the logs to an Event Hub. We stuck to
https for simplicity.
Deploying to Web App
We have our basic Web App setup done already. However, to keep in line with Twelve-Factor App development, we want to simplify our
npm run start command to
node build/index.js | pino-eventhub. We have setup our npm module to take environment variables, so this can be done using the Azure CLI:
az webapp config appsettings set -n pino007 -g nearGroup --settings \ PINO_EVENT_HUB_NAMESPACE=pinoeventhubns \ PINO_EVENT_HUB=pinoeventhub \ PINO_SHARED_ACCESS_POLICY_NAME=sendPinoEvent \ PINO_SHARED_ACCESS_SIGNATURE= \ PINO_SAS_EXPIRY=
Now all we need to do is:
git push azure pino:master az webapp browse -n pino007 -g nearGroup
Again, you can go to your Event Hubs dashboard on the Azure Portal to see your events arrive.
Alternative: Application Insights
Instead of using Pino with Event Hubs, we could’ve opted for a solution like Application Insights (AI).
According to Microsoft:
Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your live Web App. It will automatically detect performance anomalies. It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app. It’s designed to help you continuously improve performance and usability.
All the functionality AI ties into looks great. AI, however, tightly couples to your application through monkey-patching. At the moment AI is instrumented for Bunyan, Console, MongoDB, MongoDB-Core, Mysql and Redis. However, for this application, we are looking at the lightest/fastest possible logging solution. Hence, Event Hubs made more sense for us.
Conclusion
Building on our previous article, we wanted to be able to collect the log files from our application. We showed that it is easy to provision an Event Hub on the Azure portal, using Pino as our logger. After creating credentials (using the tool we provided), your Web App can stream to an Event Hub from a single-line command.
In the next article in the series, we will select from the logs collected in an Event Hub, and represent them visually. | https://www.nearform.com/blog/collect-fast-pino-logs-in-an-azure-event-hub/ | CC-MAIN-2021-31 | en | refinedweb |
callback function c++
I have been trying to implement a call back function with no success.
Here is the problem. I have a subscriber that is supposed to listen to a topic. Let's call it A. A is of type std_msgs/Float32
How can I implement a callback function?
#include "ros/ros.h" #include "geometry_msgs/Twist.h" #include "std_msgs/Float32.h" #include <sstream> using namespace std; ros::Publisher velocity_publisher; ros::Subscriber pose_sub; std_msgs::Float32 ball_pose; void poseCallback(const std_msgs::Float32::ConstPtr & pose_message); //void moveUp(std_msgs::Float32 distance_tolerance); int main(int argc, char **argv) { ros::init(argc, argv, "sphero_move"); ros::NodeHandle n; velocity_publisher = n.advertise<geometry_msgs::twist>("/cmd_vel", 1000); pose_sub = n.subscribe("/ball_pose_x", 10, poseCallback); ros::Rate loop_rate(0.5); //moveUp(30.00); loop_rate.sleep(); ros::spin(); return 0; } void poseCallback(const std_msgs::Float32::ConstPtr & pose_message) { ball_pose = pose_message->data; } /**void moveUp(std_msgs::Float32 distance_tolerance) { geometry_msgs::Twist vel_msg; ros::Rate loop_rate(10); do{ vel_msg.linear.x = 25; vel_msg.linear.y = 0; vel_msg.linear.z = 0; velocity_publisher.publish(vel_msg); ros::spinOnce(); loop_rate.sleep(); }while((ball_pose-500)<distance_tolerance); }**="" <="" pre="">
I want to be able to update the position of the robot in every iteration to be able to act on the current position, since the robot is moving.
Here is the error I am receiving.
/home/sphero/catkin_ws/src/sphero_controller/src/sphero_move.cpp: In function ‘void poseCallback(const ConstPtr&)’: /home/sphero/catkin_ws/src/sphero_controller/src/sphero_move.cpp:34:12: error: no match for ‘operator=’ (operand types are ‘std_msgs::Float32 {aka std_msgs::Float32_<std::allocator<void> >}’ and ‘const _data_type {aka const float}’) ball_pose = pose_message->data; ^ In file included from /home/sphero/catkin_ws/src/sphero_controller/src/sphero_move.cpp:3:0: /opt/ros/kinetic/include/std_msgs/Float32.h:22:8: note: candidate: std_msgs::Float32_<std::allocator<void> >& std_msgs::Float32_<std::allocator<void> >::operator=(const std_msgs::Float32_<std::allocator<void> >&) struct Float32_ ^ /opt/ros/kinetic/include/std_msgs/Float32.h:22:8: note: no known conversion for argument 1 from ‘const _data_type {aka const float}’ to ‘const std_msgs::Float32_<std::allocator<void> >&’ sphero_controller/CMakeFiles/sphero_move_node.dir/build.make:62: recipe for target 'sphero_controller/CMakeFiles/sphero_move_node.dir/src/sphero_move.cpp.o' failed make[2]: * [sphero_controller/CMakeFiles/sphero_move_node.dir/src/sphero_move.cpp.o] Error 1 CMakeFiles/Makefile2:808: recipe for target 'sphero_controller/CMakeFiles/sphero_move_node.dir/all' failed make[1]: [sphero_controller/CMakeFiles/sphero_move_node.dir/all] Error 2 Makefile:138: recipe for target 'all' failed make: ** [all] Error 2
Do you have more of the code you could post? Here you are showing that there is a
Subscriberbut you don't set it to anything. That is where you would tell your node to go to a callback function when messages are received on a topic. Where is your
mainfunction?
What error are you getting? Or how do you know that the code is not entering the callback function?
Do you still need me to add the errors? I think the problem is coming from the type of that
The example is now doing more than one thing: setting up a subscriber with a callback function and doing things in
moveUp(). And
moveUp()may not return. I would suggest getting just the callback working first. And try to say what you expect to see and what you actually get.
I've updated it with errors. thanks | https://answers.ros.org/question/243762/callback-function-c/ | CC-MAIN-2021-31 | en | refinedweb |
Wikiversity:Curator Mentorship
(Redirected from Wikiversity:Custodian Mentorship)Jump to navigation Jump to search
In most cases, new custodians go through a "mentorship" process, where an experienced custodian helps a prospective custodian learn the ropes as a probationary custodian, and then provides the community with a recommendation.
This is a program within Wikiversity:Mentors.
Outline[edit source]
All going well[edit source]
- A mentorship relationship is established.
- The mentor makes a request to a bureaucrat (or if he/she is one, just promote).
- Mentored has three months to learn the tools and show him/herself to be appropriate for custodianship.
- Mentor recommends the mentored for full custodianship, community votes.
Not so well[edit source]
- Mentor is responsible for the mentored, and should immediately go to the stewards if sysop tools are being misused.
- If at any point in the probationary period the mentor decides that he is or will be unwilling to recommend, the mentored has 48 hours to find a new mentor, otherwise the tools will be removed by the stewards on the mentor's request.
- If the mentor is unexpectedly unavailable or the mentored wishes to have a different mentor (for any reason whatsoever), mentorship can be transferred to another willing mentor for the remainder of the probationary period.
- If at the end of the probationary period the mentor decides not to recommend the mentored for full custodianship, the mentored may either initiate a self-nomination, or find another mentor (any additional probationary period to be decided in conversation with the new mentor).
Curator skills[edit source]
The following skills are mastered and demonstrated at some point during the probationary period:
- Monitor Wikiversity:Request custodian action and Wikiversity:Notices for custodians
- Welcome new users
- Respond to Colloquium questions and requests
- Review the Template:Administering Wikiversity resource list
- Move a page without redirect
- Move a page with subpages
- Move a page with delete
- Delete a page
- Monitor Wikiversity:Import and import content from a sister project
- Review one or more Maintenance reports and make appropriate corrections
- Work with other curators and custodians to effectively support Wikiversity
Custodian skills[edit source]
- Undelete a page
- Merge page history
- Hide revisions
- Monitor the AbuseLog and create or update an abuse filter
- Edit one or more pages in the MediaWiki namespace
- Block a user or IP address and monitor Category:Requests for unblock
- Work with other curators and custodians to effectively support Wikiversity
More information about these skills can be found at How to be a Wikimedia sysop and at Wikiversity:Custodianship. | https://en.wikiversity.org/wiki/Wikiversity:Custodian_Mentorship | CC-MAIN-2021-31 | en | refinedweb |
Is there a long term roadmap for emacs research and development that someone could point to? Having done my homework I find nothing out there except rumour and myth. For instance: - Is multi-threading coming to emacs 25? - Is there realistic support for replacing elisp with guile? Is that considered possible even? - If elisp is the future, what type of changes are envisaged? - double escape regex fix? - lexical closures (to support no namespaces)? - first-class print for functions? More generally, when can we get turtles all the way down and enjoy the return of the symbolic machine? | https://lists.gnu.org/archive/html/emacs-devel/2012-12/msg00447.html | CC-MAIN-2021-31 | en | refinedweb |
width of the screen window in pixels (Read Only).
This is the actual width of the player window (in full-screen it is also the current resolution).
using System.Collections; using System.Collections.Generic; using UnityEngine;
public class Example : MonoBehaviour { void Start() { //Output the current screen window width in the console Debug.Log("Screen Width : " + Screen.width); } } | https://docs.unity3d.com/2020.2/Documentation/ScriptReference/Screen-width.html | CC-MAIN-2021-31 | en | refinedweb |
Contents
This chapter is normative.
The DTD modularization framework specification speaks at length on the subject of abstract modules. In brief, an "abstract" module is simply a set of objects, in this case objects within an ordered hierarchy of content objects, which encapsulates all of the features of the objects and assembles them into a coherent set. This set of objects and their properties is independent of its machine representation, and so is the same whether written in DTD module form, as a Schema module, or as a Java class.
The abstract modules described in XHTML-MOD are composed in a functional manner, and each "abstract module" contains data structures that are generally functionally similar. (There is no requirement that modules be created along functional lines; any other method that suits the author's purpose may be used instead.)
The framework described here makes use of the same abstract modules as in XHTML-MOD with few exceptions. In the case of the schema module representation, the relationship between the "abstract" modules and the schema modules is quite close. In each case there is a one-to-one relationship between the abstract and concrete modules (with one exception for the changes to the legacy module) and they share essentially the same names and data structures.
These modules must be included in any document that uses the XHTML namespace. Each section below describes the purpose of the module and its contents.
None of the modules defined here should be modified by developers; instead use <redefine> or a substitution group.
This is a module container for XHTML language support modules.
The character entities module includes three notation elements within an <appinfo> element, each referencing one of the required entity sets in XHTML: ISO Latin-1, Symbols, and Special characters.
Character entities are not fully supported in XML Schema, as described in Section 2.1.
These are the core element definitions for the required modules.
Block Phrasal
Block Structural
Inline Phrasal
Inline Structural
These modules are (clearly) optional; they may be removed or combined arbitrarily (except for dependencies). Developers should not modify the contents of these files as they part of the XHTML definition. Instead, extension in the optional modules should be confined to redefinitions and derivations.
This module has been reorganized to conform to the framework conventions used here. It has been divided here into two separate modules. The "misc" module contains everything in the DTD legacy model except frames. Frames are now in a separate module called framedefs. This allows the developer to easily separate the legacy features if desired.
Frames
Target
Iframe
Ruby elements denote annotations used in some Asian languages. [RUBY]
The Ruby module has been moved into the optional element definitions module. Note that it is normatively required in XHTML 1.1
This is an example base schema document that includes all the other modules to create the complete schema.
The hub document included here intends to approximate XHTML 1.1 subject to the requirements given in Section 1.4. This schema should be fully equivalent to the DTD version except for schema-specific additions and changes. This hub document is non-normative and provided only as an example.
The purpose of any language definition, regardless of its basis on DTDs, XML Schema, or some other representation, is the same: to determine if a specific document instance conforms to the language definition. In XML Schema terms, this means that documents can be validated using the schema. The validation process attempts to determine the document's structural integrity, and the behavior of any XML processor in cases of validation errors is well-defined in the XML 1.0 specification. Therefore the real test of any modularization system for XHTML is whether the resulting schema can be used to determine if any particular XHTML document instance is valid.
This document does not attempt to define conformance beyond the ability to validate the structural integrity of documents. In particular it does not attempt to describe any level of user-agent conformance, as this is not a modularization issue, but an issue for the specification of the language semantics. Conformance to the XML Schema-based modularization framework is strictly defined in terms of document validation. Further levels of conformance are described in the published language specifications themselves.
Schemas defining language variants within the XHTML namespace may be considered to be conformant if they:
An XML Schema or set of Schema modules can be considered to be conformant to this schema modularization framework if they follow the schema modularization framework conventions described in Section 2.2.
The XHTML Family of Documents is defined as the set of language variants that use the XHTML namespace as the namespace of the root element, which must be <html>.
In order to be a conformant member of the XHTML Family of Documents, an XML Schema or set of schema modules must:
This class of document definitions includes both XHTML language variants and compound document types using external modules.
Versioning of modules that claim conformance to this specification is subject to the framework conventions in Section 2.2. Versioning information should be available in the version block section of each conformant module. | http://www.w3.org/TR/2001/WD-xhtml-m12n-schema-20011219/schema-modules.html | CC-MAIN-2017-04 | en | refinedweb |
System.Security.Authentication.ExtendedProtection Namespace
The System.Security.Authentication.ExtendedProtection namespace provides support for authentication using extended protection for applications.
The design of Integrated Windows Authentication (IWA). | https://msdn.microsoft.com/en-us/library/dd454743.aspx | CC-MAIN-2017-04 | en | refinedweb |
This is your resource to discuss support topics with your peers, and learn from each other.
07-04-2012 02:14 AM
How can I use a standard qml component such as a Rectangle in the cascade project? I tried something like this and it doesnt works..
import bb.cascades 1.0 import QtQuick 1.0 Rectangle { width: 360 height: 360 Text { text: "Hello World" anchors.centerIn: parent } MouseArea { anchors.fill: parent onClicked: { Qt.quit(); } } }
Nothing is shown in the simulator, and the console says something like...
QPixmap: Cannot create a QPixmap when no GUI is being used
QPixmap: Cannot create a QPixmap when no GUI is being used
how?
07-04-2012 12:04 PM - edited 07-04-2012 12:07 PM
Welcome to the forums!
As you've probably seen, Cascades is a rich environment, but still under construction. You have cascades UI classes that give you a native look and feel, including a responsive UI that is not guaranteed in Qt applications. To get this, you use cascades UI augmented by native calls and non-UI Qt classes. You can use some Qt UI classes in some cases when they are helper classes not actively involved in the UI. But you either use the cascades UI or the Qt UI, not a mix. This extends to the QML: in cascades you can use the cascades classes or your own components.
(You can write a fully Qt application, if you are willing to give up the deep integration with the OS and the signature Blackberry experience provided through Cascades.)
So: don't use QtQuick. But do use classes documented here:
(the documentation applies to both C++ and cascades QML)
What are you trying to achieve? There may be another way to do what you are looking for. I suspect you are looking for a Button, Label, TextArea or TextField. Download the cascadescookbookqml sample and see if there is something there that does what you are looking for.
Stuart
07-04-2012 09:43 PM
I just want to be able to use qt quick components so that it is cross platform, workable on windows,linux,symbian,etc as well..
Other platforms do allow us to use a mix....
I understand about the advantages of having deep integration with blackberry os and native look and feel through cascade.. but cross platform is also what developers look at nowadays..
07-04-2012 09:54 PM
You also mentioned that we can have a fully cascade UI or Qt UI application, not a mix. Then can we have a fully Qt Quick application?
07-05-2012 09:03 AM
As far as I know this works fine but I have never tried it. It is mentioned in a number of threads. Fully Qt applications are supported.
Stuart
07-09-2012 09:14 AM
Not sure if this is related to your question, but see:
Stuart
07-09-2012 11:03 PM
thanks for the info..
i have been trying various method and struggling till now..
what i want to do is very simple.. just create a blackberry project that I can use qt quick components in it....
but i still can't get it running on the simulator...
07-17-2012 02:22 PM
Have you succeeded making a Qt Quick application yet?
There are several threads discussing how to make and deliver a Qt application. Check both this forum and the native forum.
Do you still have a specific question not covered by one of the existing threads on Qt applications?
Stuart
07-18-2012 05:35 PM
Check this thread:
Should get you started
07-23-2012 03:51 PM
Are you up and running yet?
If one of the posts answered your question, please mark it as a solution.
Otherwise, perhaps you could word your question differently, perhaps with a simple HelloWorld (use the icon with the little clipboard with a C to include the code), and maybe with some lo-res pictures to illustrate what you are seeing.
Stuart | https://supportforums.blackberry.com/t5/Native-Development/using-standard-qml-components/m-p/1816969 | CC-MAIN-2017-04 | en | refinedweb |
On Tue, Oct 16, 2001 at 12:37:47PM +1000, Jo Bourne wrote:
> Hi,
>
> I am trying to upgrade our c1.8.2 sites to c2 and I think I have found a bug in the util
logic sheet, I am trying to include an remote xml page using xsp. first i tried using pages
from one of our c1.8.2 that worked fine before upgrading (I did change the xsp namespace declarations)
and when this failed I have pruned back and back until all I have is this:
>
> <?xml version="1.0" ?>
> <xsp:page xmlns:
> <page>
> <xsp:logic>
> <util:include-uri>
> <util:href></util:href>
> </util:include-uri>
> </xsp:logic>
> </page>
> </xsp:page>
yep, it's a bug, I'm able to reproduce it with cocoon, however not using
standalone application of util.xsl and xsp.xsl onto your page...
I'm going into it now and will let you know later on what's the result...
>
> <xnip/>
>
-- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200110.mbox/%3C20011016145846.A22173@renie.dyndns.org%3E | CC-MAIN-2017-04 | en | refinedweb |
#include <iostream> using namespace std; int count(int n, int counter, int n1) { if(n ==2) return counter; int base_n = 1, sum; else { sum += n1; if(sum <= n) count(n, counter++, n1++); } } int main() { int n, int counter = 1, n1 = 1; cout << "enter n (greater than or equal to 2): "; cin >> n; cout << count(n, counter, n1); return 0; }
I am trying to get a head start on my lab, but it has been awhile since I've done any C++ programming. Hoping that someone can help guide me in the right direction for the recursive design. | http://www.dreamincode.net/forums/topic/122047-counting-to-n-how-many-different-summations/ | CC-MAIN-2017-04 | en | refinedweb |
MTH 133 - UNITS 3-5
1)Solve the following equations algebraically. You must show all your work
2)Solve algebraically and check your potential solutions:
3)The volume of a cube is given by V = s3, where s is the length of a side. Find the length of a side of a cube if the volume is 800 cm3. Round the answer to three decimal places
4)For the following function, C computes the cost in millions of dollars of implementing a city recycling project when x percent of the citizens participate. ...
5)a)If , fill in the following table for x = 0, 1, 2, 3, 4. Round to three decimal places where necessary.
b)Explain why no negative values are chosen as values to substitute in for x.
c)Graph in MS Excel and paste your graph here.
6)The formula for calculating the amount of money returned for an initial deposit into a bank account or CD (certificate of deposit) is given by...
a) Calculate the return (A) if the bank compounds annually (n = 1). Round your answer to the hundredth's place
b)Calculate the return (A) if the bank compounds quarterly (n = 4). Round your answer to the hundredth's place
c)If a bank compounds continuously, then the formula used is ...
7) A commonly asked question is, "How long will it take to double my money?" At 8% interest rate and continuous compounding, what is the answer? Round your answer to the hundredth's place.
8)Suppose that the.
(Please see the attachment).
Solution Summary
A Complete, Neat and Step-by-step Solution is provided in the attached file. | https://brainmass.com/math/fractions-and-percentages/mth-133-units-3-5-319976 | CC-MAIN-2017-04 | en | refinedweb |
Creating a custom atomic scan plug-in
In my previous article where I introduced atomic scan, I largely talked about using atomic to scan your containers and images for CVE Vulnerabilities. I also discussed how atomic scan had been architected to a plug-in approached so that you can implement your own scanners. The plug-ins do not have to focus on vulnerabilities, it could be as simple a scanner that collects information about containers and images.
In this blog, I will walk through how you can create your own custom scanner within the atomic plug-in framework.
The components of a scan plug-in
Creating a scanner plug-in for atomic mostly revolves around:
- making atomic aware of your plug-in
- ensuring the right input
- dealing with the output
Make atomic aware of your plug-in
To make atomic aware of your plug-in, you must deliver a configuration file that describes your atomic into one of atomic’s configuration directory. One a proper file is in place, atomic can then use your plug-in. This also entails the use of some ‘installation’ technique. Both are described below.
Configuration File
The configuration file for plug-ins resides in /etc/atomic.d. Atomic ships a single plug-in
configuration for scanning with the openscap project. The plugin format is as follows:
type: scanner scanner_name: image_name: fully-qualified image name default_scan: custom_args: [optional] scans: [ { name: scan1, args: [... ], description: "Performs scan1"}, { name: scan2, args: [...], description: "Performs scan2" } ]
The scanner arguments must be in list format. You must also define one of your of your scans as the default scan for your scanner plug-in. You can also add an optional key and value for custom_args which allows you to add custom arguments to the docker command. This is typically used for bind mounting additional directories on the host file system for your scanning application. And finally, the image name must be fully-qualified because this is used to pull the image if it is not already local.
Installing your configuration file
The preferred way to install your plugin’s configuration file is through the use of the atomic’s install command. This command will execute the INSTALL label on the image. Typically, the INSTALL label is a combination of a temporary docker container command and a script to be executed by the container. An example INSTALL label might look like this:
LABEL INSTALL ‘docker run -it --rm -v /etc/atomic.d:/host/etc/atomic.d ${IMAGE} install.sh’
And the corresponding install.sh could be as simple as:
#/bin/sh echo “Installing configuration file for PLUGIN_NAME” cp -v /PLUGIN_NAME /host/etc/atomic.d
The destination of the configuration file in the install script is preceeded by /host because that is where the host’s /etc/atomic.d/ is bind mounted into the container as described by the INSTALL label above.
Input from atomic
As of now, atomic scan can take four different inputs for which containers or images to scan. They are:
- –images (scan all images)
- –containers (scan all containers)
- –all (scan all containers and images)
- a list of images or containers (provide a list of image or container names or IDs.
Atomic will then mount the filesystem of each container or image to a time stamped directory under /run/atomic/time-stamp. Each container or image will be mounted to a directory with its ID. So for example, if you were scanning two images that had IDs of cef54 and b36fg respectively, the directory structure would look like:
/run/atomic/time-stamp/ cef54.../ b36fg.../
When atomic runs your scanning image, it will always mount /run/atomic/time-stamp to your container’s /scanin directory. Your scanning container simply needs to walk the first level of directories under /scanin for processing. And because the directories are named with the ID of the object, you have a nice key to organize your output data.
Output from atomic
Just like how atomic will bind mount the chroots to /scanin, it also bind mounts a /scanout directory to the container. On the host, the /scanout directory is actually mapped to
/var/lib/atomic/scanner-name/time-stamp. Atomic expects you to put your output in the /scanout directory, again organizing your output data by directory names that correlate to the IDs of the object. You can output whatever data files you want but you must output a json file in each directory that follows the required template so that atomic can display some information to stdout for the user.
An example of what this directory structure looks like on the host can be as follows:
/var/lib/atomic/scanner-name/time-stamp/ cef54../ json b36fb../ json
JSON template
The required JSON template must be formed as follows:
{ "Time": "timestamp", "Finished Time": "timestamp", "Successful": "true", "Scan Type": "Description of scan", "UUID": "/scanin/ID_of_object", "CVE Feed Last Updated": "timestamp", "Scanner": "scanner_name", "Vulnerabilities": [ { "Custom": { "custom_key1": "custom_val1", "custom_key2": [ { "custom_key3": "custom_val3", "custom_key_4": "custom_val4" ...
If the type of scanning you are performing is not related to CVEs or identifying vulnerabilities, you
can change the Vulnerabilities key results Results. Notice that you can use the custom tag to
add custom outputs. Atomic will recursively follow the custom tag and will output the key and values
verbatim as they are.
A sample custom scanner
If you want to create a custom scanner plugin for Atomic, you need to have prepared the following elements:
- A configuration file that describes your scanner plug-in
- An install script that prepares the host to run your scanning application
- Your scanning application
- An image that contains all of the above.
In my example below, I have created a custom scan plug-in that allows you to list all the RPMs in an image, which is the default scan type for my image. I also provide an alternative scan type that allows you to list the OS version of each image.
Configuration file
This configuration file is what enables your scanner plug-in with Atomic. Note that you can provide one or more types of scans in your configuration file but you must set a default. In the case of my custom configuration file, there are two scan types defined: rpm-list and get-os. Notice how they each call the python executable with a different argument which allows me to differentiate between the two.
type: scanner scanner_name: example_plugin image_name: example_plugin default_scan: rpm-list custom_args: ['-v', '/tmp/foobar:/foobar'] scans: [ { name: rpm-list, args: ['python', 'list_rpms.py', 'list-rpms'], description: "List all RPMS", }, { name: get-os, args: ['python', 'list_rpms.py', 'get-os'], description: "Get the OS of the object", } ]
Install script
The install script is used by the atomic install command to put your scanner’s configuration file in the correct directory on the host file system. The atomic install command uses the INSTALL label in your image to call the install script you have provided. The following is a simple install script that copies my example_plugin configuration file to /etc/atomic.d on the host file system using the bind mount defined in the INSTALL label (shown in the Dockerfile below).
#/bin/bash echo "Copying example_plugin configuration file to host filesystem..." cp -v /example_plugin /host/etc/atomic.d/
Executable
Obviously a scanner can be very complex. My example scanner here is a relatively simple python executable that can list the RPMs in an image or show its OS version. Note how in the python executable, the results are written to json files in the required template.
import os import subprocess from datetime import datetime import json from sys import argv class ScanForInfo(object): INDIR = '/scanin' OUTDIR = '/scanout' def __init__(self): self._dirs = [ _dir for _dir in os.listdir(self.INDIR) if os.path.isdir(os.path.join(self.INDIR, _dir))] def list_rpms(self): for _dir in self._dirs: full_indir = os.path.join(self.INDIR, _dir) # If the chroot has the rpm command if os.path.exists(os.path.join(full_indir, 'usr/bin/rpm')): full_outdir = os.path.join(self.OUTDIR, _dir) # Get the RPMs cmd = ['rpm', '--root', full_indir, '-qa'] rpms = subprocess.check_output(cmd).split() # Construct the JSON rpms_out = {'Custom': {}} rpms_out['Custom']['rpms'] = rpms # Make the outdir os.makedirs(full_outdir) # Writing JSON data self.write_json_to_file(full_outdir, rpms_out, _dir) def get_os(self): for _dir in self._dirs: full_indir = os.path.join(self.INDIR, _dir) os_release = None for location in ['etc/release', 'etc/redhat-release','etc/debian_version']: try: os_release = open(os.path.join(full_indir, location), 'r').read() except IOError: pass if os_release is not None: break full_outdir = os.path.join(self.OUTDIR, _dir) # Construct the JSON out = {'Custom': {}} out['Custom']['os_release'] = os_release # Make the outdir os.makedirs(full_outdir) # Writing JSON data self.write_json_to_file(full_outdir, out, _dir) @staticmethod def write_json_to_file(outdir, json_data, uuid): current_time = datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f') json_out = { "Time": current_time, "Finished Time": current_time, "Successful": "true", "Scan Type": "List RPMs", "UUID": "/scanin/{}".format(uuid), "CVE Feed Last Updated": "NA", "Scanner": "example_plugin", "Results": [json_data], } with open(os.path.join(outdir, 'json'), 'w') as f: json.dump(json_out, f) scan = ScanForInfo() if argv[1] == 'list-rpms': scan.list_rpms() elif argv[1] == 'get-os': scan.get_os()
Dockerfile
The Dockerfile for my example plug-in is very simple. It contains an INSTALL label so that atomic install will function properly. And besides the RHEL base image, it simply adds the example plugin configuration file, the scanner executable, and the install.sh script itself.
from registry.access.redhat.com/rhel7:latest LABEL INSTALL='docker run -it --rm --privileged -v /etc/atomic.d/:/host/etc/atomic.d/ $IMAGE sh /install.sh' ADD example_plugin / ADD list_rpms.py / ADD install.sh /
The user experience
As is the mission of the atomic application, the user experience for using the new scanner image is very crisp and easy. The first step in using the image is to use atomic install to prepare the host operating system. In the case of this example, we simply need to ‘expose’ the example_plugin configuration file from the image to /etc/atomic.d/ on the host.
# sudo atomic install example_plugin docker run -it --rm -v /etc/atomic.d/:/host/etc/atomic.d/ example_plugin sh /install.sh Copying example_plugin configuration file to host filesystem... '/example_plugin' -> '/host/etc/atomic.d/example_plugin' #
With atomic install, if the image is not local, atomic will pull the image from the correct repository onto your host. In the example case, the image was already local. Now the host is aware of the new plugin and we can verify what scanning options are available to us with the atomic scan command.
# sudo atomic scan --list Scanner: openscap * Image Name: openscap Scan type: cve_scan * Description: Performs a CVE scan based on known CVE data Scan type: standards_scan Description: Performs a standard scan Scanner: example_plugin Image Name: example_plugin Scan type: rpm-list * Description: List all RPMS Scan type: get-os Description: Get the OS of the object * denotes default
When viewing the list of available scan options, notice how asterisks (*) are use to denote defaults. In this case, you can see that ‘openscap’ is the default scanner and its ‘cve_scan’ is the default scan type. For our example plugin, ‘rpm-list’ is the default scan type and ‘get-os’ is an additional scan type. You can set the default scanner in the /etc/atomic configuration file.
You can use the ‘–scanner’ option in atomic scan to switch scanners and if no ‘–scan_type’ is provided, it will use the default scan type declared for that scanner.
# sudo atomic scan --scanner example_plugin registry.access.redhat.com/rhel7:latest docker run -it --rm -v /etc/localtime:/etc/localtime -v /run/atomic/2016-05-18-13-44-57-660748:/scanin -v /var/lib/atomic/example_plugin/2016-05-18-13-44-57-660748:/scanout:rw,Z -v /tmp/foobar:/foobar example_plugin python list_rpms.py list-rpms registry.access.redhat.com/rhel7:latest (c453594215e4370) The following results were found: rpms: tzdata-2016d-1.el7.noarch setup-2.8.71-6.el7.noarch basesystem-10.0-7.el7.noarch nss-softokn-freebl-3.16.2.3-14.2.el7_2.x86_64 glibc-2.17-106.el7_2.6.x86_64 ... (content remove for space) yum-plugin-ovl-1.1.31-34.el7.noarch vim-minimal-7.4.160-1.el7.x86_64 rootfiles-8.1-11.el7.noarch Files associated with this scan are in /var/lib/atomic/example_plugin/2016-05-18-13-44-57-660748.
Notice how atomic scan will also cite where the output files from the scan are located.
If you wanted to use the example_plugin scanner and the scan_type of ‘get-os’, you simply need to pass both the ‘–scanner’ and ‘–scan_type’ command switches.
# sudo atomic scan --scanner example_plugin --scan_type get-os registry.access.redhat.com/rhel7:latest ubuntu docker run -it --rm -v /etc/localtime:/etc/localtime -v /run/atomic/2016-05-18-13-47-25-627346:/scanin -v /var/lib/atomic/example_plugin/2016-05-18-13-47-25-627346:/scanout:rw,Z -v /tmp/foobar:/foobar example_plugin python list_rpms.py get-os ubuntu (17b6a9e179d7cb9) The following results were found: os_release: stretch/sid registry.access.redhat.com/rhel7:latest (c453594215e4370) The following results were found: os_release: Red Hat Enterprise Linux Server release 7.2 (Maipo) Files associated with this scan are in /var/lib/atomic/example_plugin/2016-05-18-13-47-25-627346.
Join Red Hat Developers, a developer program for you to learn, share, and code faster – and get access to Red Hat software for your development. The developer program and software are both free! | https://developers.redhat.com/blog/2016/05/20/creating-a-custom-atomic-scan-plug-in/ | CC-MAIN-2017-04 | en | refinedweb |
CGI::AppToolkit::Template - Perl module to manipulate text templates
This module takes a raw complex data structure and a formatted text file and combines the two. This is useful for the generation of HTML, XML, or any other formatted text. The templating syntax is formatted for quick parsing (by human or machine) and to be usable in most GUI HTML editors without having to do a lot of backflips.
CGI::AppToolkit::Template was developed to fulfill several goals. It is similar to HTML::Template in concept and in style, but goes about it by very different means. It's goals are:
Shortcut to new(-set=>"template text") or new(-file=>"filename"). If the supplied string has line endings, it's assumed to be the template text, otherwise it's assumed to be a filename. This may be called as a method or as a subroutine. It will be imported into useing namespace when requested:
use CGI::AppToolkit::Template qw/template/;
NOTE: This module is loaded and this method is called by CGI::AppToolkit->template(), which should be used instead when using CGI::AppToolkit.
Example:
$t = template('template.html'); # OR $t = CGI::AppToolkit->template('template.html'); # or to read the file in from another source manually open FILE, 'template.html'; @lines = <FILE>; $t = template(\@lines); # must pass a ref # or $t = template(join('', @lines)); # or a single string
Create a new CGI::AppToolkit::Template object. The template() method cals this method for you, or you can call it directly.
NOTE: If you are using CGI::AppToolkit, then it is highly recommended that you use it's CGI::AppToolkit->template() method instead.
OPTIONS include: load (or file), set (or text or string), and cache. load and set are shorthand for the corresponding methods. cache, if non-zero, will tell the module to cache the templates loaded from file in a package-global variable. This is very useful when running under mod_perl, for example.
Example:
$t = CGI::AppToolkit::Template->new(-load => 'template.html'); # or to read the file in from another source manually open FILE, 'template.html'; @lines = <FILE>; $t = CGI::AppToolkit::Template->new(-text => \@lines); # must pass a ref # or $t = CGI::AppToolkit::Template->new(-text => join('', @lines)); # or a single string
Load a file into the template object. Called automatically be template() or CGI::AppToolkit->template().
Example:
$t = CGI::AppToolkit::Template->new(); $t->load('template.html');
Sets the template to the supplied TEXT. Called automatically be template() or CGI::AppToolkit->template().
Example:
$t = CGI::AppToolkit::Template->new(); open FILE, 'template.html'; @lines = <FILE>; $t->set(\@lines); # must pass a ref # or $t->set(join('', @lines)); # or a single string
Makes the template. output and print are synonyms.
Example:
$t->make({token => 'some text', names => [{name => 'Rob'}, {name => 'David'}]});
Checks to see if the template file has been modified and reloads it if necessary.
var loads a variable tagged with NAME from the template and returns it. vars returns a list of variable names that can be passed to var.
Example:
$star = $t->var('star'); @vars = $t->vars();
The template syntax is heirarchical and token based. Every tag has two forms: curly brace or HTML-like. All curly brace forms of tags begin with
{? and end with
?}. Angle brackets
<> may be used instead of curly braces
{}. For example, the following are all the same:
{? $name ?} <? $name ?> <token name="name"> <token name>
Use of HTML-like tags or curly brace tags with angle brackets might make the template difficult to use in some GUI HTML editors.
NOTE: Tokens may be escaped with a backslash '\' ... and becuase of this, backslashes will be lost. You must escape any backslashes you want to keep in your template.
Tokens may be nested to virtually any level. The two styles, curly bace and html-like, may be mixed at will, but human readability may suffer.
Line endings may be of any OS style: Mac, UN!X, or DOS.
A simple token. Replaced with the string value of a token provided with the specified name key.
If a filter() is specified, then the named
CGI::AppToolkit::Template::Filter subclass will be loaded and it's
filter() function will be called, with the token's value and any parameters specified passed to it. Please see
CGI::AppToolkit::Template::Filter for a list of provided filters.
NOTE: The template module's ability to parse the parameters are very rudimentary. It can only handle a comma delimited list of space-free words or single or double quoted strings. The string may have escaped quotes in them. The style of quote (single or double) makes no difference.
A decision if..else block. Checks token to be true, or compares it to the string text, the subtemplate template, or the number number, respectively, and if the test passes then the template code inside this token is appended to the output text. If there is an 'else' (
{?-- $token --?} or
<else>) and the test fails, the template code between the else and the end (
{?-- $token ?}) will be appended to the output text.
The comparison operators
<,
<=,
>,
>=, or
!= may be used, and you may also place an exclamation point (
!) before the
$token.
{?if $token<='a string' --?}...{?-- $token?} {?if !$token --?}...{?-- $token?} {?if !$token!='a string' --?}...{?-- $token?} <-- if token equals 'a string'
Comparison is done as a number if the value is not quoted, as a string if is single-quoted, and as a subtemplate if it is double-quoted. This is intended to be similar to the use of quotes in perl:
<option {?if ${?-- $state?} {?if $count > 0 --?}{?$count?}{?-- $count --?}<font color="red">$count</font>{?-- $count?}
An alternate syntax for the decision if..else block. Checks token to be true, or compares it to the value value, and if the test passes then the template code inside this token is appended to the output text. If there is an 'else' ({?-- $token --?} or <else>) and the test fails, the template code between the else and the end (
</iftoken>) will be appended to the output text.
If there is no
value="..." given, then the token is tested for perl 'trueness.' If the
comparison="..." is given as
not or
ne then the 'trueness' of the token is reversed.
The value, if given, is treated as described in the
as="...", or as a number if not specified. Unlike the curly brace form, the style of quoting does not matter. Possible
as values are
string,
template, or
number.
The token is compared to the value according to the value of
comparison="...". Possible values are
not (false),
ne (not equal),
eq (equal),
lt (less than),
le (less than or equal to),
gt (greater than), or
ge (greater than or equal to).
<iftoken name="thanks">Thanks for visiting!<else>You're not welcome here! Go away.</iftoken>
You can mix token stylas as you wish, to the dismay of anyone (or any GUI HTML app) trying to read the template:
<iftoken name="id" as="number" value="10" comparison="gt">Your id is greated than 10!{?-- $id --?}Your id <= 10.{?-- $id?} {?if $name --?}Hello '<token name='name'>'.<else>I don't know who you are!</iftoken> {?if $address --?}I know where you live!<else>I don't know your address{?-- $address?} <iftoken id><token id>{?-- $id?}
A repeat token. Repeats the contents of this token for each hashref contained in the arrayref provided with the name token and the results are appended to the output text. If the arrayref is empty and ther is an 'else' ({?-- $token --?} or <else>), then the template code between the else and the end (
</iftoken>) will be appended to the output text.
A repeat token, as above, except it repeats the line that it is on. The token can appear anywhere in the line.
<select name="tool"> <option value="{?$id?}" {?if $id="{?$selected-tool?}" --?}SELECTED{?-- $id?}>{?$name?}{?@tools?} </select>
In the above example, the
<option ...> line will be repeated for every tool of the 'tools' array. If the
id is the same as
{?$selected-tool?}, then SELECTED. So, in the code we call:
print CGI::AppToolkit->template('tools')->make( 'tools' => [ {'id' => 1, 'name' => 'Hammer'}, {'id' => 2, 'name' => 'Name'}, {'id' => 3, 'name' => 'Drill'}, {'id' => 4, 'name' => 'Saw'}, ], 'selected-tool' => 3 );
And, assuming the file is called '
tools.tmpl,' then the result should look something like:
<select name="tool"> <option value="1" >Hammer <option value="2" >Name <option value="3" SELECTED>Drill <option value="4" >Saw </select>
A variable token. This will not appear in the output text, but the contents (value) can be retrieved with the
var() and
vars() methods.
The data passed to the
make method corresponds to the tags in the template. Each token is a named key-value pair of a hashref. For example, the following code:
use CGI::AppToolkit; my $t = CGI::AppToolkit->template('example.tmpl'); print $t->make({'token' => 'This is my text!'});
Given the file
example.tmpl contains:
<html> <head><title>{?$token?}</title></head> <body> Some text: {?$token?} </body> </html>
Will print:
<html> <head><title>This is my text!</title></head> <body> Some text: This is my text! </body> </html>
Complex data structures can be represented as well:
use CGI::AppToolkit; my $t = CGI::AppToolkit->template('example2.tmpl'); print $t->make({ 'title' =>'All about tokens', 'tokens' => [ {'token' => 'This is my text!'}, {'token' => 'Text Too!'} ] });
Given the file
example.tmpl contains:
<html> <head><title>{?$title?}</title></head> <body> {?@tokens?}Some text: {?$token?} </body> </html>
Will print:
<html> <head><title>All about tokens</title></head> <body> Some text: This is my text! Some text: Text Too! </body> </html>
In this example I combine the use of <?$token?> style syntax and {?$token?} style syntax.
<html> <head> <title><?$title><title> <head> <body> <?$body?><br> Made by: <token name="who"> <table> <tr> <td> Name </td> <td> Options </td> </tr> {?@repeat --?} <tr> <td> <token name> </td> <td> <a href="index.cgi?edit-id={?$id?}">Edit<?a> </td> </tr> {?-- @repeat?} </table> </body> </head> </html> <?my $author--> <B><A HREF="mailto:rob@heavyhosting.net">Rob Giseburt</A></B> <?--$author>
#!/bin/perl use CGI; # see the perldoc for the CGI module use CGI::AppToolkit; #-- Standard CGI/CGI::AppToolkit stuff --# $cgi = CGI->new(); $kit = CGI::AppToolkit->new(); $kit->connect( ... ) || die $db->errstr; # load the data from a DB # returns an arrayref of hashrefs $repeat = $kit->data('item')->fetch(-all => 1); # Place the loaded data in a page-wide data structure $data = { title => 'This is an example of CGI::AppToolkit::Template at work.', body => 'Select edit from one of the options below:', repeat => $repeat }; # print the CGI header print $cgi->header(); #-- CGI::AppToolkit::Template stuff --# $template = $kit->template('example.tmpl'); # load the 'author' HTML from the template $author = $template->var('author'); # place it into the data $data->{'who'} = $author; # output the results of the data inserted into the template print $template->output($data);
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Please visit for complete documentation. | http://search.cpan.org/dist/CGI-AppToolkit/lib/CGI/AppToolkit/Template.pm | CC-MAIN-2017-04 | en | refinedweb |
Difference between revisions of "EclipseLink/UserGuide/JPA/Basic JPA Development/Mapping/Relationship Mappings/Collection Mappings/ManyToMany"
Revision as of 16:26, 30 March 2011
EclipseLink JPA
Key API
Examples
Many-to-Many Mappings #Example: @ManyToMany Annotation - Employee Class with Generics and Usage of @ManyToMany Annotation - Project Class with Generics examples show how to use this annotation to>}}
The #Example: @ManyToMany Annotation - Employee Class with Generics and Usage of @ManyToMany Annotation - Project Class with Generics examples 9.1.25 "JoinTable Annotation" in the JPA Specification.
Example: @ManyToMany Annotation - Project Class with Generics
@Entity public class Project implements Serializable { ... @ManyToMany(mappedBy="projects") public Set<Employee> getEmployees() { return employees; } ... }
For more information, see Section 9.1.26 "ManyToMany Annotation" in the JPA Specification.
For more information on EclipseLink direct mappings and relationship mappings, see Relational Mapping Types.
For more information on EclipseLink one-to-one mappings, see [[Introduction%20to%20Relational%20Mappings%20( | http://wiki.eclipse.org/index.php?title=EclipseLink/UserGuide/JPA/Basic_JPA_Development/Mapping/Relationship_Mappings/Collection_Mappings/ManyToMany&diff=244687&oldid=244451 | CC-MAIN-2017-04 | en | refinedweb |
Hello friends I need your help,
I have a simply appWeb (C# VS2008), it make some calls to different webServices, then each one the WS and the appWeb keep logging constantly your activity in a file text. This files contains the sessionID like the follow example:
[Time] | [SessionID] | [Source] | [Message]
9.00 | oha5rl45slcbyd55r3uicq55 | AppWeb | LoginUser
9.01 | oha5rl45slcbyd55r3uicq55 | AppWeb | OpenPage test.asmx
9.02 | oha5rl45slcbyd55r3uicq55 | AppWeb | ClosePage test.amx
9.02 | oha5rl45slcbyd55r3uicq55 | AppWeb | Invoke webService uno.asmx
9.03 | ddgodn455qdvcl45nfurbb45 | WebService uno | Create DB
9.03 | ddgodn455qdvcl45nfurbb45 | WebService dos | Receive call to method getObjects()
Aaron Skonnard
MSDN Magazine August 2002
Hello Frds,I'm having arraylist to add student info but i have to create unique id per record below is my class.Student objStudent=new Student(123,"Nj");statArray.Add(objStudent)public class Student{ private string name; private int RollId; public string Name { get { return name; } set { name = value; } } public int Id { get { return RollId; } set { RollId = value; } } public Employee(string name, int id) { { this.name=name; this.Id=id; } } access a webservice in asp.net programming. This webservice was done using socket layer programming and also i am having the ipaddress and port address. Did any one know about how to access webservice using socket layer and if know let me know how to access the webservice. Thank
I have created a site using the standard blogsite template. By default it contains a picture library named "Photos". I want to create a new folder in this library using the UpdateListItems Webservice. But I am getting an exception saying "The
file name you specified could not be used. It may be the name of an existing file or directory, or you may not have permission to access the file." But there is not file/folder with the name I am specifying. I am using the following query to create the
folder
<Query>
<Batch PreCalc="TRUE" OnError="Continue" RootFolder="/forexp/Photos">
<Method ID="1" Cmd="New">
<Field Name="ID">New</Field>
<Field Name="FSObjType">1</Field>
<Field Name="FileRef">/blogsite/Photos/TestFolder</Field>
</Method>
</Batch>
</Query>
Please note that I can create the folder using the web interface. I also created a new picture library in the same site and used the same above query to create the folder and I am able to create the folder.
Can
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/25845-how-do-i-keep-unique-sessionid-between-my.aspx | CC-MAIN-2017-04 | en | refinedweb |
Posted 29 Aug 2015
Link to this post
I want to set the paper type and ScaleFactor of worksheet but the code below for papertype and scalefactor does not compile. I would like some help with this.
WorksheetPageSetup pageSetup = wb.ActiveWorksheet.WorksheetPageSetup;
pageSetup.Margins = new PageMargins(25.00, 25.00, 25.00, 25.00);
pageSetup.PaperType = PaperType.Letter;
pageSetup.ScaleFactor = new Size(0.9, 0.9);
Posted 01 Sep 2015
Link to this post
Telerik.Windows.Documents.Spreadsheet.Model.Printing.WorksheetPageSetup pageSetup =
this
.radSpreadsheet.Workbook.ActiveWorksheet.WorksheetPageSetup;
pageSetup.Margins =
new
Telerik.Windows.Documents.Spreadsheet.Model.Printing.PageMargins(25.00, 25.00, 25.00, 25.00);
pageSetup.PaperType = Telerik.Windows.Documents.Model.PaperTypes.Letter;
pageSetup.ScaleFactor =
System.Windows.Size(0.9, 0.9);
Posted 01 Sep 2015
in reply to
Tanya
Link to this post
Thanks very much for the clarification on PaperTypes. I am still seeing an error with the Scaling Factor.
pageSetup.ScaleFactor = new System.Windows.Size(0.9, 0.9);
the type or namespace name "Size" does not exist in the namespace "System.Windows". I am not given an opportunity to 'resolve' and I tried adding a specific reference to System.Windows.
Posted 04 | http://www.telerik.com/forums/asp-net-spreadprocessing | CC-MAIN-2017-04 | en | refinedweb |
I'm trying to write a simple function that takes in a word and a stopword to see if they are the same words. It will return true if they are.
So far, by doing this,
function isStopWord(word, stopWords) {
return (stopWords.indexOf(word) !== -1);
}
console.log(isStopWord("cat", "cat"));
console.log(isStopWord("cat", "catnip");
console.log(isStopWord("catnip", "cat");
The
String.prototype.indexOf() method returns the position of the first occurrence of a specified value in a string. Thus
cat has one occurence inside
catnip thus the
index returned will be
!== 1
If you just want to check one word to another one. Use the following snippet.
function isStopWord(word, stopWords) { return stopWords === word; } console.log(isStopWord("cat", "cat")); console.log(isStopWord("catnip", "cat"));
If you want to see if a word is present inside an array of words, use the following snipper which is using
Array.prototype.indexOf()
function isStopWord(word, stopWords) { return stopWords.indexOf(word) !== -1; } console.log(isStopWord("cat", ["cat", "dog", "bird"])); console.log(isStopWord("catnip", ["cat", "dog", "bird"])); | https://codedump.io/share/M1qtsFmwC906/1/my-isstopword-function-thinks-quotcatquot-and-quotcatnipquot-are-the-same-words | CC-MAIN-2017-04 | en | refinedweb |
Download source code for 9 simple steps to run your first Azure Table Program
Introduction
What will we do in this article?
Step 1:- Ensure you have things at place
Step 2:- Create a web role project
Step 3:- Specify the connection string
Step 4:- Reference namespaces and create classes
Step 5:- Define partition and row key
Step 6:- Create your ‘datacontext’ class
Step 8:- Code your client
Step 9:- Run your application
Azure has provided 4 kinds of data storages blobs, tables, queues and SQL azure. In this section we will see how to insert a simple customer record with code and name property in Azure tables.In case you are complete fresher and like me you can download my two azure basic videos which explain what azure is all about Azure FAQ Part 1 :- Video1 Azure FAQ Part 2 :- Video2.Please feel free to download my free 500 question and answer eBook which covers .NET , ASP.NET , SQL Server , WCF , WPF , WWF , Silver light ,-requisite at place. You can read the below article to get the basic prerequisite .
The next step is to select the cloud service template, add the web role project and create your solution.
The ‘connectionstring’.
We also need to specify where the storage location is , so select the value and select ‘Use development storage’ as shown in the below figure. Development storage means your local PC currently where you Azure fabric is installed.
If you open the ‘ServiceConfiguration.cscfg’ file you can see the setting added to the file.
In order to do Azure storage operation we need to add reference to ‘System.Data.Services.Client’ dll.
Once the dlls are referred, let’s refer.
The next step is to create your data context class which will insert the customer entity in to.
In the same data context we have created an ‘AddCustomer’ method which takes in the customer entity object and call’s the ‘AddObject’ method of the data context to insert the customer entity data in to Azure tables. table’s structure. in to the table.
//Loop through the records to see if the customer entity is inserted in the tabless
foreach (clsCustomer obj in customerContext.Customers)
{
Response.Write(obj.CustomerCode + " " + obj.CustomerName + "<br>");
}
It’s time to enjoy your hard work, so run the application and enjoy your success.
You can get the source code from top of this article.
Latest Articles
Latest Articles from Questpond
Login to post response | http://www.dotnetfunda.com/articles/show/766/9-simple-steps-to-run-your-first-azure-table-program | CC-MAIN-2017-04 | en | refinedweb |
cb0t ares chat clientTextArea; import javax.swing.JScrollPane; import javax.swing.JFrame; public class DemoJMenu extends JFrame implements ActionListener, ItemListener { JTextArea output; JScrollPane scraplikasi
In MASTER FIRST YEAR in Montpellier-II university, we must programe a image compressor, we don't decided tu use JPEG or GIF, we created the .grol format. the algorithm is simple: detect rectangles in Y, Cr or Cb masks. (Y U V) and we store there rectangles in a binary file. we can open this binary file to open and read the images. today this project sent to our responsibles, the user interface is the principal goal. because with this, we can, like photoshpo or the gimp, activate the interactivecodol compression duteil grollemund image jean-marie julien master1 montpellier naitan reconnaissancedeforme
Welcome to PERMMPERMM is a Python-based Environment for Reaction Mechanisms/Mathematics. Simply put, PERMM helps you detangle the complex relationships in reaction mechanisms. Reaction mechanisms, as shown in our logo, often have complex and recursive relationships that when combined with 4-dimensional rate data... can get frizzy. PERMM starts with a reaction mechanism, applies n-dimensional rate data, and then provides an interface for chemical analysis. PERMM lets you: define chemical speciesair Apple chemical chemistry diagnostic Linux Mac mechanism ozone quality reactions Unix Windows
Feel free to ask questions! Either email, or I'm often in #libxbee on irc.freenode.net... News14 April 2012: A few people have asked for Windows support, so I got to work. I am pleased to announce that the Win32 port effort is nearly complete. Basic functionality has been tested, and currently only the 'xbee1' and 'xbee2' modes are supported. Have a look at the win32 branch. 08 March 2012: All versions of libxbee v3 since v3.0.5 (3c3b0204435e) will use API mode 1. This is a compile time option,api FreeBSD library libxbee linux Series1 Series2 Windows XBee XBee1 XBee2
.NET C# application which reads files containing signal sequences, searches for patterns on any signal aggregation level, and visualizes them using arc diagrams (1). Signals can be any data like single letters, words, n-grams (combination of words), DNA bases {A, G, T, C}, musical notes etc. For each kind of signal file type an adapter needs to be implemented. This application was inspired by (2) and is part of a series of articles for Germany´s largest .NET magazine: dotnetpro ( dotnetpro Pile Visualization Westphal
Tryouts v2.0 ALPHAPut your Ruby tests in comments. NOTE: Tryouts syntax changed since 0.x. The old version is still available in the 0.8-FINAL branch. Basic syntax ## A very simple test 1 + 1 #=> 2 ## The test description can spread ## across multiple lines. The same ## is true for test definitions. a = 'foo' b = 'bar' a + b #=> 'foobar' ## A test will pass when its return ## value equals the expectation. 'foo'.class #=> String ## The expectations are evaluated. 1 + 1 #=> 1 + 1 ## Here's an exambasketball benchmarking communication dreams dsl fun performance testing
A recommender made in Java that uses a partially ordered set of preferences rather than stars, thumbs ups, etc. Current Under-specified GoalsItems and users will be in many different multi-dimensional spaces. Each space will correspond to a property. The distance between an item and a person will be meaningless on its own, but the distance compared to other distances from the same user indicate relative strength of the property in the item as the user sees it. So if A is closer to me in the X spcollaborativefiltering DAG graphvisualization hypersphere informationvisualisation interactive mathematics non-euclideanspace poset predictor preference recommender valence web2.0
Gibbler - v0.7 ALPHAGit-like hashes and history for Ruby objects for Ruby 1.8, 1.9 and JRuby. Check out the screencast created by Alex Peuchert. Important Information Regarding Your DigestsDigest calculation changed in the 0.7 release for Proc objects. See "News" below for more info. Example 1 -- Basic Usage require 'gibbler' "kimmy".gibbler # => c8027100ecc54945ab15ddac529230e38b1ba6a1 :kimmy.gibbler # => 52be7494a602d85ff5d8a8ab4ffe7f1b171587df config = {} config.gibbler # => 4fdcadc66a38feb9cdata development sha1 sha256 testing
Open source products are scattered around the web. Please provide information
about the open source projects you own / you use.
Add Projects.
Tag Cloud >> | http://www.findbestopensource.com/product/cb0t | CC-MAIN-2017-04 | en | refinedweb |
Essentials
All Articles
What is LAMP?
Linux Commands
ONLamp Subjects
Linux
Apache
MySQL
Perl
PHP
Python
BSD
ONLamp Topics
App Development
Database
Programming
Sys Admin
Libtap is a library for testing C code. It implements the Test Anything Protocol, which is emerging from Perl's established test framework.
One of the ideas behind Extreme Programming (XP) is to "design for today, code for tomorrow." Rather than making your design cover all eventualities, you should write code that is simple to change should it become necessary.
Having a good regression test suite is a key part of this strategy. It lets you make modifications that change large parts of the internals with the confidence that you have not broken your API. A good test suite can also be a way to document how you intend people to use your software.
Having worked where people thought that writing tests was a waste of time, I can't tell you how much time I wasted trying to fix bugs that had emerged as a result of bugs being fixed or new features added. If we'd had a proper regression test suite, we could have found those immediately, and I would have lots of extra time to write new features. Taking the time to produce good tests (and actually running them) actually ends up saving a lot of time, not wasting it.
Perl distributions normally ship with a test suite written using Test::Simple, Test::More, or the older (and now best avoided) Test module. These modules contain functions to produce plain-text output according to the Test Anything Protocol (TAP) based on the success or failure of the tests. The output from a TAP test program might look something like this:
Test::Simple
Test::More
Test
1..4
ok 1 - the WHAM is overheating
ok 2 - the overheating is detected
not ok 3 - the WHAM is cooled
not ok 4 - Eddie is saved by the skin of his teeth
Related Reading
C in a Nutshell
By Peter Prinz, Tony Crawford
The 1..4 line indicates that the file expects to run four tests. This can help you detect a situation where your test script dies before it has run all the intended tests. The remaining lines consist of a test success flag, ok or not ok, and a test number, followed by the test's "name" or short description. Obviously, the second and third lines indicate a successful test, while the last two indicate test failures.
1..4
ok
not ok
Perl modules usually invoke the tests either by running the prove program or by invoking make test or ./Build test (depending on whether you're using ExtUtils::MakeMaker or Module::Build). All three approaches use the Test::Harness module to analyze the output from TAP tests. If all else fails, you can also run the tests directly and inspect the output manually.
prove
make test
./Build test
ExtUtils::MakeMaker
Module::Build
Test::Harness
If Test::Harness is given a list of tests programs to run, it will run each one individually and summarize the result. Tests can run in quiet and verbose modes. In the quiet mode, the harness prints only the name of the test script (or scripts) and a result summary. Verbose mode prints the test "name" for each individual test.
Besides Perl, helper libraries for producing TAP output are available for many languages including C, Javascript, and PHP (see the Links & Resources section).
Suppose that you want to write tests for the module Foo, which provides the mul(), mul_str(), and answer() functions. The first two perform multiplication of numbers and strings, while the third provides the answer to life, the universe, and everything. Here is an extremely simple Perl test script for this module:
Foo
mul()
mul_str()
answer()
use Test::More tests => 3;
use Foo;
ok(mul(2,3) == 6, '2 x 3 == 6');
is(mul_str('two', 'three'), 'six', 'expected: six');
ok(answer() == 42, 'got the answer to everything');
The tests => 3 part tells Test::More how many tests it intends to run (referred to as planning). Doing this allows the framework to detect whether you exit the test script without actually running all the tests. It is possible to write test scripts without planning, but many people consider this a bad habit.
tests => 3
Hey! Isn't this article supposed to be about testing C? It is. Libtap is a C implementation of the Test Anything Protocol. It is to C what Test::More is to Perl, though using it doesn't tie you into using Perl. However, for convenience you probably want to use the prove program to interpret the output of your tests.
Libtap implements a convenient way for your C and C++ programs to speak the TAP protocol. This allows you to easily declare how many tests you intend to run, skip tests (some apply only on specific operating systems, for example), and mark tests for unimplemented features as TODO. It also provides the convenient exit_status() function for indicating whether any of the tests failed through the program's return code.
exit_status()
How would you would write the test for the Foo module in C, using libtap? The #include <foo.h> line is analogous to the use Foo; of the Perl version. However, as this is C, you also need to link with the libfoo library (assuming this implements the functions declared in foo.h).
#include <foo.h>
use Foo;
libfoo
For this test, I will show the full source of the test program, including any #include lines; I will show only shorter fragments below. Notice again the difference in the number passed to the plan_tests() function and the number of actual tests that actually run:
#include
plan_tests()
#include <tap.h>
#include <string.h>
#include <foo.h>
int main(void) {
plan_tests(3);
ok1(mul(2, 3) == 6);
ok(!strcmp(mul_str("two", "three"), "six"), "expected: 6");
ok(answer() == 42, "got the answer to everything");
return exit_status();
}
The exit_status() function returns 0 if the correct number of tests ran and if they all succeeded; it returns nonzero otherwise. In the Perl version the test framework makes magic happen behind the scenes so that you don't have to twiddle the exit status by hand.
One notable difference between the Perl version and the C version is the ok1() macro, a wrapper around the ok() call. Instead of having to call ok() with a test condition as the first parameter and diagnostic as the second (and any subsequent) parameter, this macro stringifies its argument and uses that for the diagnostic message. This can be very convenient for simple tests.
ok1()
ok()
Both the Perl and C tests above, when run, print something along the lines of:
1..3
not ok 1 - mul(2, 3) == 6
# Failed test (basic.c:main() at line 12)
ok 2 - expected: 6
ok 3 - got the answer to everything
The line starting with # is a diagnostic message; libtap prints these occasionally to help you find which test is failing. In this case, it identifies the line in the test file that contained the failing test.
#
Pages: 1, 2, 3
Next Page
Sponsored by:
© 2017, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.linuxdevcenter.com/pub/a/onlamp/2006/01/19/libtap.html | CC-MAIN-2017-04 | en | refinedweb |
Although XML documents are text only and thus can easily be stored in files, they are so-called semi-structured data, which need to be accessed via the structure. (Semi-structured data have been intensively studied by Abiteboul et al. (Abiteboul et al. 2000)). It is therefore worthwhile to draw upon database technologies for their storage and retrieval. In doing so, the XML document structure has to be mapped to the database schema, which is required by every database management system. The structure of XML documents does not correspond to any schema model of the widely used database approaches and therefore has led, on the one hand, to extensive studies of the necessary transformation and, on the other hand, to the implementation of so-called native XML databases.
The storing of XML documents in relational databases means describing hierarchical, tree-type structures with relations. In the object-oriented world, the DOM builds the basis for these structures. But it is just the relational database approach that poses the question whether we should build on an object model. We will therefore point to two alternative data models for XML documents, the so-called edge approach applied by D. Florescu and D. Kossman (Florescu and Kossmann 1999b) and XRel developed by M. Yoshikawa et al. (Yoshikawa et al. 2001).
By using the DOM, these tree-type structures have already been transformed into trees by the implementation classes of the DOM interfaces. Two associations form the tree: the childNodes and the parentNode association. The childNodes association is multivalued, which leads to a one-to-many relationship between nodes. We have to reverse this relationship to meet the relational database constraint that does not allow composed attribute values. But the parentNode association already defines the reverse relationship.
The value of the parentNode field of a table entry identifies the superordinate element that is defined by its own table entry. The elements are, however, no longer unique as soon as they are removed from the context of the XML document. Therefore, every element receives a unique identification number that is also used as the key of its table entry. The identification numbers also allow us to store the sequence of the subordinate elements. For example, the identification number of firstname is smaller than the identification number of lastname. Table 19.1 shows the unique element table for the XML document of our example in Listing 19.2. Personnel is the topmost element with ID 1; it has no parent. Professor has ID 2 and is contained in personnel, which is its parent with ID 1. Name is contained in professor, firstname and lastname are contained in name, course is contained in professor, and title and description are contained in course with ID 6.
The actual contents of an XML document refer from the CDATASection table to entries in the element table. In this way a link is established between the CDATASection table and the element table that we can create using a foreign key in the field parentNode. Moreover, each row in the CDATASection table possesses an identification number as a key and a value stored in the field data. Table 19.2 shows the text contents of the XML document in Listing 19.2. For example, Sissi, the first name of a professor, points to entry firstname with ID 4 in the element table.
The attribute table contains the specific fields of the Attr node?value and specified, an identification number for the sequence and, in the field parentNode, the identification number of the element to which it is defined as the foreign key. Table 19.3 shows the entries for the example in Listing 19.2. PersonnelNo with value 0802 belongs to the entry professor with ID 2 in the element table.
In addition to the attribute values, the actual contents of the XML document are stored in the data fields of the records in the CDATASection table. The values of this field can, however, vary randomly in size, from short strings to page-long texts. A differentiation can take place by means of different tables: Short strings are stored in a string table; long texts in a text table. Both tables then replace the CDATASection table. Tables 19.4 and 19.5 show this once again for the example in Listing 19.2.
If we want to extract a text from the database, either we need the special support of the database manufacturer who, as in the case of Oracle, has complemented its database with the SQL construct Connect-By for the extraction of hierarchical structures. Or, starting at the root, we can use an SQL instruction for every element, similar to a recursive descent into the DOM tree. A construct like Connect-By is not offered by all manufacturers of relational databases. The second solution requires database access for every subelement. The typed implementation of the DOM could be an improvement.
The typed implementation of the DOM defines a class for every element and stores the class instances in a table of the same name. The nesting of elements, which is realized by composition, also has to take place by means of an identification number. These form the key for the entries. They must, however, be unique throughout all special element tables. The values of the parentNode fields are no longer foreign keys, as they would have to refer to the same table. However, two entries of a specific element table, as elements in an XML document, can be included in two different superordinate elements.
The elements of the example document in Listing 19.2 require the definition of eight tables, as shown in Table 19.6. The attribute and CDATASection tables and the string and text tables remain the same as with the nontyped DOM approach.
It is obvious that for a highly structured XML document many tables with few entries result. Extracting an XML document takes place by joining all element tables to a single table. This must be expressed by an SQL query. Beginning with the table of the root element, it selects the value for tagname in two tables at a time when the value of the ID field of the first table is identical to the value of the parentNode field of the second table.
The creation of this Select statement requires knowledge of the document structure. The structure is, however, reflected only in the names of the tables. As the tables are not linked to each other via the foreign key, the nesting of the elements is also not expressed in the database schema. The advantages of the typing?the validation of the document using the database and the metadata for the structure of the document?are not present with relational databases. But the advantage remains that parts of the documents can be accessed via element names.
Object-oriented databases are the natural storage technology for the DOM. They store DOM trees without having to map the objects and their relations to other data concepts. Because they are based on a schema, as relational database systems are, the implementation variants of the DOM are reflected in the schema and have to be weighed against each other.
With the typed implementation, the specialized element classes complement the schema, and the names of the elements and their nested data are stored as metadata in the database. This can be advantageous when an application wants to validate a document using the database schema or wants to obtain information about the structure of the documents. Accessing subelements of an element also takes place via named references directly from the element and is therefore fast. With the nontyped implementation, subelements are instances in the childNodes set and have to be searched for. The class extents in object-oriented databases also bring an advantage in speed. They collect all references to the instances of a class and thus offer direct access to them. Using these, all course elements, for example, can be extracted from an XML document.
The typed compositions between the classes can, however, also be a great hindrance. If we want to extract the complete XML document again, which corresponds to running through the complete DOM tree, we do not take the typed access path but have to visit the nontyped nodes of the childNodes sets.
Modifications of the DTD also have a disadvantageous effect. Object-oriented database systems do indeed allow a dynamic customization of the schema. However, as this represents the document structure, a modification can lead to invalid documents that follow the original DTD.
These disadvantages speak in favor of the nontyped implementation of the DOM that optimally supports the running through of a DOM tree to the complete output of an XML document. Quick access to the child nodes of an element node can however be achieved by an indexing of the node set. Object-oriented database systems provide a means of indexing. In this way, indices to the attribute nodeName and to the ordering number of the child nodes can compensate for the speed differences of the different implementations.
To summarize, there is the attempt to represent hierarchical data by mapping XML documents on the schema of the various database models. This fact suggests the examination of a further type of database whose data can be organized hierarchically: the directory server.
Although hardly discussed, directory servers could be another interesting database approach for storing XML documents. Usually, they store huge quantities of simply structured data like personnel or inventory data of a company and allow very fast read access but significantly worse write access of the data. Another important feature is the existence of a tree?the so-called directory information tree?as a means of organizing the data.
Directory servers are widespread as address databases that are accessed by using the Lightweight Directory Access Protocol (LDAP), a simple variant of the X.500 ISO standard (Howes et al. 1995). Entries in an LDAP directory contain information about objects such as companies, departments, resources, and people in a company. They are ordered hierarchically, as people normally work in departments of companies. Entries consist of attributes and their values or value sets.
Although directory servers were originally developed for providing central address books, which is reflected in the attribute names?"o" for "organization", "ou" for "organizational unit", "sn" for "surname"?they can include entries of any object classes (i.e., with self-defined attribute types).
An entry for a professor of a department is presented in LDIF (Lightweight Directory Interchange Format), a text format for the exchange of directory data, in Listing 19.3.
dn: personnelNo=1012, ou=FBWI, o=fh-karlsruhe.de objectclass: professor objectclass: employee objectclass: person objectclass: top cn: Cosima Schmauch givenname: Cosima sn: Schmauch personnelNo: 1012 uid: scco0001 telephone: 2960 roomNo: K111 courses: courseNo=wi2034, ou=FBWI, o=fh-karlsruhe.de courses: courseNo=wi2042, ou=FBWI, o=fh-karlsruhe.de
Every entry in the directory server is given a so-called distinguished name (dn) that uniquely identifies it. The distinguished name is derived from a defined relative distinguished name (rdn) consisting of attribute value pairs and extensions of namespaces. The namespaces are ordered hierarchically and are normally represented as trees?directory information trees. Figure 19.3 shows a section of the directory information tree at the Karlsruhe University of Applied Sciences.
Just as with object-oriented databases, we define the directory server schema using classes. Relationships between directory server classes are, however, established using distinguished names. An example of this is the professor entry, which is linked to several course entries. A link between directory server entries is not typed?it is a string or a set of strings, which are marked as distinguished names.
The typed DOM implementation can therefore affect only the names of the directory server classes but not the relationship between the classes. The directory server schema, similar to an implementation using relational databases, cannot reflect the document structure. We have selected therefore the nontyped DOM implementation as the basis for the directory server schema.
For the interfaces of the DOM, 13 classes are defined for their implementation?there was no implementation of the abstract class for the interface CharacterData. Figure 19.4 shows these classes. The class xmlnode implements the interface Node and is the base class for all remaining classes. It makes the attributes XMLname, XMLtype, and XMLvalue for storing the document-specific information available to them. The remaining classes add attributes, if required.
We are left to decide how the parent-child relationships of the DOM tree are implemented. We could use distinguished names. The childNodes relationship between elements can be realized through a corresponding multivalued attribute at the class xmlnode. Because we already have the LDAP directory information tree, we can also map the DOM tree to it. We do not have to implement the tree using relations, as it is necessary with object-oriented databases via the childnodes association. We can rely on the directory information tree that is built by the form of the distinguished names. Therefore the base class xmlnode is given an additional attribute XMLid that contains a number and thus retains the order of the subelements. This id at the same time will form the relative distinguished name of the entry.
An XML document is now mapped to the directory information tree so that?modeled on the DOM tree?the element entries form the inner nodes, while all others become leaves. Figure 19.5 shows the directory information tree for the XML document from the example document of Listing 19.2. Every entry in the directory server is positioned in the directory information tree. It consists of the attribute values that are defined by its class. The personnel element is entered in the tree under the nodes with the distinguished name ou=xml. It has the following attribute values:
XMLname = personnel, XMLvalue = null, XMLtype = Element, XMLid = 1
Thus the entry is given the distinguished name XMLid=1, ou=xml.
The course element, which is subsumed under the professor element as the third element after name and telephone, is given the value 3 as its XMLid and therefore the distinguished name
XMLid=3,XMLid=1,XMLid=1,ou=xml.
The attribute personnelNo obtains its name as a value of XMLid. It is subsumed under the professor element and therefore has the distinguished name
XMLid=personnelNo,XMLid=1,XMLid=1,ou=xml.
The ordering number given to every entry by the attribute XMLid contributes to its distinguished name. This allows it to retain the sequence of the elements, comments, and text parts. The value for the XMLid is assigned, and from its position in the tree and the XMLid, a new distinguished name is formed.
Because of the nontyped DOM implementation, a parser must validate the XML document, create the DOM tree, and allow access to the root of the DOM tree that represents the document. Starting at the root, the tree is then traversed completely. While doing so, the type is determined for every node, the corresponding LDAP entry is created with a distinguished name, the rest of the attribute values of the entry are set, and the entry is stored into the directory server. Then its child nodes are processed.
Finally, we should have a look at native XML databases, which are specialized to store and process XML documents. The database system we use has to know the DTD. From the DTD, it creates a database schema. Using a Java API, the XML document has to be parsed with the integrated DOM parser, which returns a reference to the root object. This root object will then be inserted into the database. | http://etutorials.org/XML/xml+data+management/Part+V+Performance+and+Benchmarks/Chapter+19.+A+Comparison+of+Database+Approaches+for+Storing+XML+Documents/19.3+Databases+for+Storing+XML+Documents/ | CC-MAIN-2017-04 | en | refinedweb |
Upgrading to Relay Modern or Apollo
I wrote previously about how the New York Times is basing its re-platform/redesign on React and GraphQL, and I singled out Relay as our choice of framework for connecting the two.
React, Relay and GraphQL: Under the Hood of the Times Website Redesign
The New York Times website is changing, and the technology we use to run it is changing too.
open.nytimes.com
All of those things remain true, but in an ecosystem that moves extremely fast (NPM/Node) and creates epic amounts of vaporware and abandonware, we must constantly reevaluate all of our decisions and assumptions to make sure we are building a solid foundation for the future.
Relay has now been dubbed Relay Classic, and the new cool kid is Relay Modern. Migrating from Classic to Modern is not a walk in the park, which I’ll talk about below. We need to know where these platforms are heading, and we thought it would be a good time to compare Relay Modern and Apollo to weigh our options for the future.
Disclaimer
This post is not going to prefer one project or the other, and it is not going to foreshadow any decision on our side. We are in no danger of moving too fast in either direction. In fact, Relay Modern has moved closer to Apollo in a lot of ways, so any pre-requisite work we do to transition our Classic codebase will move us closer to both of them.
Assumptions
- GraphQL is the future
- We are going to use React to build our UIs
- The app we build has to be universal/isomorphic:
fully-rendered on the server and the client
- The app needs capabilities like client-only queries to allow the server response to be cached in a CDN (server-rendering in React is …. slow) and not rely on cookies
Proof of Concept
The NYT contains a lot of components, all with GraphQL fragments of varying degrees of complexity. Using the NYT codebase to try-out a new framework is not always practical. As such, I wrote an end-to-end product that is much simpler. It is also an open source project I am working on: a headless WordPress “theme” built on GraphQL, with versions for Relay, Apollo, and an app written in React Native. The motivation: I want to use WordPress as my CMS, but I do not like writing UIs in PHP. I do like writing UIs in React, and I think Relay and Apollo are cool.
The GraphQL server reads its data from the WordPress REST API and exposes a “product schema” that describes the data a WordPress theme probably needs to build a site that has parity with a theme written in PHP using WordPress idioms. The schema could actually be resolved by a backend that is not WordPress, which is the beauty of GraphQL: I describe my product, the data resolution and implementation details are opaque.
The WordPress Rest API does not expose enough data by default to build a full theme, so I extended the API with my own endpoints. They are enabled by activating my WordPress GraphQL Middleware plugin in a WordPress install. The plugin is only available on GitHub right now, as it is still in active development.
The GraphQL server is mostly stable. It uses the reference implementation of GraphQL from Facebook, graphql-js. I also leaned on Jest for unit-testing, and DataLoader (a game-changer) for orchestrating batch requests to the REST API.
The first implementation I did was on Relay Modern: relay-wordpress. For our recent Maker Week (work on whatever you want) at the Times, I wrote a React Native version of the same app.
To play devil’s advocate, I also tried out the same app in Apollo: apollo-wordpress. I went in with a bias towards Relay and came away with a completely different perspective.
It is possible that neither is the right framework, or they are both equally as good. I think it all depends on your evaluation criteria. I will evaluate both of them below. My notes on Relay Classic are based on work I have done at the Times. The examples in Relay Modern and Apollo are drawn from my open source projects.
Picking a Router
When using Relay with a Node server like Express, you can typically pipe all routes to the same resolver and have your app’s internal router control the render path. The router might also take props that allow you to specify GraphQL queries associated with the matched path. Example:
// routes/index.js<Route component={App}>
<Route path="/:id" component={Story} queries={{
story: () => Relay.QL`query { article(id: $id) }`,
}} />
<Route path="/" component={Home} queries={{
home: () => Relay.QL`query { home }`,
}} />
</Route>
It is essential that the router consumes a static route config that can be read in a repeatable and predictable way. Without knowing the route config, it is not currently possible to extract all queries for a particular route to:
1) request data on the server
2) rehydrate the client with the same data
This avoids making GraphQL queries on the server and then immediately requesting the data again when the page loads on the client.
Relay Classic:
We need:
- React Router v3 (RRv3), which is used to compose nested routes using React components (see above) that can be read as a route “config”.
- Isomorphic React Router Relay (IRRR), which fetches data based on the props extracted by
match()(from RRv3) on the server — this module requires: Isomorphic React Relay (IRR) and React Router Relay.
On the client,
<Router> from IRRR wraps
<Route>s from RRv3. Because we are using IRRR, we know how to read the props on
<Route> that specify the Relay queries associated with a given path. IRR supports rehydration, but we had some problems with it (probably self-inflicted).
Relay Modern
We use Found Relay — there is currently no alternative when you need isomorphic rendering. Found Relay is the
<Router> and
Route. There is no mix and match. Found Relay has a naive approach to client rehydration that is less than ideal.
IRRR and IRR have not been updated to work with Relay Modern, and there appear to be no plans for them to do so. Because those libraries are pre-reqs for RRv3 to work with Relay, RRv3 is not an option. React Router v4 does not have any knowledge of Relay, it does not support middleware (RRv3 does), and
<Route>s can be rendered anywhere in the component tree. Even if we could do a render pass to extract all of the possible queries from the routes, they still might not be comprehendible — it is entirely possible that nested component query variables are constructed dynamically at runtime.
Apollo
We would use… any router we want. Queries are not configured on routes. Apollo out-of-the-box does a render pass that can comprehend possible queries. Rehydration works out of the box. Apollo uses Redux under the hood, so this whole process is pretty elegant.
Takeaways: Apollo takes away a lot of drama here. Found Relay works, but does not have a huge community behind it. The maintainer, Jimmy Jia, has been really helpful though and is always open to talk about these technical challenges. React Router v4 is possibly an anti-pattern for isomorphic apps, although it works just fine on React Native.
Environment
The “environment” is the layer that actually encapsulates network fetching, the store, and caching.
Relay Classic
There is a
Relay.DefaultNetworkLayer that can be configured to talk to GraphQL using
fbjs/lib/fetch, or you can roll your own (this part is strange in Classic). This initialization only happens on the client, the server uses IRRR’s imperative API.
Relay Modern
You create an instance of
Environment that contains instances of other pieces, including an instance of
Network, which you pass your Fetch implementation to. The
fetch() implementation can be really simple, basically the version you get from “Hello, World” will work for querying your GraphQL server. relay-runtime also exposes a
QueryResponseCache object that accepts TTLs that you can use in your fetch implementation. The caching is nice for React Native apps, where constant data-fetching is less necessary. Invalidation can become too complicated otherwise, unless you are just using low TTLs for performance. Persisted queries in Relay are currently a roll-your-own-imlementation (I did it here). The logic to send an ID instead of the query text lives in the fetcher.
Example using Found Relay and its
Resolver:
export function createResolver(fetcher) {
const environment = new Environment({
network: Network.create((...args) => fetcher.fetch(...args)),
store: new Store(new RecordSource()),
});
return new Resolver(environment);
}
Example using React Native:
const source = new RecordSource();
const store = new Store(source);const network = Network.create(fetchQuery);const environment = new Environment({
network,
store,
});export default environment;
Apollo
This logic is tucked away in
ApolloClient on the server and the client.
ApolloClient is also where persisted queries can be configured. Apollo doesn’t need your GraphQL schema, but this is also where you specify the JSON config for your
fragmentMatcher, which lists all of your
Union and
Interface types. Without this, Apollo will throw a bunch of warnings that it is using heuristics to determine types at runtime. Example:
// server
const client = new ApolloClient({
ssrMode: true,
networkInterface: new PersistedQueryNetworkInterface({
queryMap,
uri,
}),
fragmentMatcher,
});// client
const client = new ApolloClient({
initialState: window.__APOLLO_STATE__,
networkInterface: new PersistedQueryNetworkInterface({
queryMap,
uri,
}),
fragmentMatcher,
});
Takeaways: Apollo has an elegant solution. Relay Modern works out of the box with React Native, needs some 3rd-party love on the web.
Fragments
GraphQL fragments are typically co-located with React components. Higher-Order Components (HOCs) (via ES7 decorators or an imperative API) glue the components together with their “fragment container.”
Relay Classic
Fragments are atomic units. They are lazily constructed via a thunk that returns the result of the
Relay.QL tagged template literal. Unless I am mistaken, the Relay Classic Babel plugin intercedes here and turns the result into an AST at runtime, a
GraphQLTaggedNode. Nested Components fragments can be included via
Component.getFragment(‘node’) calls, and are subject to Data Masking: which means Relay will hide the portion of the query from all components except the one that required the specific slice of data represented by the
getFragment call. Here’s a concrete example:
fragments: {
card: () => Relay.QL`
fragment Card_card on Asset {
__typename
... on CardInterface {
card_type
promo_media_emphasis
news_status
promotional_media {
__typename
... on AssetInterface {
promotional_media {
__typename
}
}
}
}
... on AssetInterface {
last_modified_timestamp
last_major_modification_timestamp
}
${CardMeta.getFragment('card')}
${HeadlineCardContent.getFragment('card')}
${VideoHeadlineCardContent.getFragment('card')}
}
`,
},
WARNING: these fragments, in addition to allowing the inclusion of other component’s fragments (by design), also allow interpolation of string literals, and can be informed by variables local to the HOC itself via
initialVariables and
prepareParams, props that also lived on
Relay.Route, back when Relay was a routing framework at Facebook.
Because these fragments can take local variables, the result of this fragment can not be known statically, as
prepareParams can construct literally whatever it wants. Even more dangerous: the inclusion of fragments from another container can also be informed by these same variables. Example:
{
initialVariables: {
crop: ‘jumbo’,
},
prepareParams: () => { // set crop based on runtime },
fragments: {
media: ({ crop }) => Relay.QL`
fragment on Media {
${NestedComponent.getFragment(‘media’, { crop })}
}
`,
},
}
These types of fragments cannot be statically analyzed, and cause cascading complexity as they are passed down.
Relay Modern
The tagged template literal around fragments is
graphql. Fragments are strings with no interpolation.
${Component.getFragment(‘node’)} becomes, simply,
...Component_node. The actual component whose fragments you are spreading does even need to be in scope, so it is possible you can import fewer modules. A caveat: all fragments now need to be named.
fragment on Media { ... } needs to be:
fragment Component_media on Media { ... }.
The naming convention is not arbitrary. It is:
{FileName}_{prop}. If you use
index.js as the name of your file, the name will be the name of the parent folder.
Why do all of this?
All fragments and queries are known at build time this way, and can be statically analyzed when your app builds, so no runtime parsing has to take place.
Relay Modern has a compiler called relay-compiler that introduces a build step to your app. The build step generates artifacts. The Babel plugin for Relay Modern (not the same plugin as Classic!) causes your
graphql tagged template calls to lazy-require the artifact files that look like:
Component/__generated__/Component_media.graphql.js.
Apollo
The tagged template literal for fragments is
gql. The default pattern for specifying fragments for a component is via a static member on the component class called
fragments. Apollo uses the
...Component_media syntax for spreading fragments of other components, but it also requires you to add the actual fragment to the bottom via string interpolation. Example:
fragment Component_media on Media {
id
...Image_image
}
${Image.fragments.image}
Apollo does not enforce Data Masking, and doesn’t require co-locating your fragments, so fragment “snippets” (an anti-pattern and probably dangerous as pertains to compatibility) are more share-able by default. You can actually place the fragments in their own
.graphql files and use the graphl-tag module to load them. Instead of including the fragments from other components, you use the
#import syntax exposed by the Webpack loader:
// Component_media.graphql#import "./Image_image.graphql"fragment Component_media on Media {
id
...Image_image
}
An advantage to this approach is syntax-highlighting in your editor.
Takeaways: Fragments in Relay Modern are much cleaner and enable static queries. The ergonomics of fragments in Apollo are different, and possibly better when fragments are placed in separate files. However, this breaks the idea of co-locating fragments with React components. So, this may be a religious debate. What is most obvious to me: dynamic fragments in Relay Classic have to go.
Queries
Relay Classic
The queries for a given path live in the route config, and all of the fragments in the component tree below specify which parts they are interested in. A implicit component hierarchy is necessary, and is strictly enforced by Data Masking. You never specify the whole query in one place, you simply say:
query { asset(id: $id) } and the rest is mostly magic.
Relay Modern
Since Found Relay is our only option for routing right now, you specify the query fragment on the route, via a
graphql tagged template, or by including the module that exposes the query. I actually suggest that you place all top-level queries in a folder called
queries. The query can be statically analyzed at build time:
// queries/Story_Query.jsimport { graphql } from 'react-relay';export default graphql`
query Story_Query($id: ID!) {
...Story_whatever
}
`;
Data Masking is still enforced, but this query is represented in the AST as an entire query. The build artifact exports what is called a
ConcreteBatch, which contains a node called
text, which contains the plaintext GraphQL query. Because the text for the query is known, and the process for retrieving it is nominal, an ID can be assigned to the text representation, and sent in place of the query, so long as the GraphQL server knows this is happening and is able to turn the ID into the full query text.
Both servers, Relay and GraphQL, have to be set up to comprehend this exchange.
Instead of your fragments containing dynamic fragments, your queries take top-level variables. This may require rethinking portions of your app that wanted to remain dynamic at runtime. You may also have to request more data than you did in Classic and filter data at runtime.
In our Classic example above that had a fragment variable called
crop, we need to transition that fragment to receive a variable from the query itself. Here’s what it might look like after transition:
graphql`
fragment Component_node on Media {
crop(size: $crop) {
url
width
height
}
}
`
It might be hard or weird to transition some of your components in this way, but the payoff is use of a leaner Relay core and no expensive runtime query parsing.
Relay Modern exposes a Component called
QueryRenderer.
<QueryRenderer> can be dropped anywhere in your component tree and given your Relay
Environment instance, a query, and variables, make a request to your GraphQL server that is passed to a function exposed on the render prop. From the Relay Modern docs:
import { QueryRenderer, graphql } from 'react-relay';
// Render this somewhere with React:
<QueryRenderer
environment={environment}
query={graphql`
query ExampleQuery($pageID: ID!) {
page(id: $pageID) {
name
}
}
`}
variables={{
pageID: '110798995619330',
}}
render={({error, props}) => {
if (error) {
return <div>{error.message}</div>;
} else if (props) {
return <div>{props.page.name} is great!</div>;
}
return <div>Loading</div>;
}}
/>
In React Native, this is great, and makes your choice of routing solution less critical. There is no server/client scenario in native apps, which is why Relay Modern “just works” there.
On the web, and specifically for universal rendering, this is a problem. Because these queries can live somewhere other than your route config, it can become impossible to know all of your queries ahead of time to request data properly for server rendering your app. This is the task IRR did in Classic. As such,
QueryRenderer can only be used for client-only queries.
Client-only queries are not a first-class citizen in universal Relay apps, so
QueryRenderer should NOT be used for queries tied to routes, Found Relay will handle those queries.
QueryRenderer is only to be used for “extra” data that might be a result of a user interaction or page scroll. Found Relay attempts to implement the insides of
QueryRenderer when it resolves data on the server.
Apollo
Queries can live anywhere via the
graphql HOC, typically used as an ES7 decorator. On the server, Apollo does an initial render pass that only extracts data. Example:
import { graphql } from 'react-apollo';
import PageQuery from 'graphql/Page_Query.graphql';@graphql(PageQuery, {
options: ({ params: { slug } }) => ({
variables: {
slug,
},
}),
})
export default class Page extends Component { ... }
Takeaways: all three solutions query data in wildly different ways. Something to note: the trend is towards one Query per route. This can easily be accomplished if you make a top-level GraphQL type called
Viewer and specify all possible queries as fields below it.
Mutations
Relay Classic
Mutations are constructed via configuration, some of which uses Flux idioms. They are confusing and weird. The “store” is updated via a Flux config. The whole process is very black hole. Rather than dive in here, just peruse the docs.
Relay Modern
Mutations use an imperative API that is very similar to Apollo’s. Updating the store is mostly a manual process that requires interacting with the
ConnectionHandler API. Mutations still require a config object, but one that is (slightly) less confusing. Optimistic updates use the same data you passed to create the mutation. Mutations require your instance of
Environment. The documentation for interacting with the store is basically non-existent. Changes to the store cause UI to re-render. Editor’s Note: I just checked the docs, and they have now included references in Relay Modern to the Flux-style configs. Good Luck. I’m not even sure that looking at code makes this easier to understand, but here are some mutations I added to perform CRUD on a post’s comments.
Apollo
Uses an imperative API. Interacting with the store is kinda strange — this is not a knock. Changes are made manually and then committed back. Although they claim that data is more consistent in their store, I can see how developer error can mess this up. Mutations are specified using the
graphql HOC (like queries), and I like how this encourages the creation of atomic components to handle each mutation. For instance, rather than having a button that calls a method on its parent component triggering a mutation on click, the button itself can become a component that is wrapped in a HOC. The HOC provides the wrapped component with a
mutate prop. Here is my Delete Comment button.
Takeaways: Relay Modern moves closer to Apollo, but seems to not be able to quit the confusing Flux configs. Apollo has a pretty nice solution but a different Store implementation. Reading and writing are really different across Relay and Apollo. Optimistic UI updates are pretty similar. I also like how Apollo allows you to specify
refetchQueries when calling a mutation. That way, your mutation can return a small amount of data needed for the UI update, and the refetch queries can keep your store up to date.
Refetching
There may be times where you want to “refetch” a route’s top-level query with new query variables. For me, the most obvious reason is a feature like Search. In a single page app, I want to do as much as possible without a full page reload. Refetch behavior makes this possible.
Relay Classic
If you want to “refetch” the query on the client based on new input or client-only flags being set, you call
this.props.relay.setVariables({ ... }) in a component. This will either affect the variables of the fragment they are called in, or cascade down the chain.
setVariables() also triggers a store lookup and is when the cryptic “node” query is triggered. I am actually not sure if calling this function in a nested component can affect ancestor component variables, and I really don’t want to try to find out.
Relay Modern
There are different types of “containers” — in Classic,
Relay.createContainer() is the HOC. In Modern, Fragment Containers are created with
createFragmentContainer(), and Refetch Containers are created via
createRefetchContainer(). Creating a Refetch Container enables the following method:
this.props.relay.refetch({ ... }). When creating the Refetch Container, you specify what query you are refetching. Example, I created my own decorator to call
createRefetchContainer:
// decorators/RefetchContainer.jsimport { createRefetchContainer } from 'react-relay'; export default (spec, refetch) => component =>
createRefetchContainer(component, spec, refetch);// routes/Search.js@RefetchContainer(graphql`
fragment Search_viewer on Viewer {
posts(search: $search, first: $count) {
edges {
node {
...Post_post
}
cursor
}
pageInfo {
startCursor
endCursor
hasPreviousPage
hasNextPage
}
}
}
`,
SearchQuery)
export default class Search extends Component { ... }
I do not remember what happens if you don’t specify the query. Maybe it refetches the same query? The Relay docs market this HOC as useful for “Load More”, but as you’ll see below, there is also a Pagination Container, which is also concerned with Loading More. I prefer to think of the Refetch Container as a place to perform inline updates, or to refetch the query with client-only data.
Apollo
You always have access to a prop called
data in wrapped components, and
data has a method called
refetch() that works like Relay Modern. Example:
doRefetch = debounce(() => {
this.props.data.refetch({
...this.props.data.variables,
search: this.state.term,
}).catch(e => {
if (e) {
console.log(e);
}
}).then(() => {
this.input.blur();
});
}, 600);
Takeaways: Apollo doesn’t require doing anything new or different. The Relay Modern approach is useful and only requires a small amount of configuration. Relay Modern highlights the need to have your queries in a separate file so you DRY, as they may be needed in multiple places.
Pagination Containers
Most apps, especially something like a blog, need pagination and/or Infinite Scroll. All implementations make this easy.
Relay Classic
Pagination is mostly a roll-your-own solution using something like:
this.props.relay.setVariables({ count: prevCount + 10 })
Relay Modern
You use
createPaginationContainer(), which requires a lot of config, but then you get
this.props.relay.hasMore() and
this.props.relay.loadMore(10). Example:
export default createPaginationContainer(
Term,
graphql`
fragment Term_viewer on Viewer {
term(slug: $slug, taxonomy: $taxonomy) {
id
name
slug
taxonomy {
rewrite {
slug
}
labels {
singular
plural
}
}
}
posts(term: $slug, taxonomy: $taxonomy, after: $cursor, first: $count)
@connection(key: "Term_posts") {
edges {
node {
...Post_post
}
cursor
}
pageInfo {
startCursor
endCursor
hasNextPage
hasPreviousPage
}
}
}
`,
{
direction: 'forward',
getConnectionFromProps(props) {
return props.viewer && props.viewer.posts;
},
getVariables(props, { count, cursor }, fragmentVariables) {
return {
...fragmentVariables,
count,
cursor,
};
},
getFragmentVariables(vars, totalCount) {
return {
...vars,
count: totalCount,
};
},
query: TermQuery,
}
);
Yes, this configuration is confusing and weird. When Relay Modern first dropped, some of the configuration values were missing from the docs! I have a PR merged in Relay for this. As a trade-off, you just call
loadMore() and you’re done. All of the connection merging happens automatically.
Apollo
The
data prop always has a method called
fetchMore(), but you are responsible for merging the results with your previous results, which can be dangerous and weird. Example:
const Archive = ({ variables, fetchMore = null, posts: { pageInfo, edges } }) =>
<section>
<ul>
{edges.map(({ cursor, node }) =>
<li key={cursor}>
<Post post={node} />
</li>
)}
</ul>
{fetchMore &&
pageInfo.hasNextPage &&
<button
className={styles.button}
onClick={() =>
fetchMore({
variables: {
...variables,
cursor: pageInfo.endCursor,
},
updateQuery: (previousResult, { fetchMoreResult }) => {
const { edges: previousEdges } = previousResult.viewer.posts;
const { edges: newEdges } = fetchMoreResult.viewer.posts;
const newViewer = {
viewer: {
...fetchMoreResult.viewer,
...fetchMoreResult.viewer.posts,
edges: [...previousEdges, ...newEdges],
},
},
};
return newViewer;
},
})}
>
MORE
</button>}
</section>;
Takeaways: Sometimes things are very similar in both frameworks. Other times, they are equally as strange. I would note that both are probably better than the current Bring Your Own Implementation solutions that have been written in jQuery from days of yore.
Conclusion
Rather than spending a ton of time trying to pick the perfect solution before building anything, I have tried both, and am able to create what I want in both. I still want to try Apollo on React Native, and I still want to mix my React Native code with native mobile platform code. I think both will work just fine.
Relay Modern works like a dream in React Native. Using it on the web is possible with the tools I outlined above.
Apollo has an ecosystem around it, and its own ideas about how to do things. Facebook created GraphQL and Relay, but does not actively provide ALL of the tools you need.
My prediction: I could rewrite this post every 6 months with lots of new learnings based on changes and pivots from every corner of the ecosystem. GraphQL and React will probably remain “stable.” I think the frameworks around them are just getting started. | https://medium.com/@wonderboymusic/upgrading-to-relay-modern-or-apollo-ffa58d3a5d59 | CC-MAIN-2020-34 | en | refinedweb |
ZparkIOZparkIO
Boiler plate framework to use Spark and ZIO together.
The goal of this framework is to blend Spark and ZIO in an easy to use system for data engineers.
Allowing them to use Spark is a new, faster, more reliable way, leveraging ZIO power.
Table of ContentsTable of Contents
- What is this library for ?
- Public Presentation
- Why would you want to use ZIO and Spark together?
- How to use?
- Examples
- Authors
What is this library for ?What is this library for ?
This library will implement all the boiler plate for you to be able to include Spark and ZIO in your ML project.
It can be tricky to use ZIO to save an instance of Spark to reuse in your code and this library solve all the boilerplate problem for you.
Public PresentationPublic Presentation
Feel free to look at the slides on Google Drive or on SlideShare presented during the ScalaSF meetup on Thursday, March 26, 2020. You can also watch the presentation on Youtube.
ZparkIO was on
version 0.7.0, so things might be out of date.
Why would you want to use ZIO and Spark together?Why would you want to use ZIO and Spark together?
From my experience, using ZIO/Future in combination with Spark can speed up drastically the performance of your job. The reason being that sources (BigQuery, Postgresql, S3 files, etc...) can be fetch in parallel while the computation are not on hold. Obviously ZIO is much better than Future but it is harder to set up. Not anymore!
Some other nice aspect of ZIO is the error/exception handling as well as the build-in retry helpers. Which make retrying failed task a breath within Spark.
How to use?How to use?
I hope that you are now convinced that ZIO and Spark are a perfect match. Let's see how to use this Zparkio.
Include dependenciesInclude dependencies
First include the library in your project:
libraryDependencies += "com.leobenkel" %% "zparkio" % "[VERSION]"
This library depends on Spark, ZIO and Scallop.
Unit-testUnit-test
You can also add
libraryDependencies += "com.leobenkel" %% "zparkio-test" % "[VERSION]"
To get access to helper function to help you write unit tests.
How to use in your code?How to use in your code?
There is a project example you can look at. But here are the details.
MainMain
The first thing you have to do is extends the
ZparkioApp trait. For an example you can look at the ProjectExample: Application.
SparkSpark
By using this architecture, you will have access to
SparkSesion anywhere in your
ZIO code, via
import com.leobenkel.zparkio.Services._ for { spark <- SparkModule() } yield { ??? }
for instance you can see its use here.
Command linesCommand lines
You will also have access to all your command lines automatically parsed, generated and accessible to you via:
CommandLineArguments ; it is recommended to make this helper function to make the rest of your code easier to use.
Then using it, like here, is easy.
HelpersHelpers
In the implicits object, that you can include everywhere. You are getting specific helper functions to help streamline your projects.
Unit testUnit test
Using this architecture will literally allow you to run your main as a unit test.
ExamplesExamples
Simple exampleSimple example
Take a look at the simple project example to see example of working code using this library: SimpleProject.
More complex architectureMore complex architecture
A full fles production ready project will obviously need more code that the simple example. For this purpose, and upon suggestion of several awesome people, I added a more complex project. This is a WIP and more will be added as I go. MoreComplexProject. | https://index.scala-lang.org/leobenkel/zparkio/zparkio/0.9.1?target=_2.11 | CC-MAIN-2020-34 | en | refinedweb |
Calendarro
Calendar widget library for Flutter apps. Offers multiple ways to customize the widget.
Getting Started
Installation
Add dependency to your pubspec.yaml:
calendarro: ^1.2.0
Basic use
First, add an import to your code:
import 'package:calendarro/calendarro.dart';
Add a widget to your code:
Calendarro( startDate: DateUtils.getFirstDayOfCurrentMonth(), endDate: DateUtils.getLastDayOfCurrentMonth() )
Customization
1. Display Mode - If you prefer to operate on multiple rows to see whole month, use:
Calendarro( displayMode: DisplayMode.MONTHS, ... )
2. Selection Mode - If you want to select multiple dates, use:
Calendarro( selectionMode: SelectionMode.MULTI, ... )
3. Weekday Labels - If you want to provide your own row widget for displaying weekday names, use:
Calendarro( weekdayLabelsRow: CustomWeekdayLabelsRow() ... )
you can create your CustomWeekdayLabelsRow by looking at default CalendarroWeekdayLabelsView.
4. Day Tile Builder - If you want to build day tiles your own way, you can use:
Calendarro( dayTileBuilder: CustomDayTileBuilder() ... )
you can create your CustomDayTileBuilder looking upon DefaultDayTileBuilder.
5. Initial selected dates - When you want some dates to be selected from the scratch, use selectedDate (SelectionMode.SINGLE) or selectedDates (SelectionMode.MULTI) arguments:
Calendarro( selectedDate: DateTime(2018, 8, 1) //or selectedDates: [DateTime(2018, 8, 1), DateTime(2018, 8, 8)] ... )
you can create your CustomDayTileBuilder looking upon DefaultDayTileBuilder.
Selecting date callback
If you want to get a callback when a date tile is clicked, there is onTap param:
Calendarro( onTap: (date) { //your code } ... )
Advanced usage:
For more advanced usage see: | https://pub.dev/documentation/calendarro/latest/ | CC-MAIN-2020-34 | en | refinedweb |
I don't know if it's an Entity Framework's desing choice or a wrong approach on my behalf, but whenever I try to AddRange entities to a DbSet I can't seem to get the auto-generated IDENTITY fields.
[Table("entities")] public class Entity { [Key] [Column("id")] public long Id { get; set; } [Column("field")] public string Field { get; set; } } var entities = new Entity[] { new Entity() { Field = "A" }, new Entity() { Field = "B" }, }; _dbContext.Entities.AddRange(entities); await _dbContext.SaveChangesAsync(); //ids are still default(long) at this point!!
EDIT: Here's the updated code to show what was causing the problem: enumerables. No need to add other attributes to the entity classes.
public class Request { public string Field { get; set; } public Entity ToEntity() { return new Entity() { Field = Field }; } } public async Task<IEnumerable<long>> SaveRequests(IEnumerable<Request> requests) { var entities = requests.Select(r => r.ToEntity()); //not working var entities = requests.Select(r => r.ToEntity()).ToArray(); //working _dbContext.Entities.AddRange(entities); await _dbContext.SaveChangesAsync(); return entities.Select(e => e.Id); }
What was causing the problem? Enumerables! Take a look at the EDIT section in my question for the solution.
I'm using Database First in EF 6, and after trying for a period of time, I find a possible solution.
First, Check your Table in the Database, Ensure that you defined the 'ID' column as an auto-increment primary key field, which can be declared by using something like
ID int IDENTITY(1,1) PRIMARY KEY,
when creating your table. Some related information can see here1 or here2.
or you can check the data Properties in MSSQL IDE like:
Second, Set the 'ID' column's StoreGeneratedPattern as Identity, you can do it by open the edmx file in Visual Studio, right click on the Data Column in table and select Properties, and StoreGeneratedPattern setting is in the Properties Window :
Some related article see here.
After complete things above, using EF AddRange, ID will auto increment and all works great.
public class Entity { public long Id { get; set; } public string Field { get; set; } } var entities = new Entity[] { new Entity() { Field = "A" }, new Entity() { Field = "B" }, }; _dbContext.Entities.AddRange(entities); | https://entityframeworkcore.com/knowledge-base/42480952/can-t-auto-generate-identity-with-addrange-in-entity-framework | CC-MAIN-2020-34 | en | refinedweb |
Summary:.
As an example of this, today I'm going to share some Java Swing source code where I create a translucent JFrame. To make it a little more interesting/real, I've added a a JTextArea and JScrollPane to show you how it works, and how it looks.
Create a transparent JFrame with one magic line
Actually, creating a transparent JFrame on Mac OS X is no big deal, at least not as long as you're using the right software versions. The only line of code you really need is shown here:
editorFrame.getRootPane().putClientProperty("Window.alpha", new Float(0.8f));
This special Window.alpha property -- which is Mac OS X specific -- lets you set the translucency level of your JFrame or JWindow. If you're not shipping your product to other users on a variety of other systems where you need to check the operating system and revision level, this is all you have to do.
You can set this
Window.alpha property anywhere between
0.0 and
1.0, with the lowest values making your window almost invisible. I'll show the effect of different settings shortly.
My Java translucent JFrame source code
I've written an example Java/Swing application to demonstrate this JFrame transparency effect on Mac OS X, and I'm sharing the source code here. Most of the code is boilerplate Java Swing code, but here's a quick description of it:
- I create a
JFrameobject named
editorFrame.
- I create a
JTextArea, place that in a
JScrollPane, and place that in the center panel of the
JFrame's default
BorderLayout.
- I set the
Window.alphasetting, as shown above.
- I center the
JFrame, and then make it visible.
Given that introduction, here's my sample Java code:
package com.devdaily.swingtests.transparency; import javax.swing.*; import java.awt.BorderLayout; import java.awt.Dimension; /** * Creates a translucent frame (jframe) on Mac OS X. * @author alvin alexander, devdaily.com */ public class MacTranslucentFrame { public static void main(String[] args) { new MacTranslucentFrame(); } public MacTranslucentFrame() { SwingUtilities.invokeLater(new Runnable() { public void run() { JFrame editorFrame; editorFrame = new JFrame("Java Mac OS X Translucency Demo"); editorFrame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE); // this is what sets the transparency/translucency on Mac OS X editorFrame.getRootPane().putClientProperty("Window.alpha", new Float(0.8f)); // create and add a scroll pane and text area JTextArea textArea = new JTextArea(5, 30); JScrollPane scrollPane = new JScrollPane(textArea); textArea.setText("Hello, world"); scrollPane.setPreferredSize(new Dimension(300, 185)); editorFrame.getContentPane().add(scrollPane, BorderLayout.CENTER); editorFrame.pack(); editorFrame.setLocationRelativeTo(null); editorFrame.setVisible(true); } }); } }
Sample screenshots
Here are a few sample screenshots of this application running. I set the translucency at three different levels for these shots:
0.8f,
0.6f, and
0.4f. The application is running in front of a blue background, with a folder intentionally placed behind the
JFrame.
Here's the translucency with a setting of
0.8f:
Here's roughly the same screen shot with a translucency with a setting of
0.6f:
And finally here's roughly the same screen shot with a translucency with a setting of
0.4f:
This is a very cool effect for Java Swing applications on the Mac platform. I've written my own custom editor that I use on the Mac, and this is one of those nice effects that gives an application a little extra "something" that customers appreciate. | https://alvinalexander.com/blog/post/jfc-swing/how-create-transparent-translucent-java-frame-jframe-mac/ | CC-MAIN-2020-16 | en | refinedweb |
public class LatencyAwarePolicy extends Object implements ChainableLoadBalancingPolicy
When used, this policy will collect the latencies of the queries to each Cassandra node and maintain a per-node average latency score. The nodes that are slower than the best performing node by more than a configurable threshold will be moved to the end of the query plan (that is, they will only be tried if all other nodes failed). Note that this policy only penalizes slow nodes, it does not globally sort the query plan by latency..
Please see the
LatencyAwarePolicy.Builder class.)
distancein interface
LoadBalancingPolicy
host- the host of which to return the distance of.
hostas returned by the wrapped policy.
public Iterator<Host> newQueryPlan(String loggedKeyspace, Statement statement)
The returned plan will be the same as the plan generated by the child policy, except that nodes that are slower than the best performing node by more than a configurable threshold will be moved to the end (that is, they will only be tried if all other nodes failed). Note that this policy only penalizes slow nodes, it does not globally sort the query plan by latency.. | https://docs.datastax.com/en/drivers/java/3.7/com/datastax/driver/core/policies/LatencyAwarePolicy.html | CC-MAIN-2020-16 | en | refinedweb |
There was a mistake with the post dont know ho to edit so I follow after "accounts in" before V2.
diferent devices, if this is not possible any more, what would be the best way to implement it on V2?
The app key and secret themselves are not enough to connect to a Dropbox account, in either API v1 or API v2.
To connect to a Dropbox account via the API, you need an "access token". You can get an access token for the user by sending the user through the OAuth app authorization flow. You can find an example implementation of doing this on Android with the API v2 Java SDK here:
Specifically, the process is initiated here:...
And completed here:...
One difference between API v1 and API v2 is that API v2 uses OAuth 2 exclusively, where the app secret is not necessary for client-side apps, such as on Android. Only the app key is needed to start the OAuth app authorization flow and get an access token for the current user.
Im not sure if I´m doing this OK, but this is what I do and application crushes:
Auth.startOAuth2Authentication(this, this.RASVIET_APP_KEY_V2);
String accessToken = Auth.getOAuth2Token();
this is an activitie that
extends PreferenceActivity
and rasviet_app_key_v2 is my key for the app
I am using 3.0.3 dropbox core sdk and it doesnt find v7 in android.support.v7.app.AppCompatActivity
In build.grade file I add this:
dependencies {
compile files('libs/dropbox-core-sdk-3.0.3.jar')
}
and add the jar with that name under the libs folder, then I create a new class just as the example with this code:
package com.rasviet.mobility.sincro;
import android.content.SharedPreferences;
import android.support.v7.app.AppCompatActivity;
import com.dropbox.core.android.Auth;
/**
* Created by Jritxal on 01/08/2017.
*/
public class DropboxActivity extends AppCompatActivity {
}
and it says it cannont resolve v7 that apears in red and so does AppCompatActivity ofc.
Hello, finally I made it work!!
My prob now is that I suppose I need the
DbxClientV2 client;
variable to keep making calls to the api, I have declared it as public static on my main class (SyncActivity) and do this on the DropboxConnection class:
protected void loadData() {
new GetCurrentAccountTask(DropboxClientFactory.getClient(), new GetCurrentAccountTask.Callback() {
@Override
public void onComplete(FullAccount result) {
SyncActivity.client = DropboxClientFactory.getClient();
finish();
}
@Override
public void onError(Exception e) {
Toast.makeText(getApplicationContext(), "Failed to get account details.",
Toast.LENGTH_LONG).show();
}
}).execute();
}
but I´m trying to use it on the onResume method of SyncActivity class to just get email addres of the user like this:
try {
if (client != null) {
FullAccount account = client.users().getCurrentAccount();
Toast.makeText(SyncActivity.this, "cliente: " + account.getEmail()+ " acc: "+account.toString(),
Toast.LENGTH_LONG).show();
}
}
catch(Exception e)
{
Toast.makeText(SyncActivity.this, "Excepcion: "+e.getMessage(),
Toast.LENGTH_LONG).show();
}
and Im getting an exception with message: "null"! | https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/Android-authentication/td-p/231853 | CC-MAIN-2020-16 | en | refinedweb |
- ×
pushState + ajax = pjax
Filed under application tools › loadersShow All
pjax = pushState + ajax
pjax is a jQuery plugin that uses ajax and pushState to deliver a fast browsing experience with real permalinks, page titles, and a working back button.
pjax works by fetching HTML from your server via ajax and replacing the content of a container element on your page with the loaded HTML. It then updates the current URL in the browser using pushState. This results in faster page navigation for two reasons:
- No page resources (JS, CSS) get re-executed or re-applied;
- If the server is configured for pjax, it can render only partial page contents and thus avoid the potentially costly full layout render.
Status of this project
jquery-pjax is largely unmaintained at this point. It might continue to receive important bug fixes, but its feature set is frozen and it's unlikely that it will get new features or enhancements.
Installation
pjax depends on jQuery 1.8 or higher.
npm
$ npm install jquery-pjax
standalone script
Download and include
jquery.pjax.jsin your web page:
curl -LO
Usage
$.fn.pjax
The simplest and most common use of pjax looks like this:
$(document).pjax('a', '#pjax-container')
This will enable pjax on all links on the page and designate the container as
#pjax-container.
If you are migrating an existing site, you probably don't want to enable pjax everywhere just yet. Instead of using a global selector like
a, try annotating pjaxable links with
data-pjax, then use
'a[data-pjax]'as your selector. Or, try this selector that matches any
<a data-pjax href=>links inside a
<div data-pjax>container:
$(document).pjax('[data-pjax] a, a[data-pjax]', '#pjax-container')
Server-side configuration
Ideally, your server should detect pjax requests by looking at the special
X-PJAXHTTP header, and render only the HTML meant to replace the contents of the container element (
#pjax-containerin our example) without the rest of the page layout. Here is an example of how this might be done in Ruby on Rails:
def index if request.headers['X-PJAX'] render :layout => false end end
If you'd like a more automatic solution than pjax for Rails check out Turbolinks.
Check if there is a pjax plugin for your favorite server framework.
Also check out RailsCasts #294: Playing with PJAX.
Arguments
The synopsis for the
$.fn.pjaxfunction is:
$(document).pjax(selector, [container], options)
selectoris a string to be used for click event delegation.
containeris a string selector that uniquely identifies the pjax container.
optionsis an object with keys described below.
pjax options
You can change the defaults globally by writing to the
$.pjax.defaultsobject:
$.pjax.defaults.timeout = 1200
$.pjax.click
This is a lower level function used by
$.fn.pjaxitself. It allows you to get a little more control over the pjax event handling.
This example uses the current click context to set an ancestor element as the container:
if ($.support.pjax) { $(document).on('click', 'a[data-pjax]', function(event) { var container = $(this).closest('[data-pjax-container]') var containerSelector = '#' + container.id $.pjax.click(event, {container: containerSelector}) }) }
NOTE Use the explicit
$.support.pjaxguard. We aren't using
$.fn.pjaxso we should avoid binding this event handler unless the browser is actually going to use pjax.
$.pjax.submit
Submits a form via pjax.
$(document).on('submit', 'form[data-pjax]', function(event) { $.pjax.submit(event, '#pjax-container') })
$.pjax.reload
Initiates a request for the current URL to the server using pjax mechanism and replaces the container with the response. Does not add a browser history entry.
$.pjax.reload('#pjax-container', options)
$.pjax
Manual pjax invocation. Used mainly when you want to start a pjax request in a handler that didn't originate from a click. If you can get access to a click
event, consider
$.pjax.click(event)instead.
function applyFilters() { var url = urlForFilters() $.pjax({url: url, container: '#pjax-container'}) }
Events
All pjax events except
pjax:click&
pjax:clickedare fired from the pjax container element.
pjax:send&
pjax:completeare a good pair of events to use if you are implementing a loading indicator. They'll only be triggered if an actual XHR request is made, not if the content is loaded from cache:
$(document).on('pjax:send', function() { $('#loading').show() }) $(document).on('pjax:complete', function() { $('#loading').hide() })
An example of canceling a
pjax:timeoutevent would be to disable the fallback timeout behavior if a spinner is being shown:
$(document).on('pjax:timeout', function(event) { // Prevent default timeout redirection behavior event.preventDefault() })
Advanced configuration
Reinitializing plugins/widget on new page content
The whole point of pjax is that it fetches and inserts new content without refreshing the page. However, other jQuery plugins or libraries that are set to react on page loaded event (such as
DOMContentLoaded) will not pick up on these changes. Therefore, it's usually a good idea to configure these plugins to reinitialize in the scope of the updated page content. This can be done like so:
$(document).on('ready pjax:end', function(event) { $(event.target).initializeMyPlugin() })
This will make
$.fn.initializeMyPlugin()be called at the document level on normal page load, and on the container level after any pjax navigation (either after clicking on a link or going Back in the browser).
Response types that force a reload
By default, pjax will force a full reload of the page if it receives one of the following responses from the server:
Page content that includes
<html>when
fragmentselector wasn't explicitly configured. Pjax presumes that the server's response hasn't been properly configured for pjax. If
fragmentpjax option is given, pjax will extract the content based on that selector.
Page content that is blank. Pjax assumes that the server is unable to deliver proper pjax contents.
HTTP response code that is 4xx or 5xx, indicating some server error.
Affecting the browser URL
If the server needs to affect the URL which will appear in the browser URL after pjax navigation (like HTTP redirects work for normal requests), it can set the
X-PJAX-URLheader:
def index request.headers['X-PJAX-URL'] = "" end
Layout Reloading
Layouts can be forced to do a hard reload when assets or html changes.
First set the initial layout version in your header with a custom meta tag.
<meta http-
Then from the server side, set the
X-PJAX-Versionheader to the same.
if request.headers['X-PJAX'] response.headers['X-PJAX-Version'] = "v123" end
Deploying a deploy, bumping the version constant to force clients to do a full reload the next request getting the new layout and assets. | https://www.javascripting.com/view/jquery-pjax | CC-MAIN-2020-16 | en | refinedweb |
Enterprise Library 2.0: The Logging Application Block
The Enterprise Library 2.0 (EL2) is the second major release of the robust frameworks released by the Microsoft Patterns and Practices team. At the time this article was written, EL2 is a "Community Technology Preview (CTP)", so features are subject to change.
EL2 consists of six main application blocks: Logging and Instrumentation, Caching, Data Access, Cryptography, Security, and Exception Handling. Along with the application blocks, EL2 is built on a number of core components, which can be consumed by the general public, but are primarily used internally by the application blocks. These core components can be classified as: Configuration, Instrumentation, and the ObjectBuilder. In the previous version of the Enterprise Library, Configuration was the seventh application block. In EL2, the functionality has been replaced by the .NET 2.0 System.Configuration namespace and has been moved into the core architecture.
Although EL2 will ship with a series of excellent "QuickStart" applications, these quick starts don't always explore all the features and have been known to lack sufficient documentation. In addition, logging (for example, debug, trace, error, and so on) is far from standardized in most organizations, making it important to have as much content on the subject as possible. For example, a number of organizations have multiple ASP.NET applications that all log information differently.
The Logging Application Block and EL2 are not turnkey solutions; however, they provide people with a mechanism to create a unified logging strategy. The goal of this article is to investigate some basic logging scenarios, explain and define some of the required terminology, and help you take those first steps to more efficient logging.
Major Changes in the Logging Block
There are too many changes in the Logging Application Block between the Enterprise Library (EL1) and EL2 to cover in this short article, but I want to highlight some of the major changes in the Logging Application Block.
First, the previous version of the Logging Application Block (in fact, most of the original version of EL1) was based on Avanade's ACA.NET framework. The word "based" is used very loosely, because in many cases the only change between ACA.NET and EL1 is the namespace. Although ACA.NET and EL1 are both great pieces of code, EL2 has simplified the logging strategy and has aligned better with the .NET 2.0 framework.
For example, Trace Listeners (A.K.A. LogSinks in previous versions) were custom code. In the new version, they derive from the very established TraceListener class in the System.Diagnostics namespace. Additionally, the frequently used LogEntry class now supports multiple categories. This may not sound like a big deal; however, this feature allows you to create a more standardized logging policy across applications. Multiple categories provide you with the ability to log different event types (for example, debug, audit, and so forth) specified in different application tiers (as in UI, Business, and the like) to different locations (for example, database, text, and so on), a big improvement from previous versions.
Along with the code improvements, the new logging application block has also simplified the "strategy" aspect. This was done by eliminating the concept of "distribution strategies" that were tedious and complicated in EL1. Instead, EL2 utilizes a far superior architecture of linking TraceListener classes together, which further aligns its design with the .NET 2.0 frameworks.
Logging Block Definitions
Before jumping into some examples of working with the EL2 Logging Application Block, you should first understand some terminology. The following is a summary of some of the definitions that you will need to know before using the Logging Application Block:
- Trace Listeners: Collects, stores, and routes messages
- Database Trace Listener: Logs to a Database
- Flat File Trace Listener: Logs to Flat File
- Formatted Event Log Trace Listener: Logs to Windows event log
- Msmq Trace Listener: Logs to MSMQ
- System Diagnostic Trace Listener: Logs to an event to an implementation of System.Diagnostic
- WMI Trace Listener: Logs a Windows Management Instrumentation event
- Custom Trace Listener: Logs to custom implementation of trace listener
- Filters: Provides log filtering based on pre-defined criteria
- Category Filter: Filter based on category
- Custom Filter: Extension point for creating custom filters
- Log Enabled Filter: Boolean to enable or disable log entries
- Priorty Filter: Filter based on priority
- Category Sources: This is the grouping mechanism used to classify a type of a message. A category can consist of multiple Trace Listeners or a single Trace Listener.
- Special Sources: Out-of-the-box sources for use in your application. As far as I can tell, you cannot remove or add to these, you can only wire them up for use in your application by specifying a Trace Listener. These can be used as a generic or general way of logging information if you want to avoid the generation of categories.
- Logging Errors & Warnings: Logs errors and warnings
- Unprocessed Category: Logs unprocessed categories
- All Events: Logs all events
- Formatters: Used to format a message
- Text Formatter: The default formatter comes with a customizable template driven system
- Binary Formatter: Used for object serialization/deserialization and in some queue scenarios
- Custom Formatter: Extension point
Jumping into Logging...
The best way to understand EL2's logging features is to jump right into a few examples. Let me set the stage a bit. The first example is very simple and its purpose is to get the wheels spinning; it demonstrates a simple database logging scenario. The second example extends the logging application block. Both examples demonstrate the extensibility points in EL2.
I assume you have the EL2 already. If not, you can obtain it from here. If you are using the December CTP, it is highly recommended that you build the EL2 configuration console prior to working with the examples in this article. The configuration console allows you to manipulate the configuration files through a simple GUI.
Along with the configuration console, the following examples require that you have Visual Studio 2005 and a MS SQL Server database installed. I used SQL 2005 Express, but I believe that, with some minor tweaks, any version should work.
EL2 Database Logging
To start the first example showing a simple database logging, navigate to the install directory of the Enterprise Library 2.0 and locate the "..\Src\Logging\TraceListeners\Database\Scripts\" directory. In this directory, you should find a batch file, "CreateLoggingDb.cmd," which will need to be executed (first ensure that SQL Server is running). This batch file executes the SQL script, which is located in the same directory and called LoggingDatabase.sql, creating the logging database natively used by EL2. The native logging database used in this example, "Logging," is very full-featured, but you do also have the choice to design and create your own database for use with the logging block.
Continue by creating a new C# Windows application in Visual Studio 2005; for ease of use, for this and all of the examples, Windows forms applications will be used. Add a new application configuration file, and open the Enterprise Library console.
EL2 is heavily driven through meta-data stored in application configuration files. The console creates a visual interpretation of the XML stored in these configuration files and makes it easy to manipulate the data. Navigate to File, Open Application, and select the app.config file that you just created. Another way to do this is to create the app config file in the EL2 console, but both methods have the same result.
The next step is to add a new node to the configuration by right-clicking (or navigating to "Action") in the console on the top-level node, and selecting "Logging Application Block."
Figure 1: The EL2 console is a straightforward GUI that reduces the time it takes to generate the required configuration information.
By default, the Logging application block gets configured with the EventLog trace listener enabled for the "General" category, and the "Special Source" category of "Logging Errors & Warnings." The event log is a fine logging store if you have a single server, but in most cases you want something more centralized that you can query against, such as a database.
For this example, you will need to remove all of the event log references by highlighting their nodes and going to "Action, Remove" or by right-clicking on their nodes and selecting "Remove." Add a new "Database Trace Listener" by highlighting the "Trace Listeners" node by right-clicking or going to "Action" again. You should see a new database trace listener and a new reference to the data access application block. The granular details about the data access block will be saved for another time, but do know that a new connection reference (which defaults to "Connection String") has been created. The setting attributes will need to be modified appropriately to reflect your database setup.
The next step is to associate the "Database Trace Listener" with the new database connection reference, and provide it a formatter; in this case, the default text formatter is used.
The final configuration step is to add a new category called "Audit" under "Category Sources" and reference the new "Database Trace Listener."
Rest assured, even though these steps are slightly time consuming, they are far simpler than editing the configuration files by hand. Downloading the example will help make sense of the process.
With the application configuration out of the way, the next step is very simple. Just add a reference in VS 2005 to Microsoft.Practices.EnterpriseLibrary.Logging, add the using statement (Using Microsoft.Practices.EnterpriseLibrary.Logging), and add the following code to a button:
LogEntry le = new LogEntry(); le.Categories.Add("Audit"); le.Message = "Testing our DB Logging"; le.EventId = 1234; le.Title = "Database Message"; le.Priority = 1; Logger.Write(le); | https://www.codeguru.com/csharp/.net/net_framework/systemnamespace/article.php/c11281/Enterprise-Library-20-The-Logging-Application-Block.htm | CC-MAIN-2020-16 | en | refinedweb |
On 07/24/2012 05:04 AM, Paolo Bonzini wrote: > We can provide fast versions based on the other functions defined > by host-utils.h. Some care is required on glibc, which provides > ffsl already. > > Signed-off-by: Paolo Bonzini <address@hidden> > --- > host-utils.h | 45 +++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 45 insertions(+) > +#ifdef __GLIBC__ > +#define ffsl qemu_ffsl > +#endif > +static inline int ffsl(long val) ffsl() makes sense in comparison to the standardized ffs() (why POSIX doesn't specify one is beyond me). > + > +static inline int flsl(long val) But what good is flsl (I'm assuming you mean find-last-set, or the most-significant set bit), especially since there is no standardized fls() and no fls() in host-utils.h? -- Eric Blake address@hidden +1-919-301-3266 Libvirt virtualization library
signature.asc
Description: OpenPGP digital signature | https://lists.gnu.org/archive/html/qemu-devel/2012-07/msg03913.html | CC-MAIN-2020-16 | en | refinedweb |
Easy-to-use game AI algorithms (Negamax etc. )
Project description
EasyAI (full documentation here) is a pure-Python artificial intelligence framework for two-players abstract games such as Tic Tac Toe, Connect 4, Reversi, etc. It makes it easy to define the mechanisms of a game, and play against the computer or solve the game. Under the hood, the AI is a Negamax algorithm with alpha-beta pruning and transposition tables as described on Wikipedia.
Installation
If you have pip installed, type this in a terminal
sudo pip install easyAI
Otherwise, dowload the source code (for instance on Github), unzip everything into one folder and in this folder, in a terminal, type
sudo python setup.py install
Additionnally you will need to install Numpy to be able to run some of the examples.
A quick example
Let us define the rules of a game and start a match against the AI:
from easyAI import TwoPlayersGame, Human_Player, AI_Player, Negamax class GameOfBones( TwoPlayersGame ): """ In turn, the players remove one, two or three bones from a pile of bones. The player who removes the last bone loses. """ def __init__(self, players): self.players = players self.pile = 20 # start with 20 bones in the pile self.nplayer = 1 # player 1 starts def possible_moves(self): return ['1','2','3'] def make_move(self,move): self.pile -= int(move) # remove bones. def win(self): return self.pile<=0 # opponent took the last bone ? def is_over(self): return self.win() # Game stops when someone wins. def show(self): print "%d bones left in the pile"%self.pile def scoring(self): return 100 if game.win() else 0 # For the AI # Start a match (and store the history of moves when it ends) ai = Negamax(13) # The AI will think 13 moves in advance game = GameOfBones( [ Human_Player(), AI_Player(ai) ] ) history = game.play()
Result:
20 bones left in the pile Player 1 what do you play ? 3 Move #1: player 1 plays 3 : 17 bones left in the pile Move #2: player 2 plays 1 : 16 bones left in the pile Player 1 what do you play ?
Solving the game
Let us now solve the game:
from easyAI import id_solve r,d,m = id_solve(GameOfBones, ai_depths=range(2,20), win_score=100)
We obtain r=1, meaning that if both players play perfectly, the first player to play can always win (-1 would have meant always lose), d=10, which means that the wins will be in ten moves (i.e. 5 moves per player) or less, and m='3', which indicates that the first player’s first move should be '3'.
These computations can be sped up using a transposition table which will store the situations encountered and the best moves for each:
tt = TT() GameOfBones.ttentry = lambda game : game.pile # key for the table r,d,m = id_solve(GameOfBones, range(2,20), win_score=100, tt=tt)
After these lines are run the variable tt contains a transposition table storing the possible situations (here, the possible sizes of the pile) and the optimal moves to perform. With tt you can play perfectly without thinking:
game = GameOfBones( [ AI_Player( tt ), Human_Player() ] ) game.play() # you will always lose this game :)
Contribute !
EasyAI is an open source software originally written by Zulko and released under the MIT licence. It could do with some improvements, so if your are a Python/AI guru maybe you can contribute through Github . Some ideas: AI algos for incomplete information games, better game solving strategies, (efficient) use of databases to store moves, AI algorithms using parallelisation.
For troubleshooting and bug reports, the best for now is to ask on Github.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/easyAI/ | CC-MAIN-2020-16 | en | refinedweb |
Install Bloomreach B2B Commerce Accelerator
Introduction
Goal
Install the Bloomreach B2B Commerce Accelerator application using the Bloomreach Commerce Accelerator B2B Boot project and connect it to your commerce backend platform.
Background
The Bloomreach Commerce Accelerator B2B Boot project is designed as an out-of-the-box accelerator to integrate your commerce backend with Bloomreach Experience Manager and Bloomreach Search & Merchandising.
The Boot project can be configured with a commerce backend platform and Bloomreach Search & Merchandising. At the moment Salesforce B2B Commerce is the only supported B2B commerce backend: you can find all the required steps below in order to integrate with your Salesforce B2B Commerce instance.
The first part of this page explains how to set up the Bloomreach Commerce Accelerator B2B Boot project in your local development environment, connect your commerce backend platform, and deploy onto a server environment. The second part focuses on the Salesforce integration.
Set Up Local Development Environment
The Bloomreach Commerce Accelerator B2B Boot project is available in the Bloomreach Experience Manager Enterprise Maven repository.
Log in at.
Point your browser to
Pick the appropriate release (version number of the format X.Y.Z) and open the corresponding folder.
Download the starterstore-b2b-boot-X.Y.Z-project.tar.gz (or .tar.bz2 or .zip) source distribution and extract it to a folder in your local environment.
Set the connection properties for your commerce backends in the Bloomreach Accelerator Configuration File. In the Install Integration Salesforce Components section below you will understand all the details of the commerece backend properties. The following snippet represents just an example:
# Bloomreach Accelerator parameters for bloomreach backend service. bloomreach.cache.enabled = true bloomreach.account.id = 0000 bloomreach.api.base.url = bloomreach.domain.key = your_brsm_instance bloomreach.view.id = bloomreach.widgets.auth.key = # StarterStore parameters for SalesForce CloudCraze backend service. salesforcecc.cache.enabled = (e.g. true or false) salesforcecc.clientId = (e.g your Connected App consumer key) salesforcecc.clientSecret = (e.g. your Connected App consumer secret) salesforcecc.username = (e.g. SalesForce B2B merchant username) salesforcecc.password = salesforcecc.securityToken = salesforcecc.baseUrl = (e.g. https://<your_instance>.force.com/DefaultStore) salesforcecc.accessTokenUri = (e.g. https://<your_instance>.salesforce.com) salesforcecc.storefront.user.signin.url = (e.g. https://<your_instance>.force.com/DefaultStore/SiteLogin?startURL=/CustomSiteLogin) salesforcecc.storefront.user.signout.url = (e.g. https://<your_instance>.force.com/DefaultStore/secur/logout.jsp) salesforcecc.storefront.user.register.url = (e.g. https://<your_instance>.force.com/DefaultStore/UserRegister) salesforcecc.storefront.user.changePassword.url = (e.g. https://<your_instance>.force.com/DefaultStore/ChangePassword) salesforcecc.storefront.locale.language.code = (e.g en_US) salesforcecc.storefront.locale.currency.ISO3code = (e.g. USD)
Build the application using Maven:
mvn clean verify
Run the application:
mvn -P cargo.run
Verify that the Bloomreach Experience Manager CMS application is accessible at (login with credentials admin/admin).
To verify that your commerce backend connection is working properly, navigate to the Content application and browse to Bloomreach Accelerator: Commerce B2B Boot Project > products. Open one of the example product documents or create a new one, make sure your commerce backend is selected under Commerce Connector, then click Browse under Related External Product. You should be able to browse through the products in your commerce backend.
In future versions, we will support an easier way (e.g, Essentials tools) to migrate existing projects.
Deploy in a Server Environment
The deployment process for the Bloomreach Commerce Accelerator B2B Boot application is the same as for any Bloomreach Experience Manager implementation project.
Deploy in Bloomreach Cloud
If you are deploying in Bloomreach Cloud, follow the instructions in Deploy in Bloomreach Cloud.
As explained in Configure Bloomreach Commerce Accelerator, any Bloomreach Commerce Accelerator Boot based project needs to read the Bloomreach Accelerator Configuration File braccelerator.properties by default. Therefore, make sure to deploy the distribution with the configuration file, following Set Environment Configuration Properties, renaming the uploaded configuration file to braccelerator.properties, which is referred by both conf/platform.xml and conf/hst.xml.
Deploy a Project Distribution in Other Environment
In enviroments other than Bloomreach Cloud, follow the instructions in Create a Project Distribution and Deploy a Project Distribution.
The Bloomreach Commerce Accelerator B2B Boot project needs to read the Bloomreach Accelerator Configuration File, braccelerator.properties by default. Therefore, make sure to deploy the distribution with the configuration file which is referred by both conf/platform.xml and conf/hst.xml.
Install Integration Salesforce Components
You are required to execute the installation steps below. Before starting, make sure that you set up a storefront in Salesforce, as described here.
Storefront Integration Components
Bloomreach Commerce Accelerator may work in combination with the Salesforce B2B storefront. As an example, the current version of the B2B Boot project delegates operations like user (contact) sign-in and sign-up directly to Salesforce. The B2B Boot project already includes different Visualforce Pages and Controllers that need be used to enable a seamless integration: you can find them in your B2B Boot project, more specifically below the /src/main/salesforcecc folder. You will be required to copy/replace some of the Visualforce Pages and Apex classes.
Site Redirects
Create a new Visualforce Page called SiteRedirect and copy the content from src/main/salesforcecc/SiteRedirect.page. In order to make this page accessible, you need to enable it in the "white-list": from the Force.com Setup page, search for Sites on the left pane; then select your customer community, click Public Settings -> Visualforce Page Access. Eventualy, move the SiteRedirect page from "Available" to "Enabled", as shown below.
Delegated Registration
Create a new Salesforce Component (a.k.a. controller) exposing the user registration functionality. You can copy the logic from the UserRegister.controller that you can find below the src/main/salesforcecc folder.
As a next step, you need to create the UserRegister Visualforce page. You can just copy the logic from src/main/salesforcecc/UserRegister.page. Please ensure to enable the new Visualforce page public access, following the same steps explained in the previous paragraph for the SiteRedirect.
As a next step, you need to change the logic of the login page. Under the Build->Develop tab, click on the Visualforce Component entry. The main section contains a list of components used in the B2B instance. Search for SiteLogin and click on the edit action. Please replace the existing content with the one defined in the src/main/salesforcecc/SiteLoginComponent.page. Once completed, save your changes. The image below summarizes all the pages editable throught the Salesforce admin.
Please ensure that the redirect URL is defined correctly in the Bloomreach Accelerator .properties file, more specifically in the salesforcecc.storefront.user.registration.url property.
Please also consider that the UserRegister Visualforce page must be enabled (as you did for the SiteRedirects). Moreover, the same logic applies for every custom Apex class: those need to be enabled, as displayed below. You can find more details here.
Finally, if everything has been configured correctly and the Bloomreach B2B Commerce Accelerator is up and running, visit the <your_brX_instance>/login page. You should get redirected to the Salesforce registration page. Please try to register a new user: if everything went well you should be redirected to the Bloomreach B2B Commerce Accelerator site.
Federated Authentication Integration
Create a new Visualforce Controller called SitePostLoginController and copy the content from src/main/salesforcc/SitePostLogin.controller. This controller is responsible for handling login tokens shared accross brX and Salesforce: the current strategy relies on Salesforce platform cache.
To create a new platform cache, please follow the steps below:
- Open the Develop section in the left pane of your Force.com Setup page.
- Click on the Platform cache, below the Develop section on the left pane.
- Click on the button "New Platform Cache Partition".
- Use token as label, name and set the partition as the default one.
- Please also ensure that total allocation size is properly configured: this depends mainly on your traffic. Considering that a token is stored as a String object, then you need to reserve enough space based on the number of registered users.
- Once finished, click on the Save button. If everything went well, the new partition will be part of the local namespace (see the image below)
As the next step, create a new Visualforce page and copy the content from src/main/salesforcc/SitePostLogin.page. Please also ensure to:
- Enable this page in the B2B Customer Community Profile.
- Enable the SitePostLogin controller in the Apex access section.
Once the page has been created, please modify the URL used to handle the federated login. In case the Visualforce page name is CustomSiteLogin, you can use that value as part of the salesforcecc.storefront.user.signin.url property in your accelerator .properties file.
The logout process must be also handled by Salesforce B2B Commerce. To trigger a logout in the StoreFront, it is sufficient to invoke the predefined logout URL, like https://<your_b2b_instance>/DefaultStore/secur/logout.jsp: this URL must be specified in the .properties file, more specifically this needs to be the value of the salesforcecc.storefront.user.signout.url property. The logout process will also be covered in the next paragraphs.
As a final step, visit the <your_brX_instance>/login page. Please fill the login form using the account detail created in the previous step: if everything goes well, you are redirected back to your brX instance. Once back, the login status on the top should display your first name and last name.
Delegated Password Change
First of all, you need to customize the existing ChangePassword Visualforce page: you can copy the content from the src/main/salesforcecc/ChangePassword.page and replace the existing one. This new page introduces a hidden field tracking the redirect URL parameter.
Moreover, edit the existing ChangePasswordController Apex class. You can just replace the entire content with the one defined below src/main/salesforcecc/ChangePassword.controller.
The Salesforce password change operation is accessible through the Bloomreach B2B Commerce Accelerator, more specifiically through https://<your_brX_instance>/account/creds URL. Once you get there, the visitor is redirected to the Change Password Visualforce Page powered by Salesforce. The redirect URL can be customized directly in the Bloomreach accelerator .properties file, more specifically with the salesforcecc.storefront.user.changePassword.url property.
You can now try to change the password of the user created in the previous step. Please visit the brX credentials page specified above and follow all the steps. Once that has been completed, please login again using the new password: if everything worked correctly, then you should be redirected back to your brX instance.
B2B Customer Community Settings
Once you have completed the installation of the additional Visualforce pages and controllers, you are now able to setup all the functionality delegated to Salesforce. First of all, you need to open the B2B Customer Community setting, as described here. Click on the Administration tab.
As a first step, please enable the "Customer Community User", the community profile for B2B customers. Click on the "Members" menu on the left (as shown in the image below). The "Customer Community User" is allowed to log into the Accelerator.
As the next step, click the "Login & Registration" tab. Please follow the image below:
Next, please enable "Allow external user to Self-Register": you can just follow the image above but for a complete reference you can also have a look here.
The "Logout Page URL" should refer to your brX site instance. More specifically, the URL should refer to the /logout sitemap item in order to trigger logout operation.
B2B Customer Community Email
You may also be interested in changing your email template. Please ensure to edit the "Classic Email Template" as explained here. In case you prefer to add a new email template, please ensure to update the related configuration in your Customer Community settings, as showed in the image below:
References
Salesorce B2B Commerce Documentation: | https://documentation.bloomreach.com/14/library/solutions/commerce-starterstore/install-b2b-commerce.html | CC-MAIN-2020-16 | en | refinedweb |
This notebook demonstrates the statsmodels MICE implementation.
The CHAIN data set, analyzed below, has also been used to illustrate the R mi package. Section 4 of this paper describes an analysis of the data set conducted in R
import sys sys.path.insert(0, "/projects/57433cc7-78ab-4105-a525-ba087aa3e2fc/statsmodels-mice2") %matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm from statsmodels.sandbox.mice import mice import matplotlib.pyplot as plt
First we load the data and do a bit of cleanup.
data = pd.read_csv("chain.csv") del data["Unnamed: 0"] data.columns = [x.replace(".W1", "") for x in data.columns] print data.head()
h39b age c28 pcs mcs37 b05 haartadhere 0 9.996477 29 5 34.182713 1 4 1 1 0.000000 38 5 58.098125 0 5 2 2 NaN 47 6 21.876003 0 1 1 3 NaN 53 2 18.675938 0 5 0 4 7.090077 42 2 54.099964 0 4 1
We can make some simple graphs to visualize the missing data patterns. The colored cells correspond to missing values.
imp = mice.MICEData(data) _ = imp.plot_missing_pattern()
We can exclude the complete cases and completely observed variables in order to make it easier to see the different missing data patterns.
_ = imp.plot_missing_pattern(hide_complete_rows=True, hide_complete_columns=True)
Here is a simple example of how to coduct an analysis using MICE. The formula specifies the "analysis model", which is the model we primarily want to fit and interpret. Additional "imputation models" are used to impute missing values of each variable that has missing values. In this example we use defaults, so that all the imputation models use ordinary least squares, with each variable imputed from a model in which every other variable has a main effect.
mi = mice.MICE("h39b ~ age + c28 + pcs + mcs37 + b05 + haartadhere", sm.OLS, imp) result = mi.fit(20, 5) print(result.summary())
Results: MICE ================================================================== Method: MICE Sample size: 508 Model: OLS Scale 18.13 Dependent variable: h39b Num. imputations 5 ------------------------------------------------------------------ Coef. Std.Err. t P>|t| [0.025 0.975] FMI ------------------------------------------------------------------ Intercept 15.3742 1.5637 9.8319 0.0000 12.3094 18.4390 0.1145 age -0.0871 0.0274 -3.1767 0.0015 -0.1408 -0.0334 0.2460 c28 -0.3437 0.1029 -3.3399 0.0008 -0.5455 -0.1420 0.1000 pcs -0.0294 0.0227 -1.2924 0.1962 -0.0739 0.0152 0.4629 mcs37 1.3717 0.5243 2.6163 0.0089 0.3441 2.3994 0.3159 b05 -1.1323 0.1592 -7.1149 0.0000 -1.4443 -0.8204 0.1722 haartadhere -1.0442 0.2547 -4.0991 0.0000 -1.5435 -0.5449 0.2403 ==================================================================
We can assess how well the imputed values match the observed values by looking at histograms of the marginal distributions. These histograms show the imputed values from the final imputed data set together with the observed values.
plt.clf() for col in data.columns: plt.figure() ax = plt.axes() _ = imp.plot_imputed_hist(col, ax=ax, )
<matplotlib.figure.Figure at 0x7fc30a5e52d0>
We can also look at marginal relationships between each variable and the outcome variable The
plot_bivariate method colors the points accorording to whether they are missing or observed on each variable in the scatterplot. We hope to see the same trends and degree of scatter among the observed and imputed points.
plt.clf() jitter = {"age": None, "c28": None, "pcs": None, "mcs37": 0.1, "b05": 0.1, "haartadhere": 0.1} for col in data.columns: if col == "h39b": continue _ = imp.plot_bivariate("h39b", col, jitter=jitter[col])
<matplotlib.figure.Figure at 0x7fc30a1c9410>
Another useful diagnostic plot is to look at the fitted values from the imputation model plotted against either the observed or imputed values. The imputed values are taken from the final imputed data set. We hope to see similar trends and similar degrees of scatter in the observed and imputed values. This plot can be made for any of the variables in the data set. Here we show the result for h39b since that is the variable with the greatest number of missing values in the data set.
plt.clf() _ = imp.plot_fit_obs("h39b")
<matplotlib.figure.Figure at 0x7fc30a24ee50> | http://nbviewer.jupyter.org/urls/umich.box.com/shared/static/mv2wvdcicwl4ww1kamp8lkb7fahrnl7m.ipynb | CC-MAIN-2018-05 | en | refinedweb |
Hi! Getting error when trying to fetch the preview of reports Hi,
This is the error I am getting when I try to generate the preview of the report existed in OEMCRM
reports cannot be run because the connector for Microsoft SQL Server Reporting Services, a required componentfor reporting, is not installed on the server that is running Microsoft SQL Server Reporting Services.
Help me if you have any ideas. SQLCLI service not registered on the local machine error Hi,
With reference to the above title, I am trying to connect an SQL Server 2008 Express instance which came along with Visual Studio 2010 Ultimate from my C# app. My connection string is:
Provider= SQLNCLI;Server=host\SQLEXPRESS;Database=database;
When I try connecting with the above connection string, it says:
SQLNCLI provider not registered on the local machine
followed by my connection string.
What am I supposed to do to solve this issue?
Thank You in advance for your reply.
Regards,
Clifford Page Reload after TimeOut Error> </authentication> <authorization> <deny users="?"/> </authorization>Global.aspx protected void Application_AcquireRequestState(object sender, EventArgs e) { if (System.Web.HttpContext.Current.Session != null) {&nb How to solve this "Smtp: Mailbox Unavailable or not local " error ? I am trying to send Email to the Email Addresses stored in the Database, some are working fine but I am getting stuck when I get hit with this error..
The server rejected one or more recipient addresses. The server response was: 550 Requested action not taken: mailbox unavailable or not local
I think this is something to do with SMTP Virtual Server Settings in IIS but i dont know what actually i should change over there. The code in the ASP.NET page worked for 40-50 mail addresses but suddenly i am getting this error
Can anyone help please. Any ideas will be of great help to me ..
Thanks in Advance?? Deploy Reports Error (VS2008) to 2005 ReportServer Db... Hi, i'm trying to deploy a report made in Visual Studio 2008 to a ReportServer Data Base 2005, but a get this error...
ÃÂ
ErrorÃÂ 2ÃÂ The report definition is not valid.ÃÂ Details: The report definition has an invalid target namespace '' which cannot be upgraded.ÃÂ
ÃÂ
Is it posible to use a SQL 2005 DB with Rdl's reports 2008
ÃÂ
thanks
Request.GetRequestStream() in a loop gives Timeout error Hi,
ÃÂ ÃÂ ÃÂ ÃÂ I tried searching the net but didnt find any relevant information regarding this issue, I am trying to pass data to a servlet and am sending it in a loop as a different request everytime , after the second requestÃÂ the debugger ÃÂ just hangs on this statement Request.GetRequestStream() and comes back after a lot of time or either gives a timeout error
ÃÂ
Following is the code
ÃÂ
try
{
log.Info(xmlTrade);
UTF8Encoding encoding = new UTF8Encoding();
byte[] bytes = encoding.GetBytes(xmlTrade);
HttpWebRequest request = (HttpWebRequest) WebRequest.Create(tradeServletPath);
log.Debug("Servlet Path: " + tradeServletPath);
request.Method = "POST";
request.ContentType = "text/xml";
request.ContentLength = bytes.Length;
request.Timeout = timeout;
Stream stream = request.GetRequestStream();
try
{
stream.Write(bytes, 0, bytes.Length);
}
finally
{
stream.Close();
}
return "Success";
}
catch(Exception ex)
{
log.Error("Exception occured when submitting this trade xml: " + xmlTrade);
if(ex.Message.StartsWith("The underlying"))
return "Connectionerror";
else
return string.Empty;
} CS0102 Error on ReportViewer reports using VS2010 Professional I am trying to build reports using the ReportViewer feature in Visual Studio 2010 Professional and I am encountering the following error when I try to run the aspx page containing the report.Compiler Error Message: CS0102: The type 'SupApps_SecurityDataSet'
already contains a definition for '_schemaSerializationMode'The error is stating the issue is on line 27 of the file but I dont see how to view the code on a rdc file in Visual Studio.HERE IS THE CODE FOR THE ASPX PAGE<%@ Page Language="VB" AutoEventWireup="false" CodeFile="QMDailyJournalReport.aspx.vb" Inherits="reports_QMDailyJournalReport" %>
<%@ Register assembly="Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" namespace="Microsoft.Reporting.WebForms" tagprefix="rsweb" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div>
<rsweb:ReportViewer I The expression contains undefined function call MyUserFunction() error :Telerik Reports
Hello All,
I am getting the error."An error has occured while processing textbox:The expression contains undefined
function call FormType()".
I have a report..where in i used a user function called "FormType".
My datasource for teh report is sharepoint list.
I am passing a parameter to the user function in the code.My user functions retrieves three values(one each time based on the conditions)But when i select the user function..in the edit expression dialog box..
i am not writing any parameter inside the user function as i am not sure what to write there.
I have gone through
and thought i might fall in 2nd cateogry..but unable to find a solution for that
Please help me.
Here is the code.
InitializeComponent();
string strType;
SPSite oSite = new SPSite("spsiteurl");
SPWeb oWeb = oSite.OpenWeb();
string[] parameters = { "10", "100", "1000" };
T
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/32023-getting-timeout-expired-error-local-reports.aspx | CC-MAIN-2018-05 | en | refinedweb |
#include "petscmat.h" PetscErrorCode MatPtAP(Mat A,Mat P,MatReuse scall,PetscReal fill,Mat *C)Neighbor-wise Collective on Mat
This routine is currently only implemented for pairs of sequential dense matrices, AIJ matrices and classes which inherit from AIJ.
Level:intermediate
Location:src/mat/interface/matrix.c
Index of all Mat routines
Table of Contents for all manual pages
Index of all manual pages | http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatPtAP.html | CC-MAIN-2018-05 | en | refinedweb |
I'm working on a soccer management program
I had some other members of my group to compile data on 400 real life players into a text file.
I planned on reading the text file using a program ,converting individual players into objects and then writing them to another file (in binary mode) to be used by my program..
The reading and writing seems to have gone well, but when I try to open the new file containing the player objects, the program seems to be able to read only 20 players for some reason..
#include <fstream> #include <iostream> using namespace std; int main() { class plr { public: int id; int shirt_no; int age; int goals; int pos;//0-GK,1-DEF,2-MID,3-ATT char LName[51]; char FName[51]; int team; int skill; int goaltime; int cards; void transfer(int t_id) { team = t_id; } }plrs[401],plrs2[401]; fstream reader("players.txt", ios::in); fstream binary("players2.dat", ios::out); //READ FROM TEXT FILE into objects : int j; for (int i=1;i<=401;i++) { reader>>j; plrs[j].id = j; reader>>plrs[j].LName>>plrs[j].FName>>plrs[j].team>>plrs[j].shirt_no>>plrs[j].age>>plrs[j].pos>>plrs[j].skill>>plrs[j].goaltime>>plrs[j].cards; cout << "\n\n"; } reader.close(); //Display all players for(int j=1;j<=400;j++) {cout<<"\n\n"<<plrs[j].LName<<"\n"<<plrs[j].FName<<"\n"<<plrs[j].team<<"\n"<<plrs[j].shirt_no<<"\n"<<plrs[j].age<<"\n"<<plrs[j].pos<<"\n"<<plrs[j].skill<<"\n"<<plrs[j].goaltime<<"\n"<<plrs[j].cards; } //Write objects to file for (int j=1;j<=400;j++){ binary.write((char*)&plrs[j], sizeof(plrs[j])); } binary.close(); //Read objects from that file through another filestream into a different array of objects fstream plrreader("players2.dat", ios::in); for (int j=1;j<=400;j++) { if (plrreader.read((char*)&plrs2[j], sizeof(plrs2[j]))) { cout<<j<<" succesful\n"; } } cout<<"Read successful"; //Display objects -- some error here, only 20 players seem to be readable : for (int j=1; j<400; j++) { cout<<"\n"<<plrs2[j].id<<"\nName :"<<plrs2[j].FName<<" "<<plrs2[j].LName<<"\nTeam :"<<plrs2[j].team<<"\n No.:"<<plrs2[j].shirt_no<<"\n Age:"<<plrs2[j].age<<"\n Pos :"<<plrs2[j].pos<<"\n Skill:"<<plrs2[j].skill<<"\n Max goals scored in"<<plrs2[j].goaltime<<"\n Cards"<<plrs2[j].cards; } plrreader.close(); return 0; }
I'm reading from players.txt, storing players in an array of 'plr' objects (plrs).
This seems to be working as expected (all 400 players are displayed correctly)
Then I'm writing these objects using write() to players2.dat. This too, seems to be working (I opened the dat file in notepad, and it has all 400 players)
But when I open the newly created dat file and read the objects into another array of player objects (plrs2), only the first 20 players are read correctly.. The rest are displayed as zeroes..
I've also tried putting the part which reads the objects (line 65 onwards) into another .cpp file, but I get the same result.
I've attached the players.txt file, if anyone needs to run the code..
Please help.. | https://www.daniweb.com/programming/software-development/threads/314911/binary-read-not-reading-all-objects | CC-MAIN-2018-05 | en | refinedweb |
Hello everyone. I am in a programming fundamentals class which we are using python. I am working on a Lab problem and really am not 100% sure how to do it.
The Problem:
A shipping company (Fast Freight Shipping Company) wants a program that asks the user to enter the weight of a package and then display the shipping charges. (This was the easy part)
Shipping Costs:
2 pounds or less = $1.10
over 2 but not more than 6 pounds = $2.20
over 6 but not more than 10 pounds = $3.70
over 10 = 3.80
My teacher added on to this saying we need to
Next, you will enhance this lab to include the following:
a. Include the name of the shipper and recipient.
b. Print a properly formatted invoice.
c. The shipper is required to purchase insurance on their package. The insurance
rates are based on the value of the contents of the package and are as follows:
Package Value Rate
0 – 500 3.99
501 – 2000 5.99
2000 10.99
2. The printed invoice must include the name of the shipper and recipient. As well, this
invoice will display the total of the shipping charge and the insurance cost.
So here is my code
def main(): #define shipping rates less_two = 1.10 twoPlus_six = 2.20 sixPlus_ten = 3.70 tenPlus = 3.80 shippingClass = 0 insuranceClass = 0 insurance1 = 3.99 insurance2 = 5.99 insurance3 = 10.99 #Getting name of shipper and recipient company = str(input('Please enter the name of the shipping company: ')) customer = str(input('Please enter the name of the customer :')) #get weight of the package weight = float(input('Enter the weight of the package: ')) insurance = shippingRate(weight, shippingClass) #shipping classification if weight <= 2.0: shippingClass = less_two elif weight > 2.0 and weight <= 6.0: shippingClass = twoPlus_six elif weight > 6.0 and weight <= 10.0: shippingClass = sixPlus_ten elif weight > 10.0: shippingClass = tenPlus else: print('Weight has to be a possitive number') print() if insurance <= 500 and insurance >= 0: insuranceClass = insurance1 elif insurance > 500 and insurance <= 2000: insuranceClass = insurance2 else: insuranceClass = insurance3 total = shippingRate + insuranceClass #New variables insurance = shippingRate(weight, shippingClass) shippingRate(weight, shippingClass) #Display print('The total with insurance will be $', total, '.', sep='') print() int(input('Press ENTER to end')) #calculating shipping cost. def shippingRate(poundage, classification): shipping_rate = poundage * classification print('The shipping will cost $', format(shipping_rate, '.2f'), '.', sep='') main()
I am getting this error
Traceback (most recent call last): File "F:\Python\ACC - Labs\Lab 3\lab3_shipping.py", line 50, in <module> main() File "F:\Python\ACC - Labs\Lab 3\lab3_shipping.py", line 31, in main if insurance <= 500 and insurance >= 0: TypeError: unorderable types: NoneType() <= int()
I'm sure you can see that I do not have a complete understanding of what I am doing yet so any help or direction would be great. Also I am sorry for the sloppy code. I did the first part and now trying to do the 2nd and it is just a mess right now. After I figure it out I was going to clean it up.
Thank you for anyone who spends time to help me.
Edited by nUmbdA: edit | https://www.daniweb.com/programming/software-development/threads/436830/python-typeerror-unorderable-types | CC-MAIN-2018-05 | en | refinedweb |
#include "ntw.h"
Go to the source code of this file.
A scrollpane is used to allow scroll bars to appear in a particular area. It's useful for containing large images or grids that might otherwise get truncated or make the window too large to fit on a screen. It can contain a single widget. | http://ntw.sourceforge.net/Docs/CServer/scrollpane_8h.html | CC-MAIN-2018-05 | en | refinedweb |
strcmp(3) BSD Library Functions Manual strcmp(3)
NAME
strcmp, strncmp -- compare strings
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <string.h> int strcmp(const char *s1, const char *s2); int strncmp(const char *s1, const char *s2, size_t n);
DESCRIPTION
The strcmp() and strncmp() functions lexicographically compare the null- terminated strings s1 and s2. The strncmp() function compares not more than n characters. Because strncmp() is designed for comparing strings rather than binary data, characters that appear after a `\0' character are not compared.
RETURN VALUES
bcmp(3), memcmp(3), strcasecmp(3), strcoll(3), strxfrm(3), wcscmp(3)
STANDARDS
The strcmp() and strncmp() functions conform to ISO/IEC 9899:1990 (``ISO C90''). BSD October 11, 2001 BSD
Mac OS X 10.8 - Generated Fri Aug 31 05:38:11 CDT 2012 | http://www.manpagez.com/man/3/strncmp/ | CC-MAIN-2018-05 | en | refinedweb |
Model Metadata and Validation Localization using Conventions nice, clean, and simple. To make it more useful, I’ll add validation and format how the properties are displayed.
public class Character { [Display(Name="First Name")] [Required] [StringLength(50)]] public string FirstName { get; set; } [Display(Name="Last Name")] [Required] [StringLength(50)]] public string LastName { get; set; } }
That’s busier, but not horrible. It sure is awful Anglo-centric though. I’ll fix that by making sure the property labels and error messages are pulled from a resource file.
public class Character { [Display(Name="Character_FirstName", ResourceType=typeof(ClassLib1.Resources))] [Required(ErrorMessageResourceType=typeof(ClassLib1.Resources), ErrorMessageResourceName="Character_FirstName_Required")] [StringLength(50, ErrorMessageResourceType = typeof(ClassLib1.Resources), ErrorMessageResourceName = "Character_FirstName_StringLength")] public string FirstName { get; set; } [Display(Name="Character_LastName", ResourceType=typeof(ClassLib1.Resources))] [Required(ErrorMessageResourceType=typeof(ClassLib1.Resources), ErrorMessageResourceName="Character_LastName_Required")] [StringLength(50, ErrorMessageResourceType = typeof(ClassLib1.Resources), ErrorMessageResourceName = "Character_LastName_StringLength")] public string LastName { get; set; } }
Wow! I don’t know about you, but I feel a little bit dirty typing all that in. Allow me a moment as I go wash up.
So what can I do to get rid of all that noise? Conventions to the rescue! By employing a simple set of conventions, I should be able to
look up error messages in resource files as well as property labels without having to specify all that information. In fact, by convention I shouldn’t even need to use the
DisplayAttribute.
I wrote a custom PROOF OF CONCEPT
ModelMetadataProvider that supports this approach. More specifically, mine is derived from the
DataAnnotationsModelMetadataProvider.
What Conventions Does It Apply?
The nice thing about this convention based model metadata provider is it allows you to specify as little or as much of the metadata you need and it fills in the rest.
Providing minimal metadata
For example, the following is a class with one simple property.
public class Character { [Required] [StringLength(50)] public string FirstName {get; set;} }
When displayed as a label, the custom metadata provider looks up the resource key, {ClassName}_{PropertyName},and uses the resource value as the label. For example, for the
FirstName property, the provider uses the key
Character_FirstName to look up the label in the resource file. I’ll cover how resource type is specified later.
If a value for that resource is not found, the code falls back to using the property name as the label, but splits it using Pascal/Camel casing as a guide. Therefore in this case, the label is “First Name”.
The error message for a validation attribute uses a resource key of {ClassName}_{PropertyName}_{AttributeName}. For example, to locate the error message for a
RequiredAttribute, the provider finds the resource key
Character_FirstName_Required.
Partial Metadata
There may be cases where you can provide some metadata, but not all of it. Ideally, the metadata that you don’t supply is inferred based on the conventions. Going back to previous example again:
public class Character { [Required(ErrorMessageResourceType=typeof(MyResources))] [StringLength(50, ErrorMessageResourceName="StringLength_Error")] [Display(Name="First Name")] public string FirstName {get; set;} }
Notice that the first attribute only specifies the error message resource type. In this case, the specified resource type will override
the default resource type. But the resource key is still inferred by convention (aka
Character_FirstName_Required).
In contrast, notice that the second
StringLengthAttribute, only specifies the resource name, and doesn’t specify a resource type. In
this case, the specified resource name is used to look up the error message using the default resource type. As you might expect, if the
ErrorMessage property is specified, that takes precedence over the conventions.
The
DisplayAttribute works slightly differently. By default, the
Name property is used as a resource key if a resource type is also
specified. If no resource type is specified, the
Name property is used directly. In the case of this convention based provider, an attempt to lookup a resource value using the
Name property as a resource always occurs before falling back to the default behavior.
Configuration
One detail I haven’t covered yet is what resource type is used to find these messages? Is that determined by convention?
Determining this by convention would be tricky so it’s the one bit of information that must be explicitly specified when configuring the provider itself. The following code in Global.asax.cs shows how to configure this.
ModelMetadataProviders.Current = new ConventionalModelMetadataProvider( requireConventionAttribute: false, defaultResourceType: typeof(MyResources.Resource) );
The model metadata provider’s constructor has two arguments used to configure it.
Some developers will want the conventions to apply to every model, while others will want to be explicit and have models opt in to this behavior. The first argument,
requireConventionAttribute, determines whether the conventions only apply to classes with the
MetadataConventionsAttribute applied.
The explicit folks will want to set this value to true so that only classes with the
MetadataConventionsAttribute applied to them (or
classes in an assembly where the attribute is applied to the assembly) will use these conventions.
The attribute can also be used to specify the resource type for resource strings.
The second property specifies the default resource type to use for resource strings. Note that this can be overridden by any attribute that specifies its own resource type.
Caveats, Issues, Potholes
This code is something I hacked together and there are a few issues to consider that I could not easily work around. First of all, the implementation has to mutate properties of attributes. In general, this is not a good thing to do because attributes tend to be global. If other code relies on the attributes having their original values, this could cause issues.
I think for most ASP.NET MVC applications (in fact most web applications period) this will not be an issue.
Another issue is that the conventions don’t work for implied validation. For example, if you have a property of a simple value type (such as int), the
DataAnnotationsValidatorProvider supplies a
RequiredValidator to validate the value. Since this validator didn’t
come from an attribute, it won’t use my convention based lookup for its error messages.
I thought about making this work, but it the hooks I need to do this without a large amount of code don’t appear to be there. I’d have to write my own validator provider (as far as I can tell) or register my own validator adapters in place of the default ones. I wasn’t up to the task just yet.
Try it out
- NuGet Package: To try it in your application, install it using NuGet:
Install-Package ModelMetadataExtensions
- Source Code:The source code is up on GitHub. | https://haacked.com/archive/2011/07/14/model-metadata-and-validation-localization-using-conventions.aspx/ | CC-MAIN-2018-05 | en | refinedweb |
The BaryonWidthGenerator class is designed to automatically calculate the running width for a given particle using information from the decayModes and the Baryon1MesonDecayer to construct the running width. More...
#include <BaryonWidthGenerator.h>
The BaryonWidthGenerator class is designed to automatically calculate the running width for a given particle using information from the decayModes and the Baryon1MesonDecayer to construct the running width.
It inherits from the GenericWidthGenerator.
Definition at line 26 of file BaryonWidthGenerator.h.
Make a simple clone of this object.
Reimplemented from Herwig::GenericWidthGenerator.
Definition at line 92 of file BaryonWidthGenerator.h.
Output the initialisation info for the database.
Reimplemented from Herwig::GenericWidthGenerator.
Initialize this object after the setup phase before saving and EventGenerator to disk.
Reimplemented from Herwig::GenericWidthGenerator.
Make a clone of this object, possibly modifying the cloned object to make it sane.
Reimplemented from Herwig::GenericWidthGenerator.
Definition at line 98 of file BaryonWidthGenerator
width for outgoing particles which can be off-shell.
Reimplemented from Herwig::GenericWidthGenerator.
Function used to read in object persistently.
Function used to write out object persistently.
Perform the set up for a mode, this is called by the base class.
Reimplemented from Herwig::GenericWidthGenerator.
The static object used to initialize the description of this class.
Indicates that this is a concrete class with persistent data.
Definition at line 120 of file BaryonWidthGenerator.h. | http://herwig.hepforge.org/doxygen/classHerwig_1_1BaryonWidthGenerator.html | CC-MAIN-2018-05 | en | refinedweb |
I'm working on a talents/skills tree system for my game made with Unity and I've got a class for my 45 talents buttons that looks like this:
public class SaveTalentClass { public int talId; public int talCurRank; public int talMaxRank; public SaveTalentClass(int id, int cRank, int mRank) { talId = id; talCurRank = cRank; talMaxRank = mRank; } }
I created 10 "Talent Lists" so the player can save different talents and I stored these 10 Lists in another List for easier access. So I've created a 2D list like that:
public List<List<SaveTalentClass>> containerList = new List<List<SaveTalentClass>>();
And added the 10 "talent Lists" into it but now I'm stuck trying to access/write in this 2D List.
I've tried a test like:
containerList[0][0].Add (new SaveTalentClass(0,1,2));
but got an error:
SaveTalentClass' does not contain a definition forAdd' and no extension method
Add' of typeSaveTalentClass' could be found (are you missing a using directive or an assembly reference?)
I'm pretty sure there's an easy fix for that but I couldn't figure out how to do it !
Thanks for helping me :) | http://www.howtobuildsoftware.com/index.php/how-do/c6k/c-list-nested-lists-c-2d-generic-list-closed | CC-MAIN-2018-05 | en | refinedweb |
I watned to learn about machine learning and I stumbled upon youtube siraj and his Udacity videos and wanted to try and pick up a few things.
His video in reference:
In his video, he had a txt file he imported and read, but when I tried to recreate the the txt file it couldnt be read in correctly. Instead, I tried to create a pandas dataframe with the same data and perform the linear regression/predict on it, but then I got the below error.
Found input variables with inconsistent numbers of samples: [1, 16] and something about passing 1d arrays and I need to reshape them.
Then when I tried to reshape them following this post: Sklearn : ValueError: Found input variables with inconsistent numbers of samples: [1, 6]
I get this error....
shapes (1,16) and (1,1) not aligned: 16 (dim 1) != 1 (dim 0)
This is my code down below. I know it's probably a syntax error, I'm just not familiar with this scklearn yet and would like some help.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model
#DF = pd.read_fwf('BrainBodyWeight.txt')
DF = pd.DataFrame()
DF['Brain'] = [3.385, .480, 1.350, 465.00,36.330, 27.660, 14.830, 1.040, 4.190, 0.425, 0.101, 0.920, 1.000, 0.005, 0.060, 3.500 ]
DF['Body'] = [44.500, 15.5, 8.1, 423, 119.5, 115, 98.2, 5.5,58, 6.40, 4, 5.7,6.6, .140,1, 10.8]
try:
x = DF['Brain']
y = DF['Body']
x = x.tolist()
y = y.tolist()
x = np.asarray(x)
y = np.asarray(y)
body_reg = linear_model.LinearRegression()
body_reg.fit(x.reshape(-1,1),y.reshape(-1,1))
plt.scatter(x,y)
plt.plot(x,body_reg.predict(x))
plt.show()
except Exception as e:
print(e)
From documentation LinearRegression.fit() requires an x array with
[n_samples,n_features] shape. So that's why you are reshaping your
x array before calling fit. Since if you don't you'll have an array with (16,) shape, which does not meet the required
[n_samples,n_features] shape, there are no
n_features given.
x = DF['Brain'] x = x.tolist() x = np.asarray(x) # 16 samples, None feature x.shape (16,) # 16 samples, 1 feature x.reshape(-1,1).shape (16,1)
The same requirement goes for the LinearRegression.predict function (and also for consistency), you just simply need to do the same reshaping when calling the predict function.
plt.plot(x,body_reg.predict(x.reshape(-1,1)))
Or alternatively you can just reshape the
x array before calling any functions.
And for feature reference, you can easily get the inner numpy array of values by just calling
DF['Brain'].values. You don't need to cast it to list -> numpy array. So you can just use this instead of all the conversion:
x = DF['Brain'].values.reshape(1,-1) y = DF['Body'].values.reshape(1,-1) body_reg = linear_model.LinearRegression() body_reg.fit(x, y) | https://codedump.io/share/nv7s1wR0ZJUR/1/error-using-sklearn-and-linear-regression-shapes-116-and-11-not-aligned-16-dim-1--1-dim-0 | CC-MAIN-2018-05 | en | refinedweb |
24 03 2017
Using Azure AD with ASP.NET Core
Azure Active Directory is cloud-based directory service that allows users to use their personal or corporate accounts to log-in to different applications. Local Active Directory can sync data to its cloud counterpart. Also external users are supported. This blog post shows how to make ASP.NET Core application use Azure AD and how to read data that Azure AD provides about user account.
NB! To use Azure AD valid Microsoft Azure subscription is needed. It also goes for Azure AD services used by Office 365.
Using wizard for Azure AD authentication
Simplest way is adding Azure AD support to application using Visual Studio. Visual Studio 2017 allows to add Azure AD authentication for new applications. Hopefully there will be soon also support for adding Azure AD to existing applications. As this is how-to style post I will go here with new default application.
Steps are simple:
- Create new ASP.NET Core application
- Choose template
- Click on “Change Authentication” button
- Select “Work or School Accounts”
- Choose Azure AD you want to use
- Click “Ok”
Visual Studio adds automatically new configuration file with all required configuration parametes, updates Startup class of web application and adds Account controller that coordinates authentication processes. If everything went well then we have application that supports Azure AD authentication and we can stop here.
Connecting application to Azure Active Directory manually
If we can’t use nice wizard for some reason then we can enable Azure AD support manually. This section is a short guide to how to do it. Even when Visual Studio wizards work well I suggest you to go through following sections of this blog post too as it gives you better idea how Azure AD support is actually implented.
Add the following NuGet packages to web application:
- Microsoft.AspNetCore.Authentication.Cookies
- Microsoft.AspNetCore.Authentication.OpenIdConnect
Add these settings to appsettings.json (this data can be found from Azure Portal).
"Authentication": {
"AzureAd": {
"AADInstance": "",
"CallbackPath": "/signin-oidc",
"ClientId": "your client id",
"Domain": "your-domain.com",
"TenantId": "your tenant id"
}
}
We need couple of changes to Startup class too. To ConfigureServices() method we add call to AddAuthentication() and to Configure() method the call to UseOpenIdConnectAuthentication.
public void ConfigureServices(IServiceCollection services)
{
// Add framework services.
services.AddMvc();
services.AddAuthentication(
SharedOptions => SharedOptions.SignInScheme =
CookieAuthenticationDefaults.AuthenticationScheme
);
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
loggerFactory.AddConsole(Configuration.GetSection("Logging"));
loggerFactory.AddDebug();
// ...
app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
ClientId = Configuration["Authentication:AzureAd:ClientId"],
Authority = Configuration["Authentication:AzureAd:AADInstance"] + Configuration["Authentication:AzureAd:TenantId"],
CallbackPath = Configuration["Authentication:AzureAd:CallbackPath"]
});
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
}
We need additional controller to coordinate authentication operations. The tooling – when it works – adds automatically new Account controller. Here is the code.
public class AccountController : Controller
{
();
}
}
It’s easy to see from code that there are two views we need to add. Here is the view for SignedOut action.
@{
ViewData["Title"] = "Sign Out";
}
<h2>@ViewData["Title"].</h2>
<p class="text-success">You have successfully signed out.</p>
This is the view for AccessDenied action.
@{
ViewData["Title"] = "Access Denied";
}
<header>
<h1 class="text-danger">Access Denied.</h1>
<p class="text-danger">You do not have access to this resource.</p>
</header>
Now we are done with coding and it’s time to try out Azure AD authentication.
Trying out Azure AD authentication
When we run our application we are redirected to identity provider login page. In my case it is Microsoft Account login page. Take a look at page title and notice my holy lazyness.
After successful authentication we are returned back to our application. Notice that user name is not e-mail address or GUID or some long sequence of letters and numbers. It has a special format: <authentication provider>#<e-mail address>.
This user name format is good for one thing – it is unique and we can also use it when multiple active directories are available for application users.
What data we get from Azure AD?
User name is not in very user-friendly format and the question is: what we can do to get something better there? Well, it’s a claim based authentication identity and where we should look is claims collection of user. As claims contain also sensitive information I don’t show here screenshot but I show the code that displays claims out.
To display claims that current claim identity has we have to send claims from controller to view. Here is the Index action of Home controller.
public IActionResult Index()
{
var claims = ((ClaimsIdentity)User.Identity).Claims;
return View(claims);
}
NB! This code works only if we have identities of type ClaimsIdentity. With other authentication mechanisms we may have other identity types. To support different identitites in same code base we need more common way to detect user attributes.
To show claims on front page we use the following table.
@model IEnumerable<System.Security.Claims.Claim>
@{
ViewData["Title"] = "Home Page";
}
<div class="row">
<div class="col-md-12">
<table>
<thead>
<tr>
<th>Claim</th>
<th>Value</th>
</tr>
</thead>
<tbody>
@foreach(var claim in Model)
{
<tr>
<td>@claim.Type</td>
<td>@claim.Value</td>
</tr>
}
</tbody>
</table>
</div>
</div>
Run application, log in and take a look at table of claims. There is all basic information about user like first name, last name and e-mail address.
Displaying nice user name
User identificator with provider wasn’t the best choice of names we can show to user when he or she is logged in to site. There is _LoginPartial view under shared views folder and this is how this view looks like.
@using System.Security.Principal
@if (User.Identity.IsAuthenticated)
{
<ul class="nav navbar-nav navbar-right">
<li class="navbar-text">Hello @User.Identity>
}
We add some additional code here to make this view display full name of user. As we saw from claims table then there is claim called name. This is the claim we will use.
@using System.Security.Principal
@using System.Security.Claims
@{
var claims = ((ClaimsIdentity)User.Identity).Claims;
var name = claims.FirstOrDefault(c => c.Type == "name")?.Value;
}
@if (User.Identity.IsAuthenticated)
{
<ul class="nav navbar-nav navbar-right">
<li class="navbar-text">Hello >
}
NB! Instead of writing code to partial view we should use view component and move this mark-up to some view of view component. Here I just wanted to show how to get the full name of current user.
Wrapping up
Adding Azure AD support to ASP.NET Core applications is easy. It can be done using Visual Studio but it also can be done manually. We needed few additional configuration parameters, some lines of code and small change to login view. Although default user name used by ASP.NET Core internally doesn’t look nice we were able to get user e-mail and full name from claims collection we got back from Azure AD.
Out variables in C# 7.0 Solving bower and bundling errors in imported ASP.NET Core projects
Hi! How do you configure the access denied path so when you have a 403 status code you hit a custom action in a controller. Thanks! I can´t do it using Azure Auth.
Hi!
Please describe your scenario more.
Sorry for taking so long. I’m using a custom Authorization Policy Provider.
In the ConfigureServices methos I’m adding the my custom provider and handler:
services.AddTransient();
services.AddSingleton();
In the Configure Method I set the Cookie and Open IdConnect Options:
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AutomaticAuthenticate = true,
SlidingExpiration = false,
CookieName = “idfr_sga”,
ExpireTimeSpan = TimeSpan.FromHours(12),
AccessDeniedPath = new PathString(“/Site/NoAutorizado”),
Events = new CookieAuthenticationEvents
{
OnSigningIn = ctx =>
{
ctx.Principal = TransformClaims(ctx, config);
return Task.FromResult(null);
}
}
});
app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
AutomaticChallenge = true,
ClientId = config[“Authentication:AzureAd:ClientId”],
Authority = config[“Authentication:AzureAd:AADInstance”] + config[“Authentication:AzureAd:TenantId”],
PostLogoutRedirectUri = config[“Authentication:AzureAd:PostLogoutRedirectUri”],
UseTokenLifetime = false,
});
The thing is, when i get a 403 the app doesn’t redirect the user to the url “/Site/NoAutorizado” as i have configured in the Cookie Authentication options. Instead it sends the user to the login form, but the user is already logged in so the login form sends the user to the previous page, but that page gives a 403 so it goes to the login form, and then to the page with 403 and again and again and i get an infinite loop. I really can´t figure out what i´m doing wrong. Thanks!!
Do you know of an example of testing Controllers behind Azure AD?
To unit test controllers you have to get all real authentication out of your way. Otherwise your tests will be polluted with other aspects and integrations that doesn’t relate directly to controller logic.
For ASP.NET Core you should use different start-up and mock users. Perhaps the code here gives you some hint how to get started:.
A few details are not clear for me:
1) There doesn’t appear to be an endpoint in any of your controllers corresponding to signin-oidc, should there be? if not why not?
2) I don’t see any usage of client key/secret that surprises me. If no key is needed for the app? What circumstances do require a key and how/where would it be specified in the code?
3) Is the configuration value for AADInstance the same for everybody or is it specific to your Azure account?
4) What is the “App ID URI” value in Azure Portal, the one that looks like… – it has nothing to do with this?
Any response is appreciated.
Thanks in advance!
In Configure() method of start-up class OpenIdConnect is configured and there Azure AD config values are used. siginin-oidc end-point is created by OpenID component when application starts. AADInstance – in my case it is same for all applications as actually tenant ID and client ID help service to detect the correct Azure AD instance. App ID URI (fix me) is not needed if we don’t have API or Bearer token authentication.
Gunnar:
I’m receiving a “Untrusted Certificate” error when I attempt to authenticate with Azure AD. The project does work properly in that user is authenticated, but local website does display certificate errors. Do I need to alter the authentication methods to resolve this issue? Any insights or information you could provide would be appreciated.
What is the certificate you are using in your local machine? If you are not tricking with real certificate and hosts file then you are probably using development web server certificate that is automatically generated by IIS. If it’s about development server certificate that is issued to localhost then you can ignore the message or add it to trusted certificates store.
Gunnar:
Thanks, I was able to resolve the issue. Just added the cert to the trusted cert store. I recently starting reading about the MS Graph product, it appears that .net Core does not support Graph, but I could call the APIs via client code. Do you have any resources/documentation that you could recommend for calling the APIs via client code?
How can you use the signed in user to authenticate against a web api that also uses Azure AD with Oauth2 ?
My goal is that you sign into the web app and then the web app can use your authenticated context to make authenticated calls targeting the api.
Hi, Chris!
For this you should use Bearer Token authentication. Both web applications – the one that users use through browser and web api – must be registered in same Azure AD. To get things done on web application side please check my blog post about bearer token authentication here:
Hi Gunnar
is there any way to get all users list from active directory.
Any suggestions/Directions are appreciated.
Hi Jazz,
It should be possible. If I’m correct then these queries to Azure AD should be done using application level requests (then current user must not necessarily have high permission to Azure AD) and application must have enough permissions assigned to it in Azure AD.
You can take a look at my GitHub sample here: And this is the class where you can add similar methods for app level requests: | http://gunnarpeipman.com/2017/03/aspnet-core-azure-ad/ | CC-MAIN-2018-05 | en | refinedweb |
Spring Boot Support in Spring Tool Suite 3.6.4
Spring Boot STS Tutorial
Spring Tool Suite 3.6.4 was just released last week. This blog post is a tutorial demonstrating some of the new features STS provides to create and work with Spring Boot applications.
In this tutorial you’ll learn how to:
- create a Simple Spring Boot Application with STS
- launch and debug your boot application from STS
- use the new STS Properties editor to edit configuration properties.
- use @ConfigurationProperties in your code to get the same editor support for your own configuration properties.
Creating a Boot App
We use the “New Spring Starter” wizard to create a basic spring boot app.
Spring boot provides so called ‘starters’. A starter is set of classpath dependencies, which, together with Spring Boot auto configuration lets you get started with an app without needing to do any configuration. We pick the ‘web’ starter as we’ll build a simple ‘Hello’ rest service.
The wizard is a GUI frontend that, under the hood, uses the web service at start.spring.io to generate some basic scaffolding. You could use the web service directly yourself, download the zip it generates, unpack it, import it etc. Using the STS wizard does all of this at the click of a button and ensures the project is configured correctly so you can immediately start coding.
After you click the finish button, your workspace will look something like this:
The
HelloBootApplication Java-main class generated by start.spring.io is the only code in our app at the moment. Thanks to the ‘magic’ of spring boot, and because we added the ‘web’ starter to our dependencies, this tiny piece of code is already a fully functional web server! It just doesn’t have any real content yet. Before adding some content, let’s learn how to run the app, and verify it actually runs in the process.
Running a Boot App in STS
Spring boot apps created by the wizard come in two flavors ‘jar’ or ‘war’. The Starter wizard let’s you choose between them in its ‘packaging’ option. A great feature of spring-boot is that you can easily create standalone ‘jar’ packaged projects that contain a fully functional embedded web server. All you need to do to run your app, is run its Java Main type, just like you do any other plain Java application. This is a huge advantage as you don’t have to mess around with setting up local or remote Tomcat servers, war-packaging and deploying. If you really want to do things ‘the hard way’ you can still choose ‘war’ packaging. However there’s really no need to do so because:
- you can convert your ‘jar’ app to a ‘war’ app at any time
- the Cloud Foundry platform directly supports deploying standalone Java apps.
Note: We won’t cover how to deploy apps to Cloud Foundry here, but in this article you can learm more about using Cloud Foundry Eclipse to do that directly from your IDE.
Now, if you understood what I just said, then you probably realize you don’t actually need any ‘special’ tooling from STS to run the app locally. Just click on the Java Main type and select “Run As >> Java Application” and voila. Also all of your standard Eclipse Java debugging tools will ‘just work’. However, STS provides a dedicated launcher that does basically the same thing but adds a few useful bells and whistles. So let’s use that instead.
Your app should start and you should see some output in the console view:
You can open your app running locally at. All you’ll get is a
404 error page, but that is exactly as expected since we haven’t yet added any real content to our app.
Now, what about the bells and whistles I promised? “Run As >> Boot App” is pretty much a plain Java launcher but provides some extra options to customize the launch configurations it creates. To see those options we need to open the “Launch Configuration Editor”, accessible from the
or
toolbar button:
If you’ve used the Java Launch Configuration Editor in Eclipse, this should look familiar. For a Boot Launch Configuration, the ‘Main’ tab is a little different and has some extra stuff. I won’t discuss all of the extras, you can find out more in the STS 3.6.4 release notes. So let’s just do something simple, for example, override the default http port
8080 to something else, like
8888. You can probably guess that this can be done by setting a system property. In the ‘pure’ Java launcher you can set such properties via command-line arguments. But what, you might wonder, is the name of that property exactly “spring.port”, “http.port”, “spring.server.port”? Fortunately, the launch configuration editor helps. The Override Properties table provides some basic content assist. You just type ‘port’ and it makes a few suggestions:
server.port add the value
8888 in the right column and click “Run”.
If you followed the steps exactly up to this point, your launch probably terminates immediately with an exception:
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 31196; nested exception is: java.net.BindException: Address already in use
This may be a bit of a surprise, since we just changed our port didn’t we? Actually the port conflict here is not from the http port but a JMX port used to enable “Live Bean Graph Support” (I won’t discuss this feature in this Blog post, see STS 3.6.4 release notes).
There are a few things we could do to avoid the error. We could open the editor again and change the JMX port as well, or we could disable ‘Live Bean Support’. But probably we don’t really want to run more than one copy of our app in this scenario. So we should just stop the already running instance before launching a new one. As this is such a common thing to do, STS provides a
Toolbar Button for just this purpose. Click the Button, the running app is stopped and restarted with the changes you just made to the Launch Configuration now taking effect. If it worked you should now have a
404 error page at instead of
8080. (Note: the Relaunch button won’t work if you haven’t launched anything yet because it works from your current session’s launch history. However if you’ve launched an app at least once, it is okay to ‘Relaunch’ an app that is already terminated)
Editing Properties Files
Overriding default property values from the Launch Configuration editor is convenient for a ‘quick override’, but it probably isn’t a great idea to rely on this to configure many properties and manage more complex configurations for the longer term. For this it is better to manage properties in a properties file which you can commit to SCM. The starter Wizard already conveniently created an empty
application.properties for us.
To help you edit
application.properties STS 3.6.4 provides a brand new Spring Properties Editor. The editor provides nice content assist and error checking:
The above screen shot shows a bit of ‘messing around’ with the content assist and error checking. The only property shown that’s really meaningful for our very simple ‘error page App’ right now is
server.port. Try changing the port in the properties file and it should be picked up automatically when you run the app again. However be mindful that properties overridden in the Launch Configuration take priority over
application.properties. So you’ll have to uncheck or delete the
server.port property in the Launch Configuration to see the effect.
Making Our App More Interesting
Let’s make our app more interesting. Here’s what we’ll do:
- Create a ‘Hello’ rest service that returns a ‘greeting’ message.
- Make the greeting message configurable via Spring properties.
- Set up the project so user-defined properties get nice editor support.
Create a Simple Hello Rest Service
To create the rest service you could follow this guide. Hover we’re doing something even simpler and more direct.
Go ahead and create a controller class with this code:
package demo; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; @RestController public class HelloController { @RequestMapping("/hello") public String hello(@RequestParam String name) { return "Hello "+name; } }
Try this out by Relaunching (
) your app. The URL should return a text message “Hello Kris”.
Making the Greeting Configurable
This is actually quite easy to do, and you might be familiar with Spring’s @Value annotation. However, using
@Value you won’t be able get nice content assist. Spring Properties Editor won’t be aware of properties you define that way. To understand why, it is useful to understand a little bit about how the Spring Properties Editor gets its information about the known properties.
Some of the Spring Boot Jars starting from version 1.2.x contain special JSON meta-data files that the editor looks for on your project’s classpath and parses. These files contain information about the known configuration properties. If you dig for a little, you can find these files from STS. For example, open “spring-boot-autoconfigure-1.2.2.RELEASE.jar” (under “Maven Dependencies”) and browse to “META-INF/spring-configuration-metadata.json”. You’ll find properties like
server.port being documented there.
For our own user-defined properties to be picked-up by the editor we have to create this meta data. Fortunately this can be automated easily provided you define your properties using Spring Boot @ConfigurationProperties. So define a class like this:
package demo; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; @Component @ConfigurationProperties("hello") public class HelloProperties { /** * Greeting message returned by the Hello Rest service. */ private String greeting = "Welcome "; public String getGreeting() { return greeting; } public void setGreeting(String greeting) { this.greeting = greeting; } }
The
@ConfigurationProperties("hello") tells Boot to take configuration properties starting with
hello. and try to inject them into corresponding Bean properties of the
HelloProperties Bean. The
@Component annotation marks this class so that Spring Boot will pick up on it scanning the classpath and turn it into a Bean. Thus, if a configuration file (or another property source) contains a property
hello.greeting then the value of that property will be injected into
setGreeting of our
HelloProperties Bean.
Now, to actually use this property all we need is a reference to the bean. For example to customize the message returned by the rest service, we can add a
@Autowired field to the
HelloController and call its
getGreeting method:
@RestController public class HelloController { @Autowired HelloProperties props; @RequestMapping("/hello") public String hello(@RequestParam String name) { return props.getGreeting()+name; } }
Relaunch your app again and try to access. You should get the default “Welcome yourname” message.
Now go ahead and try editing
application.properties and change the greeting to something else. Allthough we already have everything in place to correctly define the property at run-time, you’ll notice that the editor is still unaware of our newly minted property:
What’s still missing to make the editor aware is the
spring-configuration-metadata.json file. This file is created at build-time by the
spring-boot-configuration-processor which is a Java Annotation Processor. We have to add this processor to our project and make sure it is executed during project builds.
Add this to the
pom.xml:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-configuration-processor</artifactId> </dependency>
Then perform a “Maven >> Update Project” to trigger a project configuration update. A Maven project configurator provided by STS will configure JDT APT and activate the processor for Eclipse builds. The warning will immediately disappear from the editor. You’ll also get proper Hover Info:
Now that the annotation processor has been activated, any future changes to your
HelloProperties class will trigger an automatic update of the json metadata. You can try it out by adding some extra properties, or renaming your
greeting property to something else. Warnings will appear / disappear as appropriate. If you are curious where your metadata file is, you can find it in
target/classes/META-INF. The file is there, even though Eclipse does its best to hide it from you. Eclipse does this with all files in a project’s output folder. You can get around this though by using the
Navigator view which doesn’t filter files as much and shows you a more direct view on the actual resources in your workspace. Open this view via “Window >> Show View >> Other >> Navigator”:
Note: We know that the manual step of adding the processor seems like an unnecessary complication. We have plans to automate this further in the future.
The End
I hope you enjoyed this Tutorial. Comments and questions are welcome. In another post, coming soon, I will show you more adanced uses of
@ConfigurationProperties and how the STS properties editor supports that.
Links
- Spring Tool Suite
- STS 3.6.4 release notes
- Cloud Foundry Eclipse
- Service Management Through Cloud Foundry Eclipse
- Java Buildpack for Cloud Foundry
- Spring Boot
- Getting Started Guide: Converting Boot project from Jar to War
- start.spring.io A Boot App to Generate ‘Getting Started’ Boot Apps
- Getting Started Guide: Building A Rest Service
- @ConfigurationProperties JavaDoc | http://spring.io/blog/2015/03/18/spring-boot-support-in-spring-tool-suite-3-6-4 | CC-MAIN-2018-05 | en | refinedweb |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Hi, I have attached an updated patch that addresses all the comments raised. On Fri, Apr 12, 2013 at 1:58 AM, Jakub Jelinek <jakub@redhat.com> wrote: > On Thu, Apr 11, 2013 at 12:05:41PM -0700, Sriraman Tallam wrote: >> I have attached a patch that fixes this. I have added an option >> "-mgenerate-builtins" that will do two things. It will define a macro >> "__ALL_ISA__" which will expose the *intrin.h functions. It will also >> expose all the target specific builtins. -mgenerate-builtins will not >> affect code generation. > > 1) this shouldn't be an option, either it can be made to work reliably, > then it should be done always, or it can't, then it shouldn't be done Ok, it is on by default now. There is a way to turn it off, with -mno-generate-builtins. > 2) have you verified that if you always generate all builtins, that the > builtins not supported by the ISA selected from the command line are > created with the right vector modes? This issue does not arise. When the target builtin is expanded, it is checked if the ISA support is there, either via function specific target opts or global target opts. If not, an error is raised. Test case added for this, please see intrinsic_4.c in patch. > 3) the *intrin.h headers in the case where the guarding macro isn't defined > should be surrounded by something like > #ifndef __FMA4__ > #pragma GCC push options > #pragma GCC target("fma4") > #endif > ... > #ifndef __FMA4__ > #pragma GCC pop options > #endif > so that everything that is in the headers is compiled with the ISA > in question I do not think this should be done because it will break the inlining ability of the header function and cause issues if the caller does not specify the required ISA. The fact that the header functions are marked extern __inline, with gnu_inline guarantees that a body will not be generated and they will be inlined. If the caller does not have the required ISA, appropriate errors will be raised. Test cases added, see intrinsics_1.c, intrinsics_2.c > 4) what happens if you use the various vector types typedefed in the > *intrin.h headers in code that doesn't support those ISAs? As TYPE_MODE > for VECTOR_TYPE is a function call, perhaps it will just be handled as > generic BLKmode vectors, which is desirable I think I checked some tests here. With -mno-sse for instance, vector types are not permitted in function arguments and return values and gcc raises a warning/error in each case. With return values, gcc always gives an error if a SSE register is required in a return value. I even fixed this message to not do it for functions marked as extern inline, with "gnu_inline" keyword as a body for them will not be generated. > 5) what happens if you use a target builtin in a function not supporting > the corresponding ISA, do you get proper error explaining what you are > doing wrong? Yes, please sse intrinsic_4.c test in patch. > 6) what happens if you use some intrinsics in a function not supporting > the corresponding ISA? Dunno if the inliner chooses not to inline it > and error out because it is always_inline, or what exactly will happen > then Same deal here. The intrinsic function will, guaranteed, to be inlined into the caller which will be a corresponding builtin call. That builtin call will trigger an error if the ISA is not supported. Thanks Sri > > For all this you certainly need testcases. > > Jakub
Attachment:
mmintrin_patch.txt
Description: Text document | http://gcc.gnu.org/ml/gcc-patches/2013-04/msg00999.html | CC-MAIN-2015-48 | en | refinedweb |
Catalyst::Plugin::DateTime - DateTime plugin for Catalyst.
# as a default.
Alias to datetime.
This module's intention is to make the wonders of DateTime easily accesible within a Catalyst application via the Catalyst::Plugin interface.
It adds the methods
datetime and
dt to the
Catalyst namespace.
James Kiser james.kiser@gmail.com
Copyright (c) 2006 the aforementioned author(s). All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~jkiser/Catalyst-Plugin-DateTime-0.03/lib/Catalyst/Plugin/DateTime.pm | CC-MAIN-2015-48 | en | refinedweb |
Just a quick one. Just noticed that when trying to pass an empty string literal "" to a function that it's converted to null.
I want to use that argument as a context to execute a callback on. e.g. arg = fn.call(arg, xxx);
The only way around it was to pass a new String('') object instead, however I see on JSHint that you get reprimanded for using the string constructor.
Any other way around this?
Cheers
RLM
Something strange is going on then, for with the following code I see in my console different results, depending on if it's an empty string or null
function show(obj) {
console.log(obj);
}
> show("");
<-undefined
> show(null);
null
<-undefined
What in your situation causes it to be changed from an empty string to null?
You're right. It seemed strange. A flaw elsewhere in my code perhaps.
function check(x){
console.log(x);
console.log({}.toString.call(x));
};
check("");
-->(an empty string)
-->[object String]
It's in my css parser a call to parentLookUp. Will give it another look.
That would be this one here:
parentLookUp = function (child, fn, obj) {
if (obj) {
while (child = child.parent) {
obj = fn.call(obj, child);
}
return obj;
} else {
while (child = child.parent) fn(child);
}
},
What types of expected arguments are you wanting to use with that., Can you provide some examples that both include and exclude the empty string?
Paul it needs work.
I wanted the function to have some flexibility, so that if down the line if I want to pass an array to it I can. Needs thinking out.
Just knocked up a simplified example of usage
function func(fn, obj){
return fn.apply(obj);
}
function prefix () { return this.replace(/^/, 'Starts here...'); }
console.log(func(prefix, ""));
In parentLookUp 'Start here' would be a value supplied by a child property.
An example of the code calling the function is.
parentLookUp(cssRule, function (child) {
return (child.selector) ? this.replace(/^/, child.selector + ' ') : this;
}, "" );
Alternate usage is just a simple lookup so that a function can gain access to those properties. No object supplied. As in the parseVars function I use to replace variables with their values.
Spotted the flaw. 'if (obj)'
Will never be true for on an empty string
However with the new String wrapper.
var x = "";
if(x) console.log('false');
>
var x = new String('');
if(x) console.log('true');
> true
Need to sort out my type checking.
Something like this.
if (obj !== undefined || obj !== null)
Okay, so what are you wanting to check instead? Just that if something is passed as the third parameter?
if (obj !== undefined) {
...
}
That will let defined objects be used, even if they are an empty string too.
if (obj !== undefined)
Yes that's it. Sometimes can't see the wood for the trees. | https://www.sitepoint.com/community/t/passing-an-empty-string-literal-to-a-function/43599 | CC-MAIN-2015-48 | en | refinedweb |
Type::GetDefaultMembers Method
Searches for the members defined for the current Type whose DefaultMemberAttribute is set.
Assembly: mscorlib (in mscorlib.dll)
Return ValueType: array<System.Reflection::MemberInfo>
An array of MemberInfo objects representing all default members of the current Type.
-or-
An empty array of type MemberInfo, if the current Type does not have default members.
Implements_Type::GetDefaultMembers() namespace System; using namespace System::Reflection; using namespace System::IO; [DefaultMemberAttribute("Age")] public ref class MyClass { public: void Name( String^ s ){} property int Age { int get() { return 20; } } }; int main() { try { Type^ myType = MyClass::typeid; array<MemberInfo^>^memberInfoArray = myType->GetDefaultMembers(); if ( memberInfoArray->Length > 0 ) { System::Collections::IEnumerator^ myEnum = memberInfoArray->GetEnumerator(); while ( myEnum->MoveNext() ) { MemberInfo^ memberInfoObj = safe_cast<MemberInfo^>(myEnum->Current); Console::WriteLine( "The default member name is: {0}", memberInfoObj ); } } else { Console::WriteLine( "No default members are available." ); } } catch ( InvalidOperationException^ e ) { Console::WriteLine( "InvalidOperationException: {0}", e->Message ); } catch ( IOException^ e ) { Console::WriteLine( "IOException: {0}", e->Message ); } catch ( Exception^ e ) { Console::WriteLine( "Exception: . | https://msdn.microsoft.com/en-us/library/system.type.getdefaultmembers(d=printer,v=vs.100).aspx?cs-save-lang=1&cs-lang=cpp | CC-MAIN-2015-48 | en | refinedweb |
You can subscribe to this list here.
Showing
4
results of 4
Here's some code the I think CLISP incorrectly compiles:
(cl:defun split-assignment (args)
(loop for d in args by #'cddr as b on args by #'cddr
collect d into vars
when (not (null (cdr b)))
collect (cadr b) into binds
finally (return (values vars b binds))))
(split-assignment '(a 1 b 2 c 3)) should return
(A B C)
(C 3)
(1 2 3)
On compiling this, CLISP complains that B is an unknown variable and
assumes it's special. It works fine on CMUCL and LWW.
Note: this is not my code, and I'm going to replace this with an
equivalent do loop, but I thought I'd report this anyway.
Ray
Update of /cvsroot/clisp/clisp/src
In directory cvs1.i.sourceforge.net:/tmp/cvs-serv11446
Modified Files:
Tag: release-2000-Feb
ChangeLog arimips.d
Log Message:
Accomodate n32 ABI parameter passing conventions.
Update of /cvsroot/clisp/clisp/libiconv/src
In directory cvs1.i.sourceforge.net:/tmp/cvs-serv11012/src
Modified Files:
Tag: release-2000-Feb
utf7.h
Log Message:
Fix bug in the UTF-7 converter's handling of tab characters.
Update of /cvsroot/clisp/clisp/libiconv
In directory cvs1.i.sourceforge.net:/tmp/cvs-serv11012
Modified Files:
Tag: release-2000-Feb
ChangeLog
Log Message:
Fix bug in the UTF-7 converter's handling of tab characters. | http://sourceforge.net/p/clisp/mailman/clisp-devel/?viewmonth=200002&viewday=22 | CC-MAIN-2015-48 | en | refinedweb |
libs/type_erasure/example/basic.cpp
// Boost.TypeErasure library // // Copyright 2011 Steven Watanabe // // Distributed under the Boost Software License Version 1.0. (See // accompanying file LICENSE_1_0.txt or copy at //) // // $Id$ #include <boost/type_erasure/any.hpp> #include <boost/type_erasure/any_cast.hpp> #include <boost/type_erasure/builtin.hpp> #include <boost/type_erasure/operators.hpp> #include <boost/type_erasure/member.hpp> #include <boost/type_erasure/free.hpp> #include <boost/mpl/vector.hpp> #include <iostream> #include <vector> namespace mpl = boost::mpl; using namespace boost::type_erasure; void basic1() { //[basic1 /*` The main class in the library is __any. An __any can store objects that meet whatever requirements we specify. These requirements are passed to __any as an MPL sequence. [note The MPL sequence combines multiple concepts. In the rare case when we only want a single concept, it doesn't need to be wrapped in an MPL sequence.] */ any<mpl::vector<copy_constructible<>, typeid_<>, relaxed> > x(10); int i = any_cast<int>(x); // i == 10 /*` __copy_constructible is a builtin concept that allows us to copy and destroy the object. __typeid_ provides run-time type information so that we can use __any_cast. __relaxed enables various useful defaults. Without __relaxed, __any supports /exactly/ what you specify and nothing else. In particular, it allows default construction and assignment of __any. */ //] } void basic2() { //[basic2 /*` Now, this example doesn't do very much. `x` is approximately equivalent to a [@boost:/libs/any/index.html boost::any]. We can make it more interesting by adding some operators, such as `operator++` and `operator<<`. */ any< mpl::vector< copy_constructible<>, typeid_<>, incrementable<>, ostreamable<> > > x(10); ++x; std::cout << x << std::endl; // prints 11 //] } //[basic3 /*` The library provides concepts for most C++ operators, but this obviously won't cover all use cases; we often need to define our own requirements. Let's take the `push_back` member, defined by several STL containers. */ BOOST_TYPE_ERASURE_MEMBER((has_push_back), push_back, 1) void append_many(any<has_push_back<void(int)>, _self&> container) { for(int i = 0; i < 10; ++i) container.push_back(i); } /*` We use the macro __BOOST_TYPE_ERASURE_MEMBER to define a concept called `has_push_back`. The second parameter is the name of the member function and the last macro parameter indicates the number of arguments which is `1` since `push_back` is unary. When we use `has_push_back`, we have to tell it the signature of the function, `void(int)`. This means that the type we store in the any has to have a member that looks like: `` void push_back(int); `` Thus, we could call `append_many` with `std::vector<int>`, `std::list<int>`, or `std::vector<long>` (because `int` is convertible to `long`), but not `std::list<std::string>` or `std::set<int>`. Also, note that `append_many` has to operate directly on its argument. It cannot make a copy. To handle this we use `_self&` as the second argument of __any. `_self` is a __placeholder. By using `_self&`, we indicate that the __any stores a reference to an external object instead of allocating its own object. */ /*` There's actually another __placeholder here. The second parameter of `has_push_back` defaults to `_self`. If we wanted to define a const member function, we would have to change it to `const _self`, as shown below. */ BOOST_TYPE_ERASURE_MEMBER((has_empty), empty, 0) bool is_empty(any<has_empty<bool(), const _self>, const _self&> x) { return x.empty(); } /*` For free functions, we can use the macro __BOOST_TYPE_ERASURE_FREE. */ BOOST_TYPE_ERASURE_FREE((has_getline), getline, 2) std::vector<std::string> read_lines(any<has_getline<bool(_self&, std::string&)>, _self&> stream) { std::vector<std::string> result; std::string tmp; while(getline(stream, tmp)) result.push_back(tmp); return result; } /*` The use of `has_getline` is very similar to `has_push_back` above. The difference is that the placeholder `_self` is passed in the function signature instead of as a separate argument. The __placeholder doesn't have to be the first argument. We could just as easily make it the second argument. */ void read_line(any<has_getline<bool(std::istream&, _self&)>, _self&> str) { getline(std::cin, str); } //] //[basic //` (For the source of the examples in this section see //` [@boost:/libs/type_erasure/example/basic.cpp basic.cpp]) //` [basic1] //` [basic2] //` [basic3] //] | http://www.boost.org/doc/libs/1_59_0/libs/type_erasure/example/basic.cpp | CC-MAIN-2015-48 | en | refinedweb |
»
Ant, Maven and Other Build Tools
Author
merge multiple jars into one jar?
Geoffrey Falk
Ranch Hand
Joined: Aug 17, 2001
Posts: 171
1
posted
Aug 25, 2005 10:54:00
0
I need to merge a couple jar files into one jar using
Ant
.
It is easy to do this for a fixed list of jarfiles, using <zipfileset>. The problem is, I want to do this in a generic way, and avoid hard-coding the names of the jar files.
Here is a section of my build file:
<property name="dependencies" value="dependency-A.jar dependency-B.jar dependency-C.jar" /> <classpath id="build.classpath"> <fileset dir="lib" includes="${dependencies}" /> </classpath> <jar jarfile="myapp.jar" manifest="src/MANIFEST.MF"> <fileset dir="classes" includes="**/*.class" /> <zipfileset src="lib/dependency-A.jar" excludes="META-INF/*" /> <zipfileset src="lib/dependency-B.jar" excludes="META-INF/*" /> <zipfileset src="lib/dependency-C.jar" excludes="META-INF/*" /> </jar>
I have to specify the dependencies twice: Once for my build classpath, and once in the jar task. It would be nice to use the ${dependencies} property for both. Unfortunately I don't see how to make the <zipfileset> do that.
Thanks
Geoffrey
Sun Certified Programmer for the Java 2 Platform
Andy Hahn
Ranch Hand
Joined: Aug 31, 2004
Posts: 225
posted
Aug 25, 2005 19:01:00
0
I wouldn't recommend doing this. However to do this, you will have to either hard code the jar names in the mainfest file OR find a neat way to read the jar file names (using java.io) and build up the manifest file.
Geoffrey Falk
Ranch Hand
Joined: Aug 17, 2001
Posts: 171
1
posted
Aug 28, 2005 08:16:00
0
Why don't you recommend doing this? The obvious purpose is just to make a single application file (executable jar) that can run with "java -jar". My program has dependencies on some other jars (all open source). Merging everything into one jar seems to be the simplest way. As long as I obey the open source license, there aren't any legal issues.
I did not put the classpath in the Manifest file. I am not doing anything fancy with classloaders. The only thing I specified in the Manifest is the Main-Class.
Anyways, since I couldn't think of a way to do this using <zipfileset>, now I am expanding all the jars into a temp directory and then jarring it up again. This is a bit slower but it works.
Thanks
Geoffrey
Glenn Seg
Greenhorn
Joined: Sep 20, 2005
Posts: 2
posted
Sep 20, 2005 14:23:00
0
I see that in your orignal approach (naming each file) and in the new approach, you are omitting the data in the META-INF directory. Actually, in the second way, the contents of the META-INF directory gets overwritten for each jar file that is expanded.
Are you planning to merge the manifest files in a future revision of the build file? In addition to having possible legal problems (dependent on the jar files you are using), you are losing data that may or may not be important. Do you think that not having the manifest information will cause problems?
The reason I ask is that I am doing the same thing and I can't afford for this to cause some unforseen problem.
Lewin Chan
Ranch Hand
Joined: Oct 10, 2001
Posts: 214
posted
Sep 21, 2005 02:44:00
0
This achieves what you specified as your aim. As to how you can do it in ant generically...
Well, you could wrap the ant build in a shell script and do.
list=`ls -1A` for dir in $list do jarfiles=`echo ${dir}` done $ANT_HOME/bin/ant -Djar.files=${jarfiles} -f ${BUILDFILE}
And then do something with the foreach task from ant-contrib. Or keeping it all anty you just write your own task to do it. For instance :-
/** A custom unjar task that performs the following tasks. * <ul> * <li>Un-Jar a number of jars, based on some fileset and filter</li> * <li>Read any MANIFEST.MF file that is present from the unjarred file</li> * <li>Re-write the manifest for all the jars that have been unjarred</li> * </ul> * @author lchan */ public class Unjar extends Task { private String mainClass; private List filesets = new LinkedList(); private String filterString; private Manifest manifest; private File manifestOutput; private File destDir; /** @see Object#Object() * * */ public Unjar() { filterString = "*"; } /** * @see org.apache.tools.ant.Task#init() */ public void init() throws BuildException { ; } /** Add a fileset to be processed by this task. * * @param fs the fileset to add. */ public void addFileset(FileSet fs) { filesets.add(fs); } /** * @see org.apache.tools.ant.Task#execute() */ public void execute() throws BuildException { try { GlobFilenameFilter fileFilter = new GlobFilenameFilter(filterString); ArrayList fileList = new ArrayList(); manifest = new Manifest(); File parent = null; for (Iterator it = filesets.iterator(); it.hasNext() ; ) { FileSet fs = (FileSet) it.next(); DirectoryScanner ds = fs.getDirectoryScanner(getProject()); parent = ds.getBasedir(); String[] files = ds.getIncludedFiles(); for (int i = 0; i < files.length; i++) { log(files[i], Project.MSG_INFO); if (fileFilter.accept(new File(files[i]))) { fileList.add(files[i]); } } } log(fileList.toString(), Project.MSG_INFO); for (Iterator it = fileList.iterator(); it.hasNext() ; ) { String filename = it.next().toString(); performUnjar(new File(parent, filename)); readCurrentManifest(); } writeFinalManifest(); } catch (IOException e) { BuildException be = new BuildException(e.getMessage()); be.initCause(e); throw be; } } /** Set the filter for this Task. * * @param s the filter, matching a Glob style regexp. */ public void setFilter(String s) { filterString = s; } /** Set the destination where the jars will be unjarred to. * * @param f the destination directory. */ public void setDest(File f) { destDir = f; } /** Set the manifest that will be written. * * @param f the manifest file that will be written. */ public void setManifest(File f) { manifestOutput = f; } /** Set the Main-Class that will be written to the Manifest. * <p>If not set, then any that are merged from the Manifest files are used. * @param s the main class. */ public void setMain(String s) { mainClass = s; } private void performUnjar(File file) throws BuildException { Expand unjar = new Expand(); unjar.setProject(getProject()); unjar.setTaskName(this.getTaskName()); unjar.setLocation(this.getLocation()); unjar.setDest(destDir); unjar.setSrc(file); unjar.setOverwrite(true); unjar.execute(); } /** Read the manifest that was just extracted. * * @throws IOException */ private void readCurrentManifest() throws IOException { File manifestFile = new File( destDir.getCanonicalFile() + File.separator + "META-INF" + File.separator + "MANIFEST.MF"); if (manifestFile.exists()) { FileInputStream input = new FileInputStream(manifestFile); manifest.read(input); input.close(); } } private void writeFinalManifest() throws IOException { Attributes a = manifest.getMainAttributes(); a.putValue("Built-By", getProject().getProperty("user.name")); a.remove(new Attributes.Name("Ant-Version")); a.remove(new Attributes.Name("Created-By")); if (mainClass != null) { a.putValue("Main-Class", mainClass); } FileOutputStream output = new FileOutputStream(manifestOutput); manifest.write(output); output.close(); } }
Compile errors are yours to fix
[ September 21, 2005: Message edited by: Lewin Chan ]
I have no java certifications. This makes me a bad programmer. Ignore my post.
Glenn Seg
Greenhorn
Joined: Sep 20, 2005
Posts: 2
posted
Sep 21, 2005 07:21:00
0
Thanks...I will take a look at the link you posted. Also, there is a 'Manifest' class in the Ant jar file that has a merge method.
I agree. Here's the link:
subject: merge multiple jars into one jar?
Similar Threads
what is TStamp
ant exception
ANT Script Example for Websphere Application Server
Simple & Basic EJB application -- Help is needed
Emma is not producing coverage data
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/108130/tools/merge-multiple-jars-jar | CC-MAIN-2015-48 | en | refinedweb |
I've attached some patches for an incremental step towards pluggable
JACC to
Pre RTC I would check these in and pray that someone would notice and
if I was really lucky comment, and then continue with the next
steps. We'll see how RTC works with this. In particular I intend to
continue work on this stuff on my flight today so I don't actually
expect to check in these patches. I do think that there's enough
there to see where I'm headed and object if you don't like it.
These patches turn the security builder into a gbean implementing a
SecurityBuilder interface, and move stuff around with module
dependencies so it still works. The next steps are:
- make multiple security namespaces possible and to determine if a
namespace driven scheme is plausible: it might be even though I don't
think we can run more than one JACC implementation in a geronimo
instance at once.
- review the SecurityBuilder interface, especially the parts used by
TSSEditor
- provide an additional JACC implementation to prove this stuff
works. See if there's any spec-specific functionality in our current
security builder that can be moved into a superclass or helper for
use by other builders.
thanks
david jencks | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200606.mbox/%3CE9C8DF65-9D45-4AED-917A-DACF51324B00@yahoo.com%3E | CC-MAIN-2015-48 | en | refinedweb |
I am trying to make a dynamic plot to monitor sensor data, slowly over time.
It will be updated about once a second.
When I leave my app running it slowly fills up memory. I am new to OO
programming, python and Matplotlib so the chances of me doing something
wrong are huge!
I'm using WXpython and the OO api of matplotlib. My code is below, however
I have the same problem running dynamic_demo_wx.py from the examples(I used
this as my base). I am guessing that every so often I need to clear the
plot data, but am unsure how to do this. If I remove, the 3 lines that
actually set the data, draw the plot and repaint the GUI then the program
has no memory issues.
Any help on this would be greatly appreciated, as everything else works.
This monitor program will be running for days, right now it last only a few
hours before it has claimed most of the system memory.
#!/usr/bin/env python
import time, sys, os
import numpy
import matplotlib
matplotlib.use('WX')
from matplotlib.backends.backend_wx import
FigureCanvasWx,FigureManagerWx,NavigationToolbar2Wx
from matplotlib.figure import Figure
from matplotlib.axes import Subplot
import wx
TIMER_ID = wx.NewId()
class PlotFigure(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1, "Pyro Logger")
self.fig = Figure((12,3), 75)
self.canvas = FigureCanvasWx(self, -1, self.fig)
self.toolbar = NavigationToolbar2Wx(self.canvas)
self.toolbar.Realize()
self.figmgr = FigureManagerWx(self.canvas, 1, self)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.canvas, 1, wx.LEFT|wx.TOP|wx.GROW)
sizer.Add(self.toolbar, 0, wx.GROW)
self.SetSizer(sizer)
self.Fit()
wx.EVT_TIMER(self, TIMER_ID, self.onTimer)
def GetToolBar(self):
return self.toolbar
def init_plot_data(self):
self.xlim = 100
self.ylim = 100
a =
self.fig.add_subplot(111,xlim=(0,self.xlim),ylim=(0,self.ylim),autoscale_on=False)
self.x = numpy.array([0])
self.y = numpy.array([0])
self.lines = a.plot(self.x,self.y,'-')
self.count = 0
def onTimer(self, evt):
if self.count <= self.xlim:
self.count = self.count + 1
if self.count <= self.xlim:
self.x = numpy.append(self.x,self.count)
if self.count > self.xlim:
self.y = self.y[1:self.xlim + 1]
#Simulating with random Data for now
self.y=numpy.append(self.y,((numpy.random.random()*50)+25))
#Problem seems to come in here
self.lines[0].set_data(self.x,self.y)
self.canvas.draw()
self.canvas.gui_repaint()
if __name__ == '__main__':
app = wx.PySimpleApp()
frame = PlotFigure()
frame.init_plot_data()
t = wx.Timer(frame, TIMER_ID)
t.Start(100)
frame.Show()
app.MainLoop()
--
View this message in context:
Sent from the matplotlib - users mailing list archive at Nabble.com.@...
Christopher Barker wrote:
>
>.
>
>
switched matplotlib.use('WX') matplotlib.use('WXAgg')
not sure if I need to change anything else in my code to use WXAgg
everything looked the same to me, memory just keeps creeping up.
is there something that needs to be done to clear the:
self.lines[0].set_data(self.x,self.y)
That I am setting with:
self.lines = a.plot(self.x,self.y,'-')
I am confused about what exactly self.lines ends up being, I mean
self.lines[0] makes it seem like an array, however the assignment of the
a.plot does not seem like one.
thanks for the clues
Versions
Python 2.4.3
2.6.15-28-386 (Ubuntu Linux)
Matplotlib 0.82
Numpy 1.0
--
View this message in context:
Sent from the matplotlib - users mailing list archive at Nabble.com. | http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/thread/4625442A.4090904@noaa.gov/ | CC-MAIN-2015-48 | en | refinedweb |
import java.util.Scanner;
public class IfTest0
{
public static void main(String[] args)
{
Scanner scan = new Scanner( System.in );
//Prompt Hits
System.out.print( "Enter number of hits > " );
int hitAmount = scan.nextInt();
System.out.println( "Number of hits is " + hitAmount );
//Promt at bats
System.out.print( "Enter number of at bats > " );
int atBats = scan.nextInt();
System.out.println( "Number of at bats is " + atBats );
int Average;
Average = (hitAmount/atBats);
if ( Average>0.300 )
{
System.out.println( "Eligible for All-Stars " );
}
else
{
System.out.println ( "Not eligible for All-Stars " );
}
}
}
So I'm trying to use an if/else statement to state that when the batting average is greater than .300 the player is eligible for all stars. What happens is that when I input the hits and at bats it always says the player is not eligible. But when the average equals one the player is eligible which I find very strange but I can't figure out what is wrong with my code.
Edit: I think I figured it out, I was using the / to divide which would give me a value of 0 I just need to figure out how to properly divide now I'm assuming. | http://www.javaprogrammingforums.com/whats-wrong-my-code/10050-very-new-java-basic-question.html | CC-MAIN-2015-48 | en | refinedweb |
/* ** ( the HTML parser, but it is important to note that none of it is required in order to use the Library.
This module is implemented by HTInit.c, and it is a part of the W3C Sample Code Library. You can also have a look at the other Initialization modules.
#ifndef HTHINIT_H #define HTHINIT_H #include "WWWLib.h"
The Converters are used to convert a media type to another media type, or to present it on screen. This is a part of the stream stack algorithm. The Presenters are also used in the stream stack, but are initialized separately.
#include "HTML.h" /* Uses HTML/HText interface */ #include "HTPlain.h" /* Uses HTML/HText interface */ #include "HTTeXGen.h" #include "HTMLGen.h" extern void HTMLInit (HTList * conversions);
#endif | http://www.w3.org/Library/src/HTHInit.html | CC-MAIN-2015-48 | en | refinedweb |
(For more resources related to this topic, see here.)
Preparing the environment
Before jumping into configuring and setting up the cluster network, we have to check some parameters and prepare the environment.
To enable sharding for a database or collection, we have to configure some configuration servers that hold the cluster network metadata and shards information. Other parts of the cluster network use these configuration servers to get information about other shards.
In production, it's recommended to have exactly three configuration servers in different machines. The reason for establishing each shard on a different server is to improve the safety of data and nodes. If one of the machines crashes, the whole cluster won't be unavailable.
For the testing and developing environment, you can host all the configuration servers on a single server.Besides, we have two more parts for our cluster network, shards and mongos, or query routers. Query routers are the interface for all clients. All read/write requests are routed to this module, and the query router or mongos instance, using configuration servers, route the request to the corresponding shard.
The following diagram shows the cluster network, modules, and the relation between them:
It's important that all modules and parts have network access and are able to connect to each other. If you have any firewall, you should configure it correctly and give proper access to all cluster modules.
Each configuration server has an address that routes to the target machine. We have exactly three configuration servers in our example, and the following list shows the hostnames:
- cfg1.sharding.com
- cfg2.sharding.com
- cfg3.sharding.com
In our example, because we are going to set up a demo of sharding feature, we deploy all configuration servers on a single machine with different ports. This means all configuration servers addresses point to the same server, but we use different ports to establish the configuration server.
For production use, all things will be the same, except you need to host the configuration servers on separate machines.
In the next section, we will implement all parts and finally connect all of them together to start the sharding server and run the cluster network.
Implementing configuration servers
Now it's time to start the first part of our sharding. Establishing a configuration server is as easy as running a mongod instance using the --configsvr parameter.
The following scheme shows the structure of the command:
mongod --configsvr --dbpath <path> --port <port>
If you don't pass the dbpath or port parameters, the configuration server uses /data/configdb as the path to store data and port 27019 to execute the instance. However, you can override the default values using the preceding command.
If this is the first time that you have run the configuration server, you might be faced with some issues due to the existence of dbpath. Before running the configuration server, make sure that you have created the path; otherwise, you will see an error as shown in the following screenshot:
You can simply create the directory using the mkdir command as shown in the following line of command:
mkdir /data/configdb
Also, make sure that you are executing the instance with sufficient permission level; otherwise, you will get an error as shown in the following screenshot:
The problem is that the mongod instance can't create the lock file because of the lack of permission. To address this issue, you should simply execute the command using a root or administrator permission level.
After executing the command using the proper permission level, you should see a result like the following screenshot:
As you can see now, we have a configuration server for the hostname cfg1.sharding.com with port 27019 and with dbpath as /data/configdb.
Also, there is a web console to watch and control the configuration server running on port 28019. By pointing the web browser to the address, you can see the console.
The following screenshot shows a part of this web console:
Now, we have the first configuration server up and running. With the same method, you can launch other instances, that is, using /data/configdb2 with port 27020 for the second configuration server, and /data/configdb3 with port 27021 for the third configuration server.
Configuring mongos instance
After configuring the configuration servers, we should bind them to the core module of clustering. The mongos instance is responsible to bind all modules and parts together to make a complete sharding core.
This module is simple and lightweight, and we can host it on the same machine that hosts other modules, such as configuration servers. It doesn't need a separate directory to store data. The mongos process uses port 27017 by default, but you can change the port using the configuration parameters.
To define the configuration servers, you can use the configuration file or command-line parameters. Create a new file using your text editor in the /etc/ directory and add the following configuring settings:
configdb = cfg1.sharding.com:27019, cfg2.sharding.com:27020
cfg3.sharding.com:27021
To execute and run the mongos instance, you can simply use the following command:
mongos -f /etc/mongos.conf
After executing the command, you should see an output like the following screenshot:
Please note that if you have a configuration server that has been already used in a different sharding network, you can't use the existing data directory. You should create a new and empty data directory for the configuration server.
Currently, we have mongos and all configuration servers that work together pretty well. In the next part, we will add shards to the mongos instance to complete the whole network.
Managing mongos instance
Now it's time to add shards and split whole dataset into smaller pieces. For production use, each shard should be a replica set network, but for the development and testing environment, you can simply add a single mongod instances to the cluster.
To control and manage the mongos instance, we can simply use the mongo shell to connect to the mongos and execute commands. To connect to the mongos, you use the following command:
mongo --host <mongos hostname> --port <mongos port>
For instance, our mongos address is mongos1.sharding.com and the port is 27017. This is depicted in the following screenshot:
After connecting to the mongos instance, we have a command environment, and we can use it to add, remove, or modify shards, or even get the status of the entire sharding network.
Using the following command, you can get the status of the sharding network:
sh.status()
The following screenshot illustrates the output of this command:
Because we haven't added any shards to sharding, you see an error that says there are no shards in the sharding network.
Using the sh.help() command, you can see all commands as shown in the following screenshot:
Using the sh.addShard() function, you can add shards to the network.
Adding shards to mongos
After connecting to the mongos, you can add shards to sharding. Basically, you can add two types of endpoints to the mongos as a shard; replica set or a standalone mongod instance.
MongoDB has a sh namespace and a function called addShard(), which is used to add a new shard to an existing sharding network. Here is the example of a command to add a new shard. This is shown in the following screenshot:
To add a replica set to mongos you should follow this scheme:
setname/server:port
For instance, if you have a replica set with the name of rs1, hostname mongod1.replicaset.com, and port number 27017, the command will be as follows:
sh.addShard("rs1/mongod1.replicaset.com:27017")
Using the same function, we can add standalone mongod instances. So, if we have a mongod instance with the hostname mongod1.sharding.com listening on port 27017, the command will be as follows:
sh.addShard("mongod1.sharding.com:27017")
You can use a secondary or primary hostname to add the replica set as a shard to the sharding network. MongoDB will detect the primary and use the primary node to interact with sharding.
Now, we add the replica set network using the following command:
sh.addShard("rs1/mongod1.replicaset.com:27017")
If everything goes well, you won't see any output from the console, which means the adding process was successful. This is shown in the following screenshot:
To see the status of sharding, you can use the sh.status() command. This is demonstrated in the following screenshot:
Next, we will establish another standalone mongod instance and add it to sharding. The port number of mongod is 27016 and the hostname is mongod1.sharding.com.
The following screenshot shows the output after starting the new mongod instance:
Using the same approach, we will add the preceding node to sharding. This is shown in the following screenshot:
It's time to see the sharding status using the sh.status() command:
As you can see in the preceding screenshot, now we have two shards. The first one is a replica set with the name rs1, and the second shard is a standalone mongod instance on port 27016.
If you create a new database on each shard, MongoDB syncs this new database with the mongos instance. Using the show dbs command, you can see all databases from all shards as shown in the following screenshot:
The configuration database is an internal database that MongoDB uses to configure and manage the sharding network.
Currently, we have all sharding modules working together. The last and final step is to enable sharding for a database and collection.
Summary
In this article, we prepared the environment for sharding of a database. We also learned about the implementation of a configuration server. Next, after configuring the configuration servers, we saw how to bind them to the core module of clustering.
Resources for Article:
Further resources on this subject:
- MongoDB data modeling [article]
- Ruby with MongoDB for Web Development [article]
- Dart Server with Dartling and MongoDB [article] | https://www.packtpub.com/books/content/sharding-action | CC-MAIN-2015-48 | en | refinedweb |
When developing line of business applications for field service agents it can be handy to have the application send SMS text messages to alert customers of updated status information, such as the potential for the service agent to arrive late for an appointment. This blog post discusses how to send SMS text messages programatically via the .NET Compact Framework.
Supported Platforms
This blog post makes use of classes within the Microsoft.WindowsMobile.PocketOutlook assembly. This assembly is not part of the .NET Compact Framework, instead it is shipped as part of the Windows Mobile operating system.
The assembly was first introduced as part of Windows Mobile 5.0. If you need to send SMS messages from a Windows Mobile 2003 device you will need to utilise a third party component such as the Mobile In The Hand product (which provides a compatible interface) or manually wrap the underlying native APIs.
In order for the demos in this blog post to work you need to add references to the following two assemblies:
If you forget the reference to the Microsoft.WindowsMobile assembly you will get the following compile time error:
The type ‘Microsoft.WindowsMobile.IApplicationLauncher’ is defined in an assembly that is not referenced. You must add a reference to assembly ‘Microsoft.WindowsMobile, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35′.
Creating an SMS Message
To create a new SMS text message you need to create an instance of the SmsMessage class and then set its various properties.
using Microsoft.WindowsMobile.PocketOutlook; SmsMessage message = new SmsMessage(); message.To.Add(new Recipient("Jane Doe", "+1 45 123456")); message.Body = "Would you like to go to lunch?";
The Body property is a string which contains the message you want to send. Notice that the To property is a collection of recipients, so a single SMS can be addressed to one or more recipients.
There is even a constructor overload which helps with the common case of a simple message intended for a single recipient:
SmsMessage message = new SmsMessage("+1 45 123456", "Would you like to go to lunch?");
Sending an SMS Message
Once we have created the SMS message we need a way to cause it to be sent. There are a couple of ways to achieve this.
The easiest is to call the send method on the SmsMessage instance as shown below:
// Send the SMS to its recipient(s) message.Send();
Calling the send method sends the SMS message behind the scenes with no visual indication that something is occurring.
Alternatively if you would like to give the user a chance to review and edit the contents of the message before it is sent you can display the message within the built in messaging application via the MessagingApplication.DisplayComposeForm method.
// Display the new SMS in the standard // messaging application MessagingApplication.DisplayComposeForm(message);
The third and final way is to create an instance of the OutlookSession class and use it’s SmsAccount property as follows:
using (OutlookSession session = new OutlookSession()) { session.SmsAccount.Send(message); }
Testing within an Emulator
The Windows Mobile 6 SDK introduced a Cellular Emulator tool which makes it easy to test applications which interact with cellular phone based functionality. One advantage of using this tool to test SMS sending applications is that it avoids the charges typically associated with sending SMS messages via real devices.
You can find the Cellular Emulator within your desktop’s Windows start menu. The first step in using the Cellular Emulator is to connect it to your device emulator. This can be achieved by following the Cellular Emulator Quick Start instructions available on MSDN.
If you need further assistance configuring the Cellular Emulator, Jim Wilson has created a great video titled “How Do I: Configure the Device Emulator to Use an Emulated Cellular Connection?“.
Once correctly configured you can switch to the SMS tab of the Cellular Emulator to send messages to, or view messages received from the device emulator.
Demo Application
[Download sendsms.zip - 9KB]
A small example application is available for download which demonstrates how to send SMS messages programatically. It displays a simple form to capture the desired message and recipient phone number and then demonstrates a few techniques for sending the message.
Of note is a checkbox which makes the program append onto the end of the user’s message the current battery charge status. This is a lead into the next blog post which will discuss how to programmatically respond to a Windows Mobile device receiving a SMS text message.
i’m having a problem saying “Could not load sms.dll”
Your Article is very useful for me. thanks!
THANK YOU SO MUCH. Do you think I could find this MessagingApplication.DisplayComposeForm(message); on MSDN!!
I also facing same problem of loading sms.dll
Simple and effective.
Thanks | http://www.christec.co.nz/blog/archives/495 | CC-MAIN-2015-48 | en | refinedweb |
The code hasn't changed from page 1, I just copied and pasted it unto page 2.
"The logic worked the same in both I thought, just have problems with dbImage. Am I mising something?"
Yes I saw it,...
The code hasn't changed from page 1, I just copied and pasted it unto page 2.
"The logic worked the same in both I thought, just have problems with dbImage. Am I mising something?"
Yes I saw it,...
Initially, but I assign it too createImage(500,500). I am starting to think that createImage is null. I'm not sure how createImage works totally, I navigated to source but it is kind of confusing.
...
dbImage is the variable that is null, even right after the assign statement it is still null.
The logic worked the same in both I thought, just have problems with dbImage. Am I mising something?
Here is the code for my 3 classes:
package bandits;
import javax.swing.JFrame;
public class Bandits extends JFrame {
run:
in loadArrays(): i = 0
DBImage is still null!
in loadArrays(): i = 1
java.lang.NullPointerException
DBImage is still null!
java.lang.NullPointerException
DBImage is still null!...
I assigned it too dbImage=createImage(500, 500). Is that a wrong type to assign it to?
There's the test loop I did to make sure the way I cycle through x's and y's worked.
int x=0;
int y=0;
for(int i =0; i < 500; i++){
if(x >500){
...
For some reason dbImage is still Null!... The if statement says if dbImage is null, dbImage = createImage(500, 500);. but for some reason it skips this step and goes to the next step. The next step...
I'm stumped. I have been staring at this the whole week. I cannot figure it out. I did a test loop and I didn't run into any problems, I just have the null pointer problem. Any further advice would...
Thanks for the help you have given so far! I really appreciate this community, hopefully one day I'll be answering these types of questions!
Also thank you for showing me a way to debug, didn't...
Hey there.
I am trying to practice my Java and work on this basic game I'm creating.
I am at the stage where I am trying to draw the tiled map, but I'm having two problems. The first problem...
lol awesome thanks! I knew it was something simple!
Hello.
I am making a test game to try out some things I learned, it had worked previously but after I mixed up similar file names I thought I fixed it, however now it runs with no errors at all,...
jps you are awesome, thanks for not spoon feeding me and just giving me hints on which direction, it really helps a lot and I feel like I learn quite a bit as a result.
New code: Works perfectly
//Thomas Harrald
//IT215 Checkpoint Inventory Program
//Camera Inventory program
import java.util.Arrays;
import java.io.InputStreamReader;
import...
Thats what I was afraid of, because when I put it in the main method, i get 35 errors. most of them centering around referencing non static variables; inv, current, showCamera, a few of the get(). I...
Heres my code:
//Thomas Harrald
//IT215 Checkpoint Inventory Program
//Camera Inventory program
import java.util.Arrays;
import java.io.InputStreamReader;
import java.io.BufferedReader;
I'll take a look at better methods in the morning...
I modified the toString() to include a \n, and appended all the arrays elements toString().
It gets the job done... but I will try to find a...
Norm is there anything specific to look for GUI related? There is a LOT of data in the package lol.
Thanks a lot Norm, I'll take a look at that.
I only put two of the Array elements in to troubleshoot. It only shows the last element. cameras[0] overrides cameras[1]. Is there a way I can just make it add the text instead of setting it?
Just...
Hello again!
In class we started this chapter on Graphics and I am struggling to understand it, it seems like it all hit me at once, I did the readings but it just seems like gibberish.
My...
Totally lacking comments... will work on that.
Here's tonight's assignment, part 3 of the last one.
//Thomas Harrald
//Camera Super Class
//IT215 Inventory Program
public class CameraClass
Here's the cleaned up code... and I believe it accomplishes what the professor was looking for too.
//Thomas Harrald
//IT215 Checkpoint Inventory Program
//Camera Inventory program
import... | http://www.javaprogrammingforums.com/search.php?s=8a203cb10184b5ba07b3ab3941c1343d&searchid=1929423 | CC-MAIN-2015-48 | en | refinedweb |
in reply to
Registering Subclass Modules via Plugin
I think a factory is for when the caller knows what they want. Your case is different.
I find out which subclasses are available by scanning all or part of the @INC path list for modules in a given namespace. I do this once, as the app starts. Note that big @INC scans can be time prohibitive. In that case, I look to see which directory my parent module lives in by checking %INC, then look there for my children, making the big assumption that they get installed to the same place as the parent. It works for Bigtop's tentmaker. | http://www.perlmonks.org/index.pl?node_id=650832 | CC-MAIN-2015-48 | en | refinedweb |
This action might not be possible to undo. Are you sure you want to continue?
07/25/2013
text
original
A STUDY IN ANTIGNOSTIC POLEMICS
Irenaeus, Hippolytus, and Epiphanius Gérard Vallée
Canadian Cataloguing in Publication Data
Vallée» Gérard, 1933A study in anti-Gnostic polemics {Studies in Christianity and Judaism = Etudes sur le christianisme et le judaïsme, ISSN 0711-5903 ; 1} Bibliography: p. ISBN 0-919812-14-7 1. Irenaeus, Saint, Bishop of Lyons. Ad versus haereses. 2. Hippolytus, Saint, fl. 217-235. Refutatio omnium haeresium. 3. Epiphanius, Saint. Bishop of Constantia in Cyprus. Panarion. 4. Heresies and heretics-Early literature-History and. criticism. I. Title. II. Series: Studies in Christianity and Judaism ; 1. BT1390.V34 273'.2 C82-094052-6
€> 1 9 8 1 C o r p o r a t i o n C a n a d i e n n e d e s S c i e n c e s R e l i g i e u s e s / C a n a d i a n C o r p o r a t i o n for Studies in Religion 8 1 8 2 83 8 4 8 5 4 3 2 1 No part of this book may be stored in a retrieval system, transJated or reproduced in any form, by print, photoprint, microfilm, microfiche, or any other means, without written permission from the publisher.
C o v e r d e s i g n by M i c h a e l B a l d w i n , MSIAD
O r d e r from: Wilfrid Laurier University Press Wilfrid L a u r i e r U n i v e r s i t y W a t e r l o o , Ontario, C a n a d a N 2 L 3 C 5
TABLE OF CONTENTS
FOREWORD ABBREVIATIONS . . . . . . . . INTRODUCTION: I. HERESÏOLOGY AND NORMATIVE CHRISTIANITY .
vii ix 1 9 12 16 24 34 41
IRENAEUS'S REFUTATION OF THE GNOSTICS 1. 2. 3. Philosophical Arguments . Theological Arguments Socio-political Motives . . . . Irenaeus and the Montanists. . . . . .
Excursus: II.
THE ELENCHOS AGAINST ALL HERESIES 1. 2.
Hippolytus's Three Ways of Refuting Heresies. 4 7 The Basic Disagreement with the Gnostics. . . 56 63 69 75 ... 83 88 92 .105
III.
EPIPHANIUS ' Sv PANARION 1. 2. 3. Epiphanius's Objective and Method The Gnostic Heresies The Core of the Refutation The Style of Argumentation in Pan.haer. 27. .
Appendix: CONCLUSION: BIBLIOGRAPHY
CHRISTIAN POLEMICS AND THE EMERGENCE OF ORTHODOXY . . . . .
This page intentionally left blank
FOREWORD
In
many
ways
this
monograph
is
the
result
of
a
corporate effort.
It was prepared
at McMaster University
under the auspices of a research project on Normative SelfDefinition in Judaism and Christianity funded by the Social Sciences and Humanities Research Counci1 of Canada. thankful in (ed. read to S CM and Press for permission to publish Vol. are I am as The to here I, due
chapter I a revised version of an essay that appeared first Jewish E.P. Christian 1980. Self-Definition, Special thanks Shaping of Christianity Sanders) in the Second and Third Centuries
Professor Frederik Wisse (McGi11 University), who carefully my manuscript and made numerous stimulating comments to which I have tried to live up; Professor Alan Mendelson (McMaster University) and Dr. Tamar Frank, who contributed editorial advice; his seminar Brox on Professor Pierre Nautin Epiphanius have in 1978-1979; (Ecole pratique and Professor des Hautes Etudes, Paris), who allowed me to participate in Norbert friendly (Universität Regensburg), whose writings and these years
inspired me throughout
of research.
But, as usual, none beside myself should be been published with the help of a grant funds Science and Humanities Research
held responsible for the shortcomings of this work. This book has provided from the Canadian Federation for the Humanities using by the Social Council of Canada.
McMaster University Hamilton November 1980
This page intentionally left blank
ABBREVIATIONS AC ' A Adv. haer. Adv. Val. AHC ANF BCNH BKV CCL CH CSCO CSEL D Ancient Christian Writers Irenaeus, Adversus haereses (see chapter I, note 1) l'ertullian, Adversus Valentinianos Annuarium historiae cone i1îorurn. The Ante-Nicene Fathers Bibl iothc-que copte de Nag Hammadi. Bibliothek der Kirchenväter. Quebec Amsterdam
München
Corpus ehristianorum, series Latina Church History. Chicago
Corpus scriptorum ehristianorum orientalium Corpus scriptorum eccles iast icorum latinorum Denzinger-Schönmetzer, Enchiridion symbolorum definitionum et déclarâtionum de rebus fidei et morum Dictionnaire d'histoire ecclésiastiques. Paris et de Paris géographie
DHGE PS FC G CS HE HTR In Jo. In Matth. JEH JR JTS MTZ
Dictionnaire de spiritualité. Fathers of the ChurchDie griechischen christlichen der ersten Jahrhunderte
Schriftsteller
Eusebius, Historia ecclesiastica Harv ard Theological Review. Origen, Commentary on John Oriqen, Commentary on Matthew Journal of Ecclesiastical History. The Journal of Religion. Chicago Oxford München London Cambridge (Mass.)
Journal of Theological Studies.
M'ünchener theologische Zeitschrift.
ix
NHS NT Pan. PG PL PO RAC
Nag H arrima d i S t u d i e s . Novum Testamentum. Epiphanius, Panarion Patrologia graeca Patrologia latina Patrologia orientalis
Leiden
Leiden (see chapter III, note 8)
Peallexikon für Antike und Christentum. Stuttgart nippolytus, Refutatio omnium haeresium (or f'lenchos ) (see chapter 11, notes 1 and 2 ) Revue d 1 histoire ecclésiastique. Louvain
Ret".
RHE
RHR RSPT
Revue de l'histoire des religions. Revue des sciences philosophiques et théolog iques. Paris Recherches de science religieuse. Revue des sciences religieuses.
Paris
RechSR RevSR RTAM
Paris Strasbourg
Revue de théologie ancienne et mediévale. Louvain Revue de théologie et de philosophie. Sources chrétiennes Clement of Alexandria, Stromate is Studia theologica. Oslo Berlin Lausanne
RThPh SC Str. StTh TLZ TR TrTZ TU
Theologische Literaturzeitung. Theologische Revue. Münster
Trierer theologische Zeitschrift.
Trier
Texte und Untersuchungen zur Geschichte der altchristlichen Literatur Theologische Zeitschrift. Vigiliae christianae. Basel
TZ VC ZKG
Amsterdam Stuttgart
Zeitschrift für Kirchengeschichte.
Zeitschrift für die neutestamentliche Wissenschaft. Berlin Zeitschrift fur Theologie und Kirche. Tübingen
xi
This page intentionally left blank
INTRODUCTION:
HERESIOLOGY AND NORMATIVE CHRISTIANITY
The Christian Lyons, of and the
study who
presented his
here and
is devoted their
to
three around
early of 18 0;
heres ioloqists wrote
works :
Irenaeus
Adversus
haereses
Hippolytus of Rome, who is generally held to be the author the Elenchos Against All Heresies, written Epiphanius question of Salamis, that it will may be about whose Panarion 374 and 377. be in the at the of order study to after 2 22; eighty of a our few in in against
heres ies was wr i t.ten between investigation, preliminary general and particular. 1. of
Before posing offer
center
remarks
heresiology works
these
three
heresiological
Why study heres iology ? those Do most we not
What truth can we hope to to 'satanize ' far more their
wring from the most tradition, adversaries?
intransigent authors of the Christian inclined now possess reliable
and the ensuing revival of gnostic studies? This answer is a reasonable the question, but
Can we not now satisfactory important been
dispense with the 'biased' witness of the Church Fathers? any is bound to be complex. resulting Although the Nag Hammadi
discoveries—and period, the
scholarship—are have they to by no the
for our knowledge of the religious history of the patristic hercsiologists On the contrary, of data source means a supplanted. independent inquiry. represent largely than, And if these of (Of
guide
historian's other
The heresiologists
drew on sources
though sometimes similar to, the Nag Hammadi texts. the patristic by the writings offer to evidence must be complemented evidence too of permit, the of by heterodox and themselves, they scattered and a d i rec t writings, a
corrected fund
fragmentary
information
sat isfactory
reconstruction
heterodoxy
represent.
3 0 course, of the patristic evidence
Anti-Gnostic Polemics itself ought to be this the one-
complemented by the new sources, thereby ridding it of some its one-sidedness; but we do not have to belabor to recall is the that, that the than context that of for provided the all to the Nag by
important texts;
heresiologists that sidedness, gnos is ; and writings
broader us
Hammadi
heresiologists, valuable these for
their
offer
attempts reasons, for the
conceptualize heresiological of
remain
indispensable
interpretation
these new sources.-^ In fact, Nag Hammadi studies frequently refer to the evidence of the Church Fathers in the attempt to assess the meaning of the newly discovered texts. look for parallel and information, H ippolytus, In any heresy from Epiphanius, Scholars regularly in that of Irenaeus, the new especially
demonstrating
sources have not superseded the heresiologists. effort the to gain a better knowledge heresiologists, however, a ancient preliminary
condition has to be borne in mind. heresies which interest us here,
Information on heresies is embedded in an anti-
found in heresiological works, particularly on the gnostic heretical argument. This mode of argument always reflects
provided, quot ing the a
especially source, of the
when it is
the
author
is to
not be as
explicitly clear to as the
necessary they
possible about the concerns of the heres iologists and about nature arguments wish to oppose heretics. Although a better knowledge of heres iology undoubtedly improves the quality of our knowledge of ancient heresies,
•'•The same point is forcefully emphasized by H.-M. S c h e n k e , 'Die R e l e v a n z der Kirchenväter für die Erschliessung der Nag-Hammadi Texte', Das Korpus der griechischen christlichen Schriftsteller. Historie, Gegenwart, Zukunft, eds. J. Irmscher and K. Treu, TU 12 0, Berlin 1977, pp. 209-218.
Introduction the study of heresiology as a literary field. et genre etudes les
3 is, sur
surprisingly has already les procédés qu 1 elle present
enough, a rather neglected remarked: ordinaires pour de not close it rien does pay de tel
P. Nautin risques n'avons The of implied the same
'S'il existe de bonnes l'hagiographie exacte, to in answer to the own pour l'h istoi re pretend echoes
comporte study to
nous the way
malheureusement i nv itat ion invitation.
1 1 hérésiologie 1 .2 methods
attention
heresio]ogy ; rather,
its
The question of 'heretics' has an ominous relevance to our day.3 'Heresy' is no longer used in an exclusively heretics for survival is too The struggle over battles in If Christian ideologica1 less bitter re l igious sense, but striking meaning between antiquity the analogy between ancient
and contemporary minorities struggling to be overlooked or dismissed. in our the are world and the hardly the orthodox and
heretics
reciprocally
illuminating.
conflict today is a life-and-death
issue, so was the issue
of religious truth for early Christianity. We also wish to ernp h a s i z e the relevance of heresiology to the question of what people for 4 . We must not expect thought Christianity to find in stood heresiological
2p. Nautin, 'Histoire des dogmes et des sacrements chrétiens ' , Problèmes et méthodes d'histoire des religions (Ecole pratique des Hautes Etudes, Section Sciences religieuses), Paris 1968, pp. 177-191, here p. 183. In Mélanges d'histoire des relig ions offerts à H.-C. Puech, Paris 19 7 4, pp. 393-403 ('Les fragments de Basilide sur la souffrance'), Nautin remarks again th a t our be t te r knowledge of G n o s t i c i s m d e p e n d s not only on the
3 See G. HasenhUttl and J. Nolte, Formen Ketzerbewältigung, Düsseldorf 1976, p. 11.
kirchlicher
^Origen, Contra Celsum 3, 13 (H. Chadwick, ed., Cambridge 1953, p. 136) said something similar about the heresies themselves when he saw in the necessary development of sects a fortunate expression of the richness of Christianity and of its essential features: 'I would
3
0
Anti-Gnostic Polemics
writings 'the truthj about the Gnostics'? too often in these writings within an the information argument is tainted by passion it. or woven can we alien as that obscures Nor
expect to draw 'h istory ever, the
from such wri tings a ready-made account of happened'; decisively for here, more placed within than an are
it actually data
interpretative
scheme that colours them.
But we can hope
to find in those writings what certain influential authors in the emergence of catholic Christianity considered to be the pivotal and point how on wh ich acted Christianity to secure would stand or Each f all, they that point.
heresiologist, to be sure, had his own view of 'the essence of Christianity'. and political in Collectively, worth ou r It may have reflected social, cultural, preferences any to and case, ask these what and idiosyncracies . had a momentous views of views the there
impact on the shaping of tradition. wh i 1 e were heresiologists between them. Let these reflections the study of the question arises: who wrote the a during Ad versus is whether
In this context it is diverse was any conti nuity
suff ice as a justification in general. A
for
heresiolc ji t the
further
out of th< larjc number of heresiologists first centuries, why concentrate on of Irenaeus, works the Elenchos of are The answer, in
haereses that
Hippolytus, and the Panarion of Epiphanius? nutshell, these typical ; own day. They are avai 1 able - that H ippolytus. reconstruct them back to
are available ; they
and they each took on "all the heresies' of their is, they survived. Justin
wrote a Syntagma dealing also with all heres ies and so did Unfortunately, both are lost, and efforts to them us have in obviously their not succeeded At best in we giving might entirety.
say that a man who looks carefully into the sects of Judaism and Christianity becomes a very wise Christiar. ' (OOtJjÜTaTOV) .
Introduction identify history gain generically the heresies they refuted and
5 infer to
something of the influence of these now lost works on the of heresiology. idea of the It is not possible, however, arguments they used a clear to counter
their opponents. Second, called
For that reason these works fall beyond
the scope of our interests in the present study. the Adversus haereses, the Elenchos, and the These works offer us excellent successive
'heresiology*.
illustrations
of what heresiology was
in three
centuries, and they allow us to follow the development of heres iology in that period. and lasting influence on Their Christian polemics. Moreover, they had a decisive the fixing of the style or of respect ive sources their
interdependence are of their
not of primary
concern here, although is in the authors
at times it will be useful to indicate the probable source ideas,* but our main interest themselves. Each is seen as representing one major moment
in the heres iolog ical tradition. Third, knew, not all three did battle with all heresies they This
only them
with from
particular
heresies. like
distinguishes
heresiologists
Tertullian,
Theoph i lus of Antioch, or Or igen After energies. primarily Filastrius knowledge and of Epiphanius heres iology
who took betrays
aim at one or a depletion of of
another chosen target (Marcion or the Valentinians). Pseudo-Tertu11ian, for their and Filastrius Brescia,
Theodoret of Cyrus, also writing against all heres ies, rely information on Hippolytus1 s Syntagma. do not The directly same may depend be said on of Th eodore t
early
heresies.
it is on them that he based his information in his De After Epiphanius no fresh knowledge of ancient be expected. 5 New methods of dealing with can
haeresibus. heresies
venture Hilgenfeld and
the following chart, based on Lipsius, others, showing the 'genealogy' of
3 0
Anti-Gnostic Polemics
heret ies will methods study. 2. study of our that three study our would Reformation,
indeed which
be deve loped. medieval beyond the is
But a study of these up the to the scope of present
include
heresiology
We are interested here in understanding the methods The scope of our study has now emerged. these three authors, all the rich has new nor a detailed information relevance considered on to in We do not
of dealing with ancient heresies. intend, in such a limited space, to present a comprehensive analysis of into the Nag Gnostics Hammadi some specific passages of their works ; produced studies. nor to bring
in the wake of Nag Hammadi, although we do think discussion We intend some to address works a precise question to the themselves.
heresiological
heres iologists and the central position of the three authors we study here. 'Q' would be a reworking of Justin's Syntagma (according to Lipsius); • indicates that the contact is well attested ; indicates that the contact is probable. Further explanations will be provided by the following chapters. Justin's Syntagma i Q? ^,Heges ippus
i , ,. Irenaeus (c. 18 0) Clement of Alex'. H ippolytus s Syntagma (c.210) Origen (Eusebius) Epiphanius^ (374-377) Filastrius Augustine (428) Theodoret (c.453) (c.380/90)
1
Tertullian's Adv.valent. Elenchos (after 222) (before 250)
Pseudo-Tertullian
Introduction Beyond all literary in these devices, writings, we rhetorical wish or
7 other, the
encountered
to determine
substance of the arguments forged by each heresiologist to counter the Gnostics. reaching sects, the a clear kind of We are not primarily picture arguments of the interested in each them opponents against
heresiologist is attacking; 1 ike the actual descriptions of marshalled sometimes indeed betrays who the heres iologist thought his Nevertheless we want to probe elsewhere in order to overthrow heresy. Thereby we
opponents were. has devi sed in
an effort to uncover the central argument that each writer assume that the many arguments encountered in each work are i nformed isolate by a preva i1ing argument ; it is our objective to such an argument. connected find most In order to find an answer to question: offensive in what the did each
our question, it is also implied that we have to answer the essentially heresiologist positions? heretical
will have gained a deeper knowledge of the development of the Also style the of Christian polemics and in the first of centuries. polemics essential content mot ive such
might emerge in a fuller light; for what each heresiologist sadly misses stand close Christianity. The present study was undertaken research and project on Normative The project Christianity. was in the context of a in the Judaism Social funded by Self-Definition in the combat ted doctrines to what he holds to be is very likely to the backbone of
Sc iences and Humanities Research Counci . 1 of Canada and was based mind study to do in the Department The been part and reader one in of Religious Studies is therefore by invited at McMaster to keep of in the and in will University. has my
that the heuristic and control 1 ing framework of this determined the question I have intended in it to understand developed The Conclusion
emergence of orthodox Christianity. a corporate normat ive in which expia in how why
attempt
Christianity
precisely
the way
it did.
3 reconsider th is
0 broader question
Anti-Gnostic Polemics in the light of the
results of the preceding chapters.
I*
IRENAEUS'S REFUTATION OP THE GNOSTICS
Irenaeus of Lyons heresies,1 as
(c.130-20 2) wrote his refutation of gnostic heresies, at a time
principally presence,
(beginning c. 180) when gnostic groups were still perceived a dangerous was, the if not as a threat gnostic knew other to the very above all he had existence still f rom of of the Church. the place Church. writings. The Rhone valley had been, and activists, them
where
Marcosian Gnostics, had made headway and won many converts Irenaeus Gn the personally ; Irenaeus supposedly had 'conversations' with them, and had read some their hand, predecessors in the task of overthrowing heretics and it is generally assumed that he knew the lost Syntagma of Justin, among other heresiological sources. Thus, on account of the 'Gnosis of his knowledge of both heresy and heres iology, he seemed to have been well equipped falsely funct ion
1
to speak out against gave h im the
so-called , the more so if one considers that his as a bishop respons ibi1i ty
speaking a word of warning and speaking it with authority.
l'EXévxou Kai dvaxporrns " r i f e iJjeuôojvtfuou yvtibeue ßißPud Ttévxe ( a ccord i ng to Eusebius, HE V, 7 ) - Detec t i on i s et eversionis falso cognominatae agnitionis seu contra omnes haereses libri quinque ( = Adv. haer.). We quote the work in the following way : for the text we follow W.W. Harvey's edition (Cambridge 1857) and SC 263-264, 210-211, 100 ( 2 vols.), 152-153 (eds. A. Rousseau, J. Doutreleau, C. Mercier, B. Hemmerdinger, Paris 1979, 1974, 1965, 1969) for Books I, III, IV and V. (Book II is forthcoming in SC series. ) - For the divisions of the text we follow P. Massuet (PG 7), whose divisions are reproduced by A. Stieren (Leipzig 18 53) and SC, wh i1e they can be found in the margins of Harvey ' s edition. Our translation takes account of those found in ANF I (Edinburgh 1867} and SC. A new English translation is expected to appear in "Fathers of the Church" (Washington ) by A.C. Way and in "Ancient Christian Writers" (Washington) by D. Unger.
3 0 Anti-Gnostic Polemics Throughout ( Adv. haer. ) and Irenaeus all the refers know more about successfully. doomed another to bishop. the to a five Books of Adversus haereses Book, to
especially
in the Prefaces expressed how to
2
to each
'friend' who of and at to
the wish oppose that he
the heretics Attempts
the time
(i.e. first of them was 'friend ' are 'lettre
Valentinians}, failure.
hear
identifying nothing
that
For
indicates
i'e might even
not have been the
influent' Doutreleau th inks he was, 3 for philosophical considerations. The
if that were the might be
case, Irenaeus might have given more weight than he did to 'friend*
fictitious, or stand for a segment of Irenaeus's community which was disturbed by gnostic agitations and wished to be in a better position to discern among those teachings and defend real itself. addressee Thus officially danger But even granted and an that the person, 'friend' was a he does as saw not an the influential not
appear to have had any official status in the Church. Adv.haer. does present Rather itself commiss ioned work. Irenaeus
represented by the act ivity of gnostic teachers in Without being asked by his peers to ./rite a
his entourage and stood up as a pastor concerned with true teaching.4 tractate against. the 'Gnosis falsely so-called ' , he took it
O n the use of the term 'Gnostics ' by early heresiologists, see M, Brox, ' FVÙÛCJTIKGC als häresiologischer Terminus', ZNW 57, 1966, pp. 105-114. On selfdesignations of the Gnostics, see K. Rudolph, Die Gnosis. Wesen und Geschichte einer spatantiken Religion, Göttingen 1977, pp. 220-221. On the use of the term in Adv. haer. I, see A. Rousseau, Iréne'e de Lyon. Contre les hérésies, Livre I (SC 263), Paris 1979, pp. 299-300. 3L. Doutreleau, 'Irenée de Lyon', DS ViI, Paris 1971, 19 33. A. Rousseau ( SC 263, p. 115 n. 1 ) sugges ts, on the basis of Adv. haer. I. praef. 2 and 1.31.4 ('omnibus his qui sunt tecum*): 'Peut-être s'agit-il du chef d ' une communauté chrétienne.. . ' . ^Irenaeus would be among the first writers in the West who tried to unite the authority of a bishop with that of a teacher. See W. Bousset, Jüdisch-christlicher Schulbetrieb in Alexandria und Rom, Gott ingen 1915, p " I 317. Look ing
2
Irenaeus upon himself to an 'The true to show to his people wider is audience) the view ing gnosis
11 (naturally with an eye is of in right the line and true. the own apostles' with his
eventual
what himself to
doctrine
(IV.33.8) . fact and,
Irenaeus, at the
apostles and the primitive Church, writes to establish this same time, vindicate authority. But there was something more constructive and creative in Irenaeus ' s speaking out. heretic/orthodox clear. a
5
He wrote at a time when the does not seem to have been
polarization
The tractates written by Theophilus of Antioch and line of demarcation could these between contend i ng written, but the parties; even in
Justin aqainst divergent teachings would not have effected clear gnostic many Rome. not teachers after the still move freely tractates were achieved, at least refutation, among Christians in the West, was lasting
years only
What Irenaeus
intended
polarization of Christian fronts. How did Irenaeus achieve this? to speak out? What gave him Why did he feel he had that he was
the assurance
What was the character of the arguments he used in order to 'compel wild th e animal
1
to break
cove r,...not only
expose
the
beast to view, but inflict wounds upon it from every finally slay that destructive brute' (1.31.4)?
side', and
back at the second century, Danielou, Origène (Paris 1948, p. 37), says concern i ng 0 r i g e n ' s difficulties with the b ishop Demetrius : 'Nous re trouvons là cette dist inct ion du courant hiérarchique et du courant des didascales qui s'était rencontrée au lie siècle. Les rapports entre les deux n'étaient pas encore bien définis dans l'Eglise'. The emerging of orthodoxy will be the triumph of the bishops and, wi. th them, assuredly of the 'majority ' .
5 This can be said without contradicting, among others, H. J. Carpenter ('Popular Christianity and the Theologians in the Early Centuries ' , JTS 14, 1963, pp. 294-310), who holds that '...Irenaeus and Tertullian and H ippolytus dealt with Marcion and the Gnos ti cs when the great church had demonstrably survived the impact of these movements for half a century or more * (p. 297).
3 0 Anti-Gnostic Polemics Such questions An groups analysis of and will of guide our inquiry work himself into Irenaeus 's shows a two as less th i rd
motives for writing his refutation. Irenaeus's wh ich he clearly But arguments characterizes
philosophical
scriptural/theological.
group, which we shall call socio-political, although have given the initial stimulus to Irenaeus's
explicitly put forward, is nevertheless operative and might enterprise. The first two groups of arguments have already been studied by many authors ; my intention in present ing them here is to offer as comprehensive a view of Irenaeus's group. refutation as possible, and to provide necessary background for the third
1.
Philosophical Arguments The refutation proper of the gnostic system (of their
régula : Irenaeus's order should of be
Il.praef. the
2) begins with arguments
Book are in
II where most of found. Book of I. the Here of But the the it
philosophical 'headings' of noted the th a t
corresponds tenets the
to the order
presentation
gnostic
'exposition*
gnostic
'hypothesis' in Book I was itself intended to show its lack of internal cohesion. Irenaeus ' s We wh ich refutation not Th is constituted the first stage of (1.31.3: step 'Simply by step gnost ic to the exh i bi t their
sentiments, is to obtain a victory over them. '). shall at follow argumentation does not with the aims showing that the system
harmonize (II.25.1), common make some
'with what actually exists or with right reason1 nor with general human experience 1.16.3). on here and (11.27,1) , nor We the shall, character of (11.2 6.3 ; see arguments however,
sense
observât ions found
philosophical Adv.haer.
in other parts of
Throughout his refutation, Irenaeus shows acquaintance with secular learning and especially with the rhetorical arguments and techniques of the Hellenistic schools of the
Irenaeus second c e n t u r y . T h e rhetorical weaker technique first. his
13 very order of the arguments in Books to hold back the decisive this arguments by the
II to V betrays such an acquaintance, for it was a common for the later parts of the development and to present the ones Irenaeus follows pattern against presenting arguments. More precisely, Irenaeus ' s rhetorical training uses almost and is all the also methods question. the one of argumentation The who principles makes 1 ; (except he uses (which the are he might have received in Rome) is seen in the fact that he syllogism, as Reynders noted),7 with a predilection for the d i lemma conceives effect'; He excels s impie, almost commonplaces. For example : 'The one who contains retort, f irst philosophica1 arguments
Gnostics and then by offering the more decisive scriptural
'cause
'what is prior contains what is posterior'; etc.8 in the use of irony and the ad hominem
thus showing a certain talent and training.9 Irenaeus's somewhat seems to acquaintance with He part philosophy can from itself 'is an superficial'. 10 be for the most
surely drawn
formulate
c i itcjuntont f but whâ t i s pirop6t*ly ph 1 losoph i câ 1 x n his work doxographical
6 S ee W. R. Schoedel, ' Philosophy and Rhetoric in the Adversus Haereses of Irenaeus', VC 13, 19 59, pp. 22-32, esp. 27-32. R. M. Grant, 'Irenaeus and Hellenistic Culture', HTR 42, 1949, pp. 41-51, esp. 47-51. P. Perkins, 'Irenaeus and the Gnostics. Rhetoric and Composition in Adversus Haereses Book One ' , VC 30, 1976, pp. 193-200. ^D. B. Reynders, * La polémique de saint Irénée. Methode et principes', RTAM 7, 1935, pp. 5-27, here p. 8. ^See Reynders, 'Polémiqué ' . ^See R. M. Grant, ' Irenaeus ' , p. 51 : 'Too often we are content with a picture of Irenaeus as orthodox but rather stupid. The camera needs to be refocussed...He represents the confluence of Hellenism and Christianity no less distinctly than the apologists do...He should not be neglected simply because his results survived '. lOschoedel, 31. 'Philosophy and Rhetoric ', p. 22 ; see p.
3
0
Anti-Gnostic Polemics
rest seems
of
his
thought Th is with
could
be
characterized
as
popular Irenaeus that is
philosophy." harmonizes arguments point. Irenaeus's sense; they well of
is precisely the praise belong
the area where of simple faith
to be most at ease.
At least that type of wisdom The most typical and frequent
found throughout Adv.haer. Irenaeus philosophy.
to this category
of popular
Some examples migh t suffice to illustrate th . i s favourite into a refrain fit of is : the Gnostics are
talking nonsense, folly ; their discourse departs from good 'fall frenzy ' , they propound 'fictitious doctrines'; they are seriously sick and foolish (see 1.16.3). arbitrary The disqualify good There Their teaching is absurd and their exegesis, that gnost ic teach ings are borrowed is not a for a is doctrines. commends Plato (II.24.1-6). accusation them.
from philosophy (11.14.2-7; see IV.33.3) is itself meant to Plagiarizing in the philosophers matters, Irenaeus recommendation is only one Christian in which since,
Irenaeus, philosophy
is at the source of wrong passage it is only to say that
philosopher—Plato—but more religious not But amount Valentinian
than Marcion speculation it is
(III.25.5). is not only
This surely does philosophy. from the describes taken
in Irenaeus's eyes
to a praise of Gnos is
ph ilosophers: real entities
ph ilosophy.
philosophical or psychological processes wh ich it takes for (11.14.6 and 8; 11.13.10; II.28.6)? 13 making
• ' • • ' • O n e source is probably the Pseud o-Plut arch. See Schoedel, 'Philosophy and Rhetoric', pp. 23-4 ; Grant, 'Irenaeus', pp. 43-7: 'Irenaeus cannot be classified among philosophical schools. His interest...is more rhetorical 1 than philosophical (p. 47), l^see, for instance, his use of proverbs ( 11.19 .8 ) ; his appeal to the authority of the past, e.g. Plato; his appeal to universal opinion; the sceptical use he makes of the doxographical material. l^see F.-M.-M. Sagnard, La gnose valentinienne et le
Irenaeus dangerous accusations or right and a excessive series their of use of human In analogies, to of the
15 it
hypostasizes
mental
processes.
addition
these 'gnosis
denunciations
falsely so-called' is found. reason;
Gnostics contradict the facts are recent, less originating respectable without and,
teachings are
from S imon who is not only despicable, but also a nova tor (II.28.8 than 'recens' ) ; they teachings; therefore are between ancient they subtle,
simplicity;Gnostics cowardice. 15
disagree
themselves
lack ing practical knowledge and vi rtue, they display only
témoignage de saint Irénëe, Paris 1947, pp. 281ff. , 321 n. 1, 410. 14 S ee M. Widmann, Väter', ZTK 54, 1957, 1967, 9 pp. 265-91. H. C. Frend states that the Gnost ies ' faded out a t the time of the Great P e r s e c u t ion, and their place is immediately taken by the Manichees', in Africa at least : Frend, ' The Gnost ic-Manichaean Tradition in Roman North Africa ' , JEH 4, 1953, pp. 13-26, here p. 15. In another article, Frend writes that Gnostics, because of their readiness to syncretism and to compromise with the Greco-Roman civilization, 'were not generally molested '. ( 'The Gnostic Sects and the Roman Empire ' , JEH 5, 1954, pp. 2 5-37, here p. 28.) The fact that Gnostics least resembled the synagogue both in its ethic and in its outlook towards the Gentiles (p. 26) and did not show the form of religious exclusiveness characteristic of the synagogue and the church accounts for the relative peace Gnostics enjoyed. Frend is led to the conclus ion that ' in the first two centuries the persecutions were confined to one type of Christian who might reasonably be called "the new Israel"' (p. 35) : those men and women had been schooled to regard persecution as their lot. It is to this view that W. U1Imann ('Gnostische und poli t ische Häresie bei Celsus', Theologische Versuche 11 [eds. J. Rogge und G. Schille], Berlin 197 0, pp. 153-58) seems to take exception when he suggests that we should investigate more carefully ' [nach] möglichen Zusammenhängen zwischen gnostischer Lehre und einem Bild des Christentums bei seinen Gegnern.. . , das Verfolgung provozieren musste'. We shall return to this point below.
1
Irenaus und seine theologischen pp. 156-73, esp. 17 2f. On the
3 0 Anti-Gnostic Polemics Only a few of these assertions have real philosophical significance. Plotinus found We find many agreements between Irenaeus and is in the critique of the Gnostics ; but nothing
in Irenaeus that has the philosophical character of Vita Plotini at as not, 16 ) . th is in If one persists point philosophy in or calling they the that popular
the argumentâtion put forward by Plotinus in Enn. II.9 (see Porphyry, Irenaeus s should be
1
arguments qualified do
ph ilosophical, constitute are not
popular
wisdom.They to agree that
themselves, arguments
overthrow of Gnosis that has been promised. h is philosophical
Irenaeus seems
decisive; otherwise Irenaeus the rhetorician would not have presented them at the outset, thus conceding their relative
2.
Theological Arguments The dec i s ive from the his many the arguments predecessors theological Gnostics must and be theological an that few or
scriptura1. to him contribution. Among mounts constantly
Here Irenaeus uses all the resources available adds orig inal Irenaeus that are
arguments are a
against
there
repeated. aga i ns t
These can obviously the to Gnostics ordinary the
lead us to what accusation of
Irenaeus thought was at stake in the debate. Turn i ng ignorance accuses they address Christians, Irenaeus
them of of
ignoring God's dispensation, truth, pointed substance out how of
the rule of faith. of
faith, scripture and tradition. 'hypothesis' P. Hefner has
In a word, they ignore the the Christian this crucial concept
'hypothes is' is to Irenaeus's réfutât i o n . ^
It designates
i6
See P. Perkins,
1
Irenaeus and the Gnostics'.
1?P. Hefner, 'Theological Methodology and St. Irenaeus', JR 44, 1964, pp. 294-309. According to Hefner 1 (p. 295) ' the one highest authority that stands out in Irenaeus's work 'is the system, framework, or "hypothesis"
Irenaeus the 'organic system or framework which constitutes
17 the
shape and meaning of God's revelation'. all other norms : Lord, delivered
In a formal sense,
it functions as the ultimate norm of truth and encompasses it includes God's economy of redempt ion, by of the apostles; and it is derived the f rom is rooted in God, announced by the prophets, taught by the scripture and also serves to expound scripture; it resides in the community the Church reaches community through tradition ; it is summarized in creedlike statements and can be expounded by reason. the and ultimate to which authority all applied by which other The hypothesis of truth is guides to Irenaeus's are meet the crit icism gnost ic
au thorit ies Irenaeus
s u b o r d i n a t e d .
Concretely
assertions, it is in practice equivalent to the 'rule' that there is one God, creator of the world, Father of our Lord Jesus Christ, and author of the economy (1.10.1).
of the Faith whose substance is comprised in God ' s redemptive dispensation on man's behalf'. Th i s is the authority wh ich holds together all others and to wh ich all others are subordinated : scripture, tradition, church, b ishop, creed and revelation. Hefner's study of the not ion of 'hypothesis' of truth as essential for the enucleation of Irenaeus 1 s theological methodology ends with the suggestion that 'closer attention be paid to the concept of regula fidei' (p. 302), Reynders ('Polémique', pp. 16-7) h ad already seen the ultimate norm of authority for Irenaeus as being that ' synthèse doctrinale' which is ' le corps de vérité ' . On the meaning of 'hypothesis ' see recently W. C. van Unnik, 'An Interesting Document of the 196-228, C Tsp U , rY 206-208? 91Cal
lscusslon
' —
31
'
1977
'
PP
"
l g The question of legitimate authority is crucial to Irenaeus's refutation; the emphasis he puts on authority gives his theology its specific character. However, after Hefner's contribution it should have become impossible to pit the authorit ies whom Irenaeus cons iders normative against each other. l^The proximi ty of the not ion of 'hypothesis' to that of the ' rule of fai th' appears in N. Brox's description of the rule as 'Inbegriff dessen, was er [Irenaus] für heilsnotwendig, für tatsächlich geschehen, von Gott geoffenbart und darum für unüberbietbar hält'. Offenbarung, Gnosis und Gnostischer Mythos bei Irenaus von Lyon, Salzbürg/München 19 66, p. 113. See also B. Hägglund,
3
0
Anti-Gnostic Polemics
For this hypothes is of faith, the Gnostics substitute their own hypothes is Obviously which they "dreamt into existence ' (1.9.3). subverted Thus faith acts mental develops a nd its of by the hypothes is of faith is conceived they beyond and God's Thereby revelation is radically the Creator. destroy real and the salvific jugglings which Pleroma
if another God distort pious. own the
Gnostics their
replace
inventions, an
fictions,
processes into an aeons
into and
atemporal speculation to an
framework20 about the utterly
arbitrary
amounts
indiscrete
theologia gloriae. the apostolic and ultimately not saved, Thus hypothesis opposed to
In so doing, they not only deceive the they do not possess. Th i s
simple believers, but also show that they do not care about tradition which leads men to despair about their salvation to deny salvation. of man is salvation does not nothing V.20.1). Gnostics of freely subtract This from and and his add is to not the on ly truth. 22 arbitrariness (IV.praef.4)21
For the gnostic view of saved (II.29.3; V.6-7;
include the flesh; but if the flesh is
Irenaeus1 s
temperament
oos it ivi st
"Die B e d e u t u n g der "régula fidei" als Grundlage theologischer Aussagen", StTh 12, 1958, pp. 1-44 ; A. Rousseau and J. Doutreleau, eds., Irénee de Lyon. Contre les heresies, Livre III (SC 210), Paris 1974, pp. 220-21.
20
See Sagnard, Gnose valentinienne, pp. 259, 571.
2iSee Reynders, 'Optimisme et théocentrisme chez saint Irénée', RTAM 8, 1936, pp. 225-52, esp. 252.
22 Reynders ("Optimisme1, pp. 229-30} has collected the expressions used by Irenaeus to describe how casually the G n o s t i c s deal with truth: adaptare, assimilare, adulterare, calumniantes, transvertentes, abutentes, transferunt, auferentes, trans f ingunt, transf igurant, transformantes, solvens, compingentes, conf ingentes, figmentum, transfictio, f ictio, in captivitatem ducunt a veritate, falsi tes tes, frustrantur speciem evangeli i, circumcidentes evangelium, eligentes, decurtantes, intercidentes deminoraverunt. See N. Brox, Offenbarung, p. 197, on th is 'heillose Automonie" of the Gnostics.
Irenaeus
19
emphasis on the clear and real facts of the economy, 23 it is also blasphemous. With objections, what he the for accusation this about in of 'blasphemy'cast in its context, they and at the Gnostics by Irenaeus we come to the core of his theological accusation, own sums up men th inks the Va lent i nians : salvation, render
'disbelievers
their
blasphemous
against God who shaped them' (IV.praef.4}. faith? splits real. etc. their the very thinking and breaks about God
They are guilty is blasphemous it 8). the
(IV.praei.3) because it introduces divisions into God ; divine its uni ty God (II.28.2 and throughout the and Moreover Gnostics They introduce such divisions between
distinguish
Creator,
between Christ and Christ, between different Irenaeus most ; 2 ^ it is above all
'economies', in the
The division of the divine is the point which upsets expressed
2 ^On the positivist character of the 'true gnosis' (and also of the rule of faith), see N. Brox, Offenbarung, pp. 179-89, 196-99; HSgglund, 'Bedeutung'. 2 ^'l'his view is widespread in antiquity: adding to or subtracting from a received tradit ion is cons ide red to be blasphemous. See W. C. van Unnik, ' De la règle MTITC Trpoaee~ivct i ynxe â<j>eAe\v dans l'histoire du canon', VC 3, 1949, pp. 1-3 6, esp. pp. 32-5. This 'rule* is found above all in texts coming from Asia Minor, Irenaeus's place of orig in : see ibid., pp. 9, 36. Irenaeus (1.10.2-3) strongly emphasizes that faith is one and -_he same; it cannot be augmented by those who have a greater degree of intelligence, nor diminished by those who are less g ifted.
11.9.16,
2
'
'
'
% e e R. A. Markus, ' Pleroma and Fulfilment. The Significance of H is tory in St. Irenaeus' Opposition to Gnosticism', VC 8, 1954, pp. 193-224, esp. p. 212. Against the breaking up of the divine Irenaeus makes the case of unity, wh ich is the main theme of Adv. haer. See A. Benoit, Saint Irénée. Introduction" ä 1 'étude de sa r théologie^ Paris 19 60, pp. 203-205 : Le thème que la lecture de l'ouvrage Contre les hére'sies accentue avec le plus de force est celu i de 1'unité... Par cette affirmation de l'unité, Irénée relève le défi que lui lance la gnose. Car l'essence de cette dernière, c'est le morcellement, la
3 denigrating repeatedly Testament of
0 the by God the of the on Old the
Anti-Gnostic Polemics Testament. of God unity of Irenaeus the and Old the
counters God
gnost ic
devaluation
insisting
Creator and by affirming Testament God.
the truth and reality of the Old
This 'Ringen um den Status des AT Gottes' 27
is to Irenaeus of utmost importance. The blasphemous split of the divine introduced by the Gnostics is the starting point for Irenaeus's attack against their dualistic teach ing. dualism. attack. I renaeus, making a case for the unity wh i ch is truth and which lies in the Church goes through all the forms of dualism, rejecting each one. 'morcellement dualism: du divin
1
From that point Irenaeus the target of his
will investigate the many facets and expressions of gnostic Dualism will thus become
First of all, he attacks the we may God call and theological demiurge, between the
which
i.e.,
the split
wh ich is the central poi nt of his attack. d i vi s ion of the divine in the Plerorna
He also attacks expressed by the
the divis ion between the good God and the just God and the doctrine of Aeons. one : 'Such divisions cannot be ascribed to God is
God ' ( II. 28.4); they suppress the deity (III.25.3).
the identity of God the Creator and God the Father is (see II.31.1, where he summarizes his
the central art icle of Irenaeus's creed and the sum of his argument argument).
division, le dualisme... Il y a vu l'hérésie '.
lui-même
la
réponse à
Brox, Offenbarung, pp. 48-9. See W. Ullmann ('Gnostische und politische Häresie', p. 155), for whom the central difference between the Great Church and Gnos is resides in their 'gegensätzliche Stellung zu dem Gott der Juden'.
2 ®Irenaeus is among those who cannot tolerate the idea that creation and universe could be the work of an ignorant or imperfect demiurge (improvident, negligent, incapable, indifferent, powerless, capricious or jealous ), or the result of a downfall or of a deficiency; he cannot bear the idea that human life could be a prey to a 'mauvais génie' .
Irenaeus
21
articles are derived: sees Jesus Logos the Christological the
one Christ, one economy. dualism, (111.9.3 ; separating III.16.2;
Irenaeus from the
Christ
(IV.praef.3 ; IV.2.4; from Saviour above
III.17.4; etc.), 111.16 . 8 ;
IV.praef.3; (111.11.1 ;
etc. ), the Christ b lasphemy as well
from the Christ be 1ow
III.17.4; etc.), as a typical gnostic affirmation and as a ( IV. praef. 3 ) . Th is he attacks, as well There is as the soteriological dualism, whereby the universality of God 1 s economy and will for salvation is denied. 2 9 only one economy, which We may enumerate which Christ will recapitulate all things. 30 other forms of dualism to wh ich Irenaeus objects : Scriptural dualism, which separates the the God of the OT from the of the two convenants . is universal, and on the basis of
NT from the OT and ultimately the unity and 'harmony '
God announced by the Savior, against which Irenaeus affirms Ecclesiastical dualism, accord i ng to wh ich a distinction is made between simple believers and pneumaties, thus breaking the unity of the Church. [The spiritual disciple] shall also judge those who give rise to sch isms... and who for
Th is would ru in the idea of providence and that of human freedom, ideas central to religious thought in the 2nd and 3rd century. Gnostic dualism introduces into these ideas an element which creates anxiety since it implies that 'Gott als König herrscht, aber nicht regiert ', and that therefore 'die Herrschaft Got tes zwar gut, aber die Regierung des Demiurgen...schlecht ist ' (E. Peterson, Der Monotheismus als politisches Problem, Leipzig 1935, pp. 201). Irenaeus's parti-pris for optimism, which is not based on philosophy and which represents a form of instinctive humanism, leads him to counter all that threatens order in universe and life. See Reynders, 'Optimisme'. It should be noted here that, while Irenaeus finds comfort in the idea that the Creator is close to the world, Gnostics despise the Creator precisely because of h is proximi ty to the world.
29
See Brox, Offenbarung, p. 178. See Benoit, Saint Irénée, pp. 219-27.
30
3 trif1ing which divide
0 reasons, to great the or them, and any cut
Anti-Gnostic Polemics kind in of reason and of
occurs
pieces body
glorious
Christ, and so far as in them lies, destroy importance can be effected by them, as will compensate for the some mischief are said which arising
31
from
their schism (IV.33.7; see IV.26.2). Social dualism, whereby evil, by nature contradicting (IV.37 .2 ) ,
to be good, others Irenaeus sees as
the equality of all men before God's offer 3 2
and as threatening the unity of the church and its peace. Practical dualism, according to which some recommend, over against the common by discipline, or be the either the of rigorism the from sothe attainable attacked by only a few can libertinism as
called superior men. Irenaeus, fundamental
These forms of dualism, detected and seen derived
theological dualism dividing the divine or as
d iverse expressions of a metaphysical dual ism opposing the world above to the world below, spirit to matter. 33 11 may be surprising that the focus of Irenaeus 's It understood
charge against the Gnostics is their dualistic outlook. is not my intention to decide whether Irenaeus
3 isee Sagnard, Gnose valentinienne, p. 506 and pass im. C on t ra ry to th G C1 me n t of Str. VI and VII (e.g. Str. VI 1. 2 and 16) and to Origen (In Jo. II.3.27-31; In Matth. 12.30) Irenaeus does not see room in the church for classes, distinctions, and levels due to degrees of perfection and understanding. 32
See Brox, Offenbarung, p. 178.
33 Reynders ('Polémique, p. 27 ) says concerning the hardeninq of gnostic dualism in Irenaeus's description of it : 'Aurait-il été si difficile de rapprocher les points de vue en transportant, par exemple, le dualisme du champ de la métaphysique à celui de la psychologie? '. But Irenaeus has not completely neglected to do so and to reduce gnostic speculations to exercises in thought: see 11.12.2; II.13.1-10.
Irenaeus
23
understanding, 35 or was fair to their profound
concerns. 3 ^
The fact is that in describing the mitigated dualism which is Gnosticism 37 Irenaeus perceived two essential aspects of its dynamic in system: the first, of its aeons emanat ion is t and, scheme its expressed doctrine secondly,
dualistic outlook. gnostic sources system, the exclusively. concentration Irenaeus's claims. second on
He describes both aspects, rely ing on But when he attacks their comes we to the have fore to almost look at In order to account for this shall for motives rejecting gnostic aspect
(I.praef.2).3®
Why is it so? dualism
non-theological
3 ^See verdict.
Widmann
('Irenaus', of
p.
171)
for
a
negative
35 For important elements see Brox, Offenbarung, passim.
that
self-understanding
3 ^F. Wisse, 1 The Nag Hammadi L i bra ry and the Heresiologists', VC 25, 19 71, pp. 20 5-23, thinks that
d iscourse and writings from the point of view of doctrine, while they propound rather a sort of 'mystical poetry' (p. 2 22). Further, Irenaeus would have too readily assumed that uni ty in doctrine is the only kind of unity. 'By taking the differences in mythological detai1 as doctrinal differences, Gnosticism came to look like an absurdly fragmented movement1 (p. 221). Also K. Rudolph (Gnosis, p. 16) says concerning Irenaeus 's knowledge of the Gnosties : 'Sein Wissen [ist] sehr begrenzt und einseitig gewesen'.
37 Mit igated dualism is opposed to absolute dualism, wh ich is more static and leads to withdrawal from the world ; the latter is found in Manichaeism, but is marginal in Gnosticism. The medieval Cathars will have both forms of dualism—mitigated and absolute. See C. Thouzellier, ed., Le livre des deux principes (SC 198), Paris 1973 ; C. Thouzellier, Catharisme et valdéisme au Languedoc, Paris 1969.
38See
F#
wisse, 'The Nag Hammadi Library', pp. 212-19.
3 0 3. Socio-political Motives
Anti-Gnostic Polemics
There are a number of incidental remarks found among Irenaeus's arguments which arquments mentioned extensively signif icant. Because of the arguments Irenaeus's to that the opposing temperaments of Irenaeus and not expect the Irenaeus suspicion as the to And present indeed by are exclusively reflects intellectual. Gnostics,we refutation such of a third cannot be considered These passages they In in some as having constitute they are are only is philosophical or theological character. kind. almost as a casually ; theme. others, But their
implicit in Irenaeus's refutation.
Never are they treated presence
harbored of
simple Christians of a theological speculation that seemed endanger bas ic truths unity God • Irenaeus imagines himself as the spokesman of the masses, in the tradition and in the faith of the Thus he reflects and propounds a form and is suspicious of those
strongly anchored average Christian. of
1
popular
theology '
39»Deux temperaments incompatibles...' (Reynders, Polémique * , p. 27). See T. A. Audet, 'Orientations théologiques chez saint Irénée', Traditio 1, 1943, pp. 1554, who speaks of a spontaneous rather than an intellectual reaction to Gnosticism (pp. 33-39). 4 Owe take the problematic concept of 'popular theology ' to mean here the faith of the simple Christians as opposed to the speculations of the learned. Thus also Reynders, 'Polémique', p. 22: 'On trouvera sans doute qu'Irénée, soucieux de sauver les simples et les doux, a un peu négligé les meneurs'. On 'popular theology', see H. J , Carpenter, 'Popular Christianity'. Carpenter tends to find in the Apostolic Fathers themselves 'the bulk of popular Christianity throughout the second century and well on into the third * (p. 296 ). It is regrettable, though, that 'popular' is here left so loosely defined and only seems to mean the 'majority view ' . his eyes are mere philosophers, and bad ones at that. for unity and unanimity discourse This e xpressed and behavior ; for they His natural propensity the had WG 1 *£. C I I R S already of
is shocked by their undisciplined endanger unity
individuals as well as of the Empire.42 concern peace and been in Irenaeus's intervent ion against rigoris t and
encratite tendencies that could divide the church ; Irenaeus s ided with those who favoured tolerance and indulgence for the lapsi. it is poss i ble that he even defended the and he was inclined to show them tolerance.
M o n t a n i s t s 4 4
1924, pp. 5-37, for whom also popular faith means the fai th of the s impie. I l e notes 'parfois opposition, plus souvent un désaccord ou du moins un malentendu entre la spéculation des savants et la foi des simples' (p. 481). See also J. Lebre ton, 'Le désaccord entre la foi popula i re et la théologie savante ' in Fliche-Martin, Histoire de l'Église 2, Paris 1948, pp. 3 61-374. Looking ahead to the upcoming evolution, it could be argued that the line of development will go from 'faith of the simple' to 1 common faith' (faith of the masses ) to orthodoxy. Further on simple faith and theology, see N. Brox, ' Der einfache Glaube und die Theologie. Zur altkirchlichen Geschichte eines Dauerproblems 1 , Kairos 14, 197 2, pp. 161-187, esp. 167-168 on Irenaeus,* A Komigl iano, ' Popular Religious Beliefs and the Late Roman Historians', Studies inChurch History, eds. G. J. Cuming and D. Baker, vol. 8, 1972, pp. 1-18.
41 For a typical statement of this suspicion, see Adv. haer. II.26.1.
42see in th is context A.H.M. Jones, 'Were Ancient Heres ies National or Social Movements in Disgu ise?', JTS 10, 1959, pp. 280-298. In the later Roman Empire 'the generality of people f irmly believed that not only individual salvation but the fortune of the empire depended on correct doctrine' (p. 296).
43 See Eusebius, HE V.1-2 ? 11-18. On this controversy and Irenaeus 's part in it, see P. Nautin, Lettres et écrivains chrétiens des Ile et Ille siècles, Paris, 19 61, pp. 33-61. 44 Eusebius, HE V.4.1-2. See N. Brox, 'Juden und Heiden bei Irenaus', MT2 16, 1965, pp. 89-106, esp. 105. See the "Excursus" below, pp. 34-40.
3 He was ready was not attack lies
0 to praise in beyond
Anti-Gnostic Polemics the Empire for favoring attacking his scope the and could unity and such an only have
interested
'pagans ' ;
contributed
to jeopardizing
the peace of society.
In fact Why?
he showed himself to be much harder on the Gnostics than on the Jews and was altogether gent le with the pagans. 4 5 passionate rejection? Before writing against the Gnostics, Irenaeus enjoyed the reputation those unity groups was seen of a peace-maker,
1
Does the proximity of the Gnostics alone account for their
tolerant and permissive. generally emerged among and opposed Where was might i ndulgence where
(11 is signif icant that 'orthodoxy that as favoured already reacted rigorism.) challenged,
But his permissiveness went only so far. broken, Irenaeus strongly. Montanists
authority
have represented and perceived it
the same threat as the Gnostics ; but they Irenaeus witnessed the gnostic preaching in the of Christian Church. the as a divis ive element endangered miss ion
were far from Gaul. communities wh ich
Irenaeus complains with that the it image of was
indeed that those who corrupt the truth the church' (1.27.4). that Concerned they bring as the church, he to take thinks
'affect the preaching of dishonor upon it (1.25.3). poss ible typically
Celsus had just (ca. 178) shown gnostic of extravagances Gnostics folly the cou Id
Christian.The
draw the attention of the civil author it ies.
' Men hearing
the things which they speak, and imagining that we are all such as they, may turn away their ears from the preaching of the truth ; or, again, seeing the things they practice, may defame us all, who have or in fact no fellowship or in our with daily them, either in doctrine in morals,
45 See Brox, 'Juden und Heiden ' : 'Die Juden sind ant igr.os t isches Argument ' (p. 9 6 n. 15a). 'Irenaus kennt die Heiden nur friedlich... ' (p. 104).
pp. 153-56. According to Ulimann, 1 gerade die gnöstische Haltung gegenüber Welt und Menschheit ist es, die er [Celsus] als die typisch christliche ansieht' (p. 155).
Irenaeus conduct' (1.25.3). at stake. 4 7
27 Clearly the reputation of the Church is these 'magicians' from (see 11.31.1-3) who time
It is imperative to stress that the Christians Likewise, it is essential that those radicals at a
have noth ing to do with Christians hold upon the dissociate martyrs
and instruments of Satan.
themselves
'unauthorized assemblies' (II 1.18.5;
(111.3 . 2 ) and pour contempt IV.26.3; IV.33.9)
when the church needs to offer a common front to a society still suspicious and not qui te ready to welcome Christ ians Irenaeus sees that, makes Irenaeus a long in addition to unity concerned complaints authority and to of in its own him effect. presbyters to
especially series of th e
urges that the
formulate Gnostics (V.20.2:
relativize
'Those...who desert the preaching of the church,
call in question the knowledge of the holy presbyters, not
47 I n th is context one might look at orthodoxy in terms of 'ecclesiastical vested interest'. For a survey of this question, see R. A. Markus, ' Christianity and Dissent in Roman North Africa; Changing Perspectives in Recent Work', Studies in Church History, ed. D. Baker, Vol. 9, 1972, pp. 21-36. 48 T O be sure, Gnostics are generally seen by Irenaeus as Christ ians, since he calls them to repentance and conversion. He does not consider them to be clearly outside the church. At least they are close enough to the church as to represent a threat. He himself is so close to them that he cannot, for instance, say (as Tertullian will do in Adv. Val. IV.1-3) that Va lent inus was an intelligent person, without feeling that he would be conceding too much.
^Carpenter ('Popular Christiani ty ', p. 297 ) thinks that Irenaeus writes against the Gnostics at a time when they had already been overcome: i.e. when the 'masses' had already rejected them. ^ Is that so? It seems that the precisely because they teach 'inside'. Were they to teach 'outside', as Cyprian said concerning Novatian, Irenaeus should not be curious about what they say. See S. L. Greenslade, 'Heresy and Sch ism in the Later Roman Empire', Studies in Church History 9, 19 7 2, p. 8; N. Drox, Offenbarung, p. 22.
4
3 taking is a
0 into consideration simple but of sophist' ) ; Jesus.51 are mere share insinuate,
Anti-Gnostic Polemics of how much greater man, than a they criticize in the of consequence and Church's Church,
5 2
religious
blasphemous the the
impudent Gnostics thus
understanding
Authorities servants
d e m i u r g e ;
these authorities and
in the demiurge's freely with
ambiguous and and
nature and have no power over the 'children of the Father'. Scripture misused. sacred tradition They are clash accommodated authorities Gncst ies crit i cize what the whole Church holds as (1.10.2). all 'They affirm that many of his disciples' If one adds to th is vs J L c i t z ion in his
undermine them.
were mis taken about Jesus (I.30.13). 53 mâ d lG to i nô n J ins t Gsd of
their understanding of revelation as a direct communication s G G I oÇ J G od his torical ocisf Geschichte der it might be said ist that they reject all niemals Gnosis
forms of mediation.
As N. Brox says, 5 5 for the Gnostics ünheilsgeschichte, Welt, welche den mit
Heilsgeschichte, denn sie ist ja das Markmal demiurgischen und Heil nichts zu tun hat...[Der Gnostiker] kennt ausschliesslich jenseitiger vertikalen Die ohne und Einbruch Of fenbarung.
Gnos is erreicht
ihn im Augenblick
Vermittlung wirk1 icher geschichtlicher Überlieferung oder Autorität.
5
^See Brox, Offenbarung, p. 119. ^See Brox, Offenbarung, p. 122.
5
52 Sec E. H. Pageis, '"The Demiurge and His Archons"—A Gnostic View of the Bishop and Presbyters?', HTR 69, 1976, pp. 301-24, esp. 315-16, 319-20. 53 See Brox, 'Antignostische' , Offenbarung, p. 122.
pp.
273-75;
id. ,
54 S ee N. Brox, 'Offenbarung — g n o s t i s c h und christlich', Stimmen der Zeit 182, 1968, pp. 105-17, here 109-11. * 55 Brox, 'Offenbarung ', pp. Markus, 'Pleroma', pp. 219-24.
110-11.
See
also
R.
A.
Irenaeus Irenaeus particular rooted in to a ist seems have to have been very the impressed by
29 the and und him
d isruptive attitude of the Gnostics in his entourage and in perceived gegen des undisciplined Geschichte und in To revolutionary character of their outlook. 'Revolte Negation it has Zeit, Welt...Sie Stehenden ' ; Vorhandenen This outlook is Geltung
revolutionary
contours.
Gnostics show radical tendencies. Without taking sides heresies strong were disguised (and acknowledged Irenaeus
i m p
on the issue of whether ancient social m o v e m e n t s , i t saw
e t u
has to be to
it) that Gnosticism had a
s . i
revolutionary
want
therefore
^ B r o x , 1 Ant ig nos tische ' , p. 277 . See H.-Ch. Puech, 'La gnose et le temps ', Eranos-Jahrbuch 20, 1951, pp. 57113 (now in En quête d e l a gnose, 2 vols., Paris 1978); K. Rudolph, Cnos is, pp. 7 2, 281-90, 310 . According to Rudolph the gnostic movement 'enthält eine Kritik an allem Bestehenden, die in der Antike kaum ihres gleichen findet' ( p . 2 8 1). Rudolph strongly emphasizes the 'gesellschafskritische und soz ialkritische Haltung der Gnosis ', its 'Ablehnung der diesseitigen Protest' (p. 310). Aga in, K.-w. Tröger, in Actes du Colloque international sur les textes de Nag Hammadi d'aou"t 1978 (ed. B. Bare), (forthcoming), develops a view close to Rudolph's. The gnostic religion, as it appears in the Nag Hammadi texts, directed its protest not only against the established Church, but more specifically against the Church's assertive view of the Old Testament as well as against the this-worldliness of the Jewish tradition.
57 See W.H.C. Frend, 'Heresy and Schism as Social and National Movements', Studies in Church History 9, 1972, pp. 37-56. A.H.M. Jones ('Popular', p. 295) writes about later heres ies: 'Modern h istorians are retrojecting into the past the sentiments of the present age when they argue that mere relig ious or doctrinal dissension cannot have generated such violent and enduring animos ity as that evinced by the Donatists, Arians, or Monophys ites, and that the real moving force behind these movements must have been national or class feeling '. Granted ; but it would be unwise to exclude a priori the impact of non-theological or non-religious factors in the emergence of the main stream in the Church. Moreover, the Gnostics' doctrines obviously had pract ical and political implications which were perceived by their opponents. 58
Rudolph
(Gnosis, p. 287) speaks of 'Sprengkraft'.
30 suggest socially whole that Irenaeus so. perceived in the
Anti-Gnostic Polemics gnostic movement to as
subvers ive
addit ion,
of
course,
being
theologically church Why then
Celsus on the other hand, who saw the danger, only extended attack he to it the the The
as a social did
accusation that Irenaeus reserved for the Gnostics. Irenaeus almos t exclusively also describes . dualistic aspect of Gnosticism? He says little against its emanat ionist scheme, wh ich emanation pri nciple is not seen as socially subvers ive; it is only said to be arbitrary and absurd. outlook represents a social threat. criticizes what is the status quo. authority, challenges universally But the dualist received, for and It spares no mundane disturbing
Its potential
peace and order knows no limit and, consequently, Gnostics are seen as dangerous radicals. Since his that attack Irenaeus upon saw the It is greatest might of ten be
6
th reat that the the by
in
the
dualistic aspect tradition
of Gnosticism, he decided that. subversive
to concentrate dualist same way 'simple
contains
e l e m e n t s
^
i
n
'learned
theology '
suspected
can here recall the inspiring statement found in E. Peterson, Mor.othe ismus, pp. 10 4-5 n. . 1 6: 'Die politischen FoIgen eines qnostischen oder dualistischen Weltbildes sind m. E. noch niemals in einem grösseren Zusammenhang dargestellt worden'. We do not pretend to carry out the task indicated by Peterson. But it seems appropriate to repeat here his invitation which should be seen as a complement to the repeated calls for the study of the sociology of gnosticism. See H. A. Green, ' Gnosis and Gnosticism. A Study in Methodology', Numen 24, 1977, pp. 95-134. ee H. Jonas, 'A Retrospective View', Proceedings of the International Colloquium on Gnosticism, Stockholm August 20-2 5, 1973, Stockholm 1977, p. 14 : 'Gnosticism has been the most radical embodiment of dualism ever to have appeared on the stage of history...It is a split between self and world, men's alienation from ^ nature, the the spirit and the nihilism of mundane norms ; and in its general extremist style it shows what radicalism really is'. The medieval Cathars can throw some light on the subversive aspect of the dualist tradition. They too were s trongly critical of the visible church and anticlerical ; 60S
Irenaeus believers' attack Irenaeus teachings have given of overthrowing exclusively elements He and the the on might faith.61 one aspect seen in focusing of in
31 his
almost
Gnosticism, to gnostic dualistic that writing were his
neglected social him
that are essential have implications for
self-understanding. abhorrent to him. refutation.
political
These non-theological factors might well ultimate motivation
Irenaeus thought he was in a better position to answer the not such odds. gnostic succeeded a
62
threat in
than having
his the But
predecessors Gnostics he surely
were
(see and the at
condemned reinforced already
expelled.
It is not certain that Irenaeus himself achieved overthrow. between the two groups that were
final
polarization
they res isted the structures of the church, especially the Gregorian structures. See C. Thouzellier and E. Delaruelle in Hérésies et sociétés dans l'Europe préindustrielle 1118e siècle (ed. J. Le Goff), Paris/La Haye 1968, pp. 111 and 153. 61-The tension between simple believers and learned theologians only reflects the tension, recurrent throughout the history of theology, between Amt and learned theology. This is not to say, however, that the masses had no share in the gnostic movement. See N. Brox, 1 Antignostische', p. 289, n. 69.
62 I t has long been fashionable to say that the impact of the conservative Irenaeus is limited to the West. It is attested that even the Middle Ages generally ignored Adv. haer. (perhaps because of the no longer acceptable eschatologica1 section in Book V) ; Augustine quotes Adv. haer. only a few times {see SC 152, pp. 46-8) . But there are now indications that Adv. haer. was known, quite early,
century as being a section of Adv. haer. III.9.2-3, indicates its presence in Upper Egypt at the end of the second or the beginning of the third century. This leads Doutreleau to state: 'L'oeuvre d ' I renée...serait ainsi parvenue...à plus de 400 kilometres au sud d'Alexandrie, quelques vingt ans, et peut-être plus rapidement encore, après sa rédaction à Lyon 1 (SC 210, p. 128). Many passages of Clement's Str. are strikingly parallel to Adv. haer: Str. II .72-75 (Stählin); III.216-7; VI. 503.10-17 ; VI 1.18.
30Anti-Gnostic Polemics In trying to determine Irenaeus's motives and reasons for attacking Irenaeus1 s question the Gnostics, I do not mean to de-emphasize reasoning. Nor do I intend to contribution. I am theolog i ca1
the value of his theological
rather interested temperament
in finding out what decided Irenaeus to I th ink that his decision. The for part of his
oppose the Gnost ies in the way he did. accounts
subversive character of Gnosis represented an instance that he cou Id not see reconciled with the life in the
Church.^3
Irenaeus appears to have thought that Gnos is was aiming at destroying all that the apostolic tradition had transmitted and that constituted the foundation of the church. His attack was against a life-enemy. Irenaeus thus contributed 1ines. beyond although century were Church,64 only After the him, poi nt never he fixed. to the formation of battledid not step have Dual ist ideologies the Christ ian community
cons is tently have been combatted throughout the centuries, they completely not disappeared. the chief (Cathars for on In the 12th enemy of and threat the it of Bogomils gnostic dualism was still important small to an in numbers but based minorities), institution
the
represented authority.
the princ iple
Origen himself would have known Adv. haer. if one accepts A. Le Boul luec ' s hypothes is in ' Y a-t-il des traces de la polémique anti-gnostique d ' Irénée dans le Peri Archon d'Origène?', Gnosis and Gnosticism {ed. M. Krause), Leiden 1977, pp. 138-47. See K. Koschorke, Die Polemik der Gnostiker gegen das kirchliche Christentum, Leiden 1978, p. 247 n. 15. 63 N . Brox (Offenbarung, pp. 33-5) points to the ' versöhnliche Haltung 1 öf the Gnostics who did not see their Gnos is as d irectly contrad icting the church. 'Sie wollen nicht ausserhalb als Häretiker, sondern in der 1 Kirche als Pneumatiker gelten (p. 34). But Irenaeus refuses all compromise with them and insists on seeing in gnostic groups heretical 'Konventikel' ('unauthorized assemblies' or rival communities : see SC 210, pp. 223-36 on Adv. haer. II 1.4.2 ) . In that way he helped to force them out of the community. ^ 4 See J. Duchesne-Guillemin, Stuttgart 1959, col. 349. 'Dualismus', RAC 4,
Irenaeus Irenaeus ' s Theologians to bow rejection
a n t i s o m a t
33 contribution told to avoid and
t h a t
to
the
exclusion
of
the and this and the that Such the In th is its in
Gnostics surely deprived the church of colorful tendencies. were also view dangerous speculations the be majority. But with gnos tic It plan ant icosmism further before author ity meant said of God 1 s were
i s m ^ S
to
irreconcilable meant of
Christian of all
salvat ion. universal struck that
discrimination among men was attacked before assertions simple time and retrospect, in identity prevented certainly the fact
in favor of equality salvation. chord among at save majority. forward That
a responsive the came Irenaeus helped melting being
believers
who
cons t i tu ted
the way
he did from
Christianity pot.66 a marginal
in the Greco-Roman Christianity
67
in turn
movement
the Western world. But with impetus was
the triumph of Irenaeus's itself felt.
ideas in Rome and in the to the An
of the Roman theology in the fourth century, a conservative to make Irenaeus's part rejection of gnos is in favor of pistis contributed choice of an authoritarian structure authoritarian
c h a l l e n g e s , 6 8
in Christisnity. to meet and
pattern of of
was
devised
heretical that of
the essential features of this pattern being antiquity Christian (apostolicity ) was development, Th is pattern element to be the while obligato for
the for
criterion centuries became
consent style
(majority).
Irenaeus'H polemic
a standard
in Christian
centuries.
6 5 '...dann ist an der gnostischen Beschimpfung der Welt und i h re r Apos troph ierung als "Illusion, Sehe in, Nichts" sowie an den Protesten des Plotin der entscheidende Unterschied (mit Neupiaton ismus ) abzulesen 1 . N. Brox, 'Antignostische', p. 28 0 n. 42. Christi anity, London/Philadelphia 1972/1971, p. 240« 6 7 see K. Rudolph, Gnosis, p. 391«
6ö
S e e S. L. Greenslade, 'Heresy and Schism', pp. 1-20.
3 0 Anti-Gnostic Polemics
Excursus :
Irenaeus and the Montanists
Since Irenaeus is eager to protect the faith from any deviation, his silence concerning Montanism is surpris ing. He voices no clear objection
2
against
the
'new prophecy'^ (according
3
that or ig i nated does not even
in his own As ia Minor in 156/7 (according us with provide any
to Epiphanius ) or 172/3 movement.
to Eusebius ); and he information on the
Could it be that he agrees with Montanist views Do we find elements of an answer easy to explain to Irenaeus's Baur and was the the to On
invo.1 ved in the movement? It gnostic would be
in recent stud ies of Montanism? relatively of s ilence if we were to follow F.C, Baur1 s view of an anticharacter between a final Montanism. the 'Jewish' versions of According conflict result (Petrine) Christianity
'Hellenistic' into
(Pauline)
synthesis
in early
Catholicism.
this view, Montanism would clearly find its place along the Jewish line over against the Hellenistic Gnosis and thus be on Irena eus 1 s side. to Such a picture has of Montanism as a counter-movement Gnosticism prevailed until
-*-Adv.haer. IV. 3 3 . 6-7 may have been directed Montanism, but no explicit mention of it is made.
2
against
J. A. Fischer, ' Die antimontanistischen Synoden des 2. /3. Jahrhunderts ' AHC 6, 1974, pp. 241-273, favors the earlier date: 'Man darf daher ve rmu ten, da s s der Montanismus um 157 a u f t r a t und die Bewegung des erreichte,*" was zu führte' (p, 247). Gibson, 'Montanism Universi ty 1974.
3
ihrer Trennung von der Grosskirche The same position is found in Elsa and its Monuments', Diss. Harvard
Th e later date is p r e f e r r e d by D. Powell, 'Tertullianists and Cataphryqians', VC 29, 1975, pp. 33-54, esp. p. 4 1 ; and by T.D. Barnes, 'The Chronology of Montanism', JTS NS 21, 1970, pp. 403-408.
Excursus
35
recently, taking its cue from the Montanist Tertullian who did fight the Gnos tics. K. Froehlich 4 app roach. has taken this a radically different as Questioning 'antithetical picture'
inspired ultimately by a Hegelian scheme, he has shown, on the basis of the Montanist oracles, how much Montanism and Gnosticism have in common. He pointed
1
to
the
striking
'closeness of terminology and thought' (p. 108) in the two movements and concluded in which proximity Is He view Jewish that there is a common matrix part' (pp. 109elements played a major
of the two movements, local proximity been of
it becomes most the Gnostics wi th did a to
difficult I renaeus so
to explain Irenaeus ' s s ilence on Montan ism. the sufficient to account for his passionate rejection of them? would not have concerned movement geographically remote from Gaul as Montanism was. encounters difficulties. with counts Irenaeus valley ; heretics, little not he doctrinal primarily gnostic all worries groups to the Rhone local the for we all around although did But this limit his he was attack the plead and
concerned factor If it
Mediterranean have
geographical heresies. ignorance. was making
in a refutation of Irenaeus movement 17 2^, it
Further,
cannot
is granted
that the Montanist between 156 and
its strongest
impact
would be dif f icult to see how Irenaeus could have ignored it unless he had already left Asia Minor at that time (he was in Rome under An i cetus, who was bishop there from 154 on ) . the But even if it. were granted that he had not known of movement while in Asia Minor, how could Montanist
Irenaeus have totally ignored the cris is that was echoed in
^K » Froehlich, 'M.ontanism and Gnosis', in The Heritage of the Early Church. Essays in Honor of the Very Reverend G.V. Florovsky, eds. D. Neiman and M. Schatkin, Koma 1973, pp. 91-111, TrTZ ^See Th. Baumeister, 'Montanismus und 87, 1978, pp. 44-60, here pp. 49-50. Gnostizismus',
3
0
Anti-Gnostic Polemics
T.yons itself a few years before he undertook the writing of Adv.haer.^ churches as witnessed by the letter he of wrote 7 the to the in Asia and Phrygia on behalf confessors
issue that occas ioned Irenaeus 1 s letter had to do with the laps i favored and the att itude to take entered toward them (Irenaeus indu lgence ) , and that th is letter does not prove the Rhone country ; it
that Montanism
as such had
only proves that the events taking place in Asia Minor were known to the churches of Vienne and Lyons.® If the geographical s ilence, can factor the alone cannot account for Irenaeus's quarter of third^. here with chronology be of some avail?
The ant i-Montan i s t literature was written during the last the second century and the first decade of the obviously alibi. leaves Montanism, At the time he at least writing early his that to Irenaeus who cannot be provided was This an
Montanism, contemporary ref u tat ion of
the Gnostics, he must have been aware
6J.a. Fischer, 'Die antimontanistichen' , p. 247 : Schon um 177/178 bezeugen die gallischen Gemeinden von Lyon und Vienne Kenntnis der "neuen Prophetie"'.
1
Nautin,
Lettres^ e^t écrivains
chrétiens
des
Ile
et " n i e
^Nautin, Lettres , p. 100 writes concerning the information that prompted the letter of the churches of Vienne and Lyons: 'La lettre que l'évêque d'Éphese lui l=Irénée] avait écrite signalait que les adversaires asiates de l'indulgence se réclamaient, en plus du titre de 'martyrs 1 , de révélations charismatiques. 11 est croyable que c'étaient celles de Montan, de Priscilla et de Maximilla; mais le nom des prophètes n'est pas donné'. See also pp. 39-43. 9p. Blanchet ière, ' Le mo n t a n i s me originel I', RevSR 52, 1978, pp. 118-134, here p. 132; Th. Baumeister, 'Montanismus', p. 52. J.A. Fischer, 'Die a n t i mon t a n i s t i s c h e n ' , p. 2 5 7 , holds that the first antimontanist synods (not identical with the earliest ant i montanist literature) met around 200.
Excursus others we re engaged wish to We leave roust it Montanists ? look more itself close ly if we at want the to nature of in debating about Montanisml^. to others to speak
37 Did he the the why
against
Montanist
movement
understand
Irenaeus remained si lent. Christian the in world. prophetism, belief in in
There is wide agreement today on continuing or reviving early prophecy of of centered the the end around of the and proximity experience
the nature of the movement : intense a program the the
Montanist
The accompanying eschatological exaltation resulted which Spirit
moral, rigorism occupied an important p l a c e ^ . s imilar generally to early been Christian eschatology p as . the movement
1
has as
characterized
or as
'conservative ,
or
' restorative 1 13f
' reactionary '
It has also been
declared archaic on the basis of formal similarities with early Christian expectations. Montanist and movement, was prophecy the But archaizing or not, the the original of a expectation for struggle in reviving
express ion
l^F. Wisse suggests that Irenaeus1 s silence may be expia ined by the fact that Justin's Syntagma (before 147 ) or a reworking of it, which he follows, was silent about Montanism. Usee K . F r o e h 1 ich , ' M o n t a n i s m ' , p. 9 2 ; Th . Baumeister, 'Montanismus', pp. 48-50 ; J.A. Fischer, ' Die antimontanistischen', pp. 241-244, 261-263 : all rely on standard presentations of Montanism in the last 100 years. l^c. Andresen, Die Kirchen der alten Christenheit, Stuttgart 1971, p. Ill : ' In gewisser Beziehung trägt der Montanismus Züge eines revolutionären Konservatismus'. 13 k . Aland, 'Bemerk ungen zum Montanismus und zur frühchristlichen Eschatologie' in Kirchengeschichtliche Entwürfe, Gütersloh 1960, pp. 105-148 ; ' . . .Versuch einer Restauration' (p. 143). Paulsen, ' Die Bedeutung des Montanismus für die Herausbildung des Kanons', VC 32, 1978, pp. 19-52, here p. 39. l^See F. Blanchetière, * Le montanisme RevSR 53, 1979, pp. 1-22, here p. 19. originel II 1 ,
3 identity in
0 remote geographical
Anti-Gnostic Polemics areas for which the
Thus eschatological exaltation, dramatic experience of the Spirit, and moral discipline seem to have characterized the the Montanist church. movement. No essential There
1
appears or
to be nothing subtraction'
in was
this picture that would be a departure from the doctrine of addition made to the beliefs held as orthodox. of the movement have generally that it was not its beliefs, but Th is is why students its eschatological block
been led to the conclus ion
attitude expressed in charismatic and ecstatic prophecy and in eth i ca1 rigorism wh ich const ituted the stumbling for the opponents of Montanism. S ince Irenaeus he would Elenchos beyond But even as and some the have Montanist noth ing to doctrine it say in was is not perce ived by The (Ref. objectionable^ 7 , Ep iphanius innovations did also
understandable the movement. of praxis f ind many
that
about
do not the
objections
realm
111.19 . 2 ;
Pan.haer. 48 .1.4) . I renaeus fail to discover
1 1
not. only
anything
theoretically objectionable in the a certain congenial ity is at conservât ive shared with contexts, threatened sympathy element view recent play. the
new prophecy ; there is the two. Irenaeus In both a might have was had
between
the Montanists, beyond the by that
di fferences due to the Church 1 s He identity pos it ively
developments.
for the Montanist eschatological message. testifies with Montanist eschatology. Epideixis for
Book V 99 and
of Adv.haer. in keeping charisma.
to millenarian views that are well Irenaeus ' s respect prophetic
Adv.haer . II.32.4 . show
Further, if one grants the archaizing nature of
16g e e Th, Baumeister, 'Montanismus', pp. 52-53. l ? See J. A. Fischer, ' Die antimontanistischen ', p. 245 and note 26. It was only decades later that their doctrine of God and of the Trinity was questioned (see pp. 263273) .
Excursus
39
Montanism, such a feature was not in itself a problem for the bishop of Lyons. also agreed with He had very strong feelings about the on the normative validity of normative character of the primitive, apostolic times, and Montanists
1 o
the written tradition-10, although they would not admit with him that the time of revelation was definitively over. Irenaeus was investigating systems of thought that put forward free interpretations and resulted of at from as the original in message of Christ ianity behavior. moral did times a reprehensible interest in matters in of
He was not concerned about refuting excesses in resulting to Irenaeus zealous In its origin at least, the Montanist view diverging
rigor ism appear
orthodox faith. not
faith, or as drawing practical demands from wrong premises. At times Irenaeus did oppose rigorism; but he saw no room for such an oppos i t ion in a work written to re f u te heresies. Be that as it may, a riddle remains. that Irenaeus knew of the Montanist noth ing object ionable in it. better, noth ing itself. it is dif f icult in could he We have assumed and found movement
We must now argue that he had Had he known it could of the have found to the doctrine How could how he
only a limited knowledge of the movement. to see some have object ionable How tenets
overlooked
challenge
church leadership embedded in ecstatic prophecy?
Tertullian, to further a church of the Spirit over against a chu rch of the bishops?! 9 was this only a later
18
See H. Paulsen, 'Die Bedeutung', pp. 51-52.
1 ^See Th . Baumeister, ' Montanismus 1 , p. 50 ; J . A . Fischer, 'Die antimontanistischen', pp. 262-263. C. Andresen, Die Kirchen, p. 115, describes the Montanists as 'radikale Kritiker allen Kirchentums 1 ; bu t if this description is correct, and given Irenaeus's ecclesiology, it is difficult to understand how Andresen can make the following categorical statement concerning the Montanists : 'Man fand in Irenaus einen Fürsprecher' (p. 111). It is the martyrs of Lyons, rather than Irenaeus, who would be in agreement with the Montanist thesis, according
3
0
Anti-Gnostic Polemics
manifestation of the movement that Irenaeus could not have observed?
to H. Kraft, ' Die lyoner Märtyrer und der Montanismus', Les martyrs de Lyon (177), Colloques internat ionaux du CNRS (20-23 septembre 1977), Paris 1978, pp. 233-247. Among both groups, Kraft speculates, we observe the same ecclesiological vis ion and the same e mp h a s i s on the superiority of charismatic ministry against institutional ministry. He says concerning the martyrs : 'Sie [waren] dem institut ioneilen Amt gegenüber zurückhaltend' (p. 2 39). However, Kraft concludes (p. 24 3) 'dass wir die Lyoner trotz ihrer Anerkennung der montanistischen Prophetic, trotz ihrem Enthusiasmus und trotz ihrem Eintreten für den Montanismus doch nicht als Montanisten ansehen können'.
II.
THE ELENCHOS AGAINST ALL HERESIES
The
study
of
the to
arguments riddle
put
forward
against the with
the name the to
Gnostics in the Elenchos3-, as any study of the Elenchos, is i nevitably become bound the Since associated to deal with here ' Hippoly tus1 . Every th ing concerning I I ippoly tus has indeed we wish
enigmatic.
Elenchos to the exclusion of any other work attributed Hippolytus, connected Who with was the authorship of and scope of the The Soon to
a brief consideration of some of the problems Elenchos work was
should be enough to justify such a limitation. the author the Elenchos? in 1851. discovered the author, in 1842 in and published of a first Jacobi, Origen in the was
Duncker, Bunsen, and others were to suggest Hippolytus as spite attribution (Origen was proposed the end of Origen's because of references Jacobi's to him
margins of the manuscript; Migne still has the Elenchos at works 2 . ) suggest ion
•'•The work is referred to in the following ways r Elenchos, taken from the title of each of the ten Books (xoO uarà ixaawv alpêaEcov êAéyxou ßi'ßAoc... ) ; Refutatio omnium haeresium (abbreviated Ref., which is generally used ) ; Philosophumena, which strictly applies only to the first four Books (see Ref. IV. 51.14; IX.8.2. ) . The crit ica1 edi t ion is by P. Wendland, Hippolytus Werke III. Refutatio omnium haeresium. GCS 26, Leipzig 1916. English translat ion by J.H. Macmahon, ANE, vo1. 5, and bet ter, by F. Legge Philosophumena, 2 vols., London 1921. French translation (almost complete) with introduction and notes by A. Siouville, Hippolyte de Rome. Philosophumena ou refutation de toutes les hérésies, 2 vols., Paris 1928 . 2 German translation by K. Preysing in BKV . Rh. 1, Bd. 40, München 1922. - References are given here to Wendland 1 s edition. Our translation takes account of the abovementioned translations into modern languages.
2 PG 16 ter. The story of the attribution of the Elenchos is r e c o u n t e d by G. P i c k e r , Stud ien zur H ippolytf rage, Leipz ig 1893, where references are found. See also Wend land in GCS 26, p. xxiii, and Legge, Philosophoumena I, ff.5.8.
4 accepted Who was 1551 on by the
fc majority on of the
x Gno s t xc Polemics historians of of ancient found in in the and
Christian literature. represented the Via
But the riddle has other dimensions. the statue and a teacher since 19 59 placed
Tiburtina
Vatican Library? martyr. writers Was from
In general it is said to be the statue of as well as father of the author Photius? of the church the many the writings diverse
H ippolytus, antipope
Hippolytus Eusebius to
mentioned on the sides of the statue or reported by later Despite character of these writings, he is generally agreed to be their author. Since has been 1947 P. Nautin by only first to by has a few faced anew the of problems Christian of a
surrounding H ippolytus. accepted in the 1iterature writings as surely
He has come up with a thesis that historians On the centuries.^ Hippolytus basis of
comparative analysis of the language and thought of various attributed written (especially he the two Elenchos and the Fragment against Noëtus, taking the latter Hippolytus), distinguishes authors 4 representing two very different types of mind and
3 From his Hippolyte et Josipc. Contribution a l'histoire de la lit térature chr¥tTen n e u trois i feme siècle, Paris 1947, through many other studies üp tö his contribution to Aufstieg und Niedergang der römischen Welt (Hg. H. Tempor i ni und W. H a a s e ) , Berlin/New York (forthcoming), P. Nautin has not modified his thesis in any significant way ; rather he has substantiated it further. For a survey of the debate about Nautin's thesis, see R. Butterworth, Hippolytus of Rome: Contra Noëtum, Heyth rop Monographs 2, London 1977, pp. 21-33, where the debate is referred to in the context of a new look at the so-called Fragmentagainst Noëtus. - Incidentally, Butterworth's analysis of the structure of Contra Noëtum leads him to the conclus ion that it 'is no concluding fragment of an otherwise lost work ' . It ' stands well on its own ' {p. 117). Formally, the work 'appears to be an outstanding ... example of the Christian adaptat ion of profane diatribe for anti-heretical and teach ing purposes 1 {p. 141) . - See also M. Richard, 'Bibliographie de la controverse', PO 27, 19 54, pp. 271-272. 4 R.A . Lipsius , Die Quellen der ältesten Ketzergeschichte neu untersucht, Leipzig 1875, pp. 117 ff.
Elenchos between divided: the Roman elected 5 statue the whom the writings we know must be
4 5 respectively was his
Josipos and Hippolytus. clergy, who became and founded a sch ismat ic
Josipos was a member of community ; it is
antipope when Callistus
that was discovered of the
on the Via Tiburtina; he died of a De Universo, etc.
after 235 and was the pretentious and superficial author of Elenchos, Synagoge, H ippolytus lived in Palestine or in a nearby province and
wrote between 222 and 250; he was the author, traditional by character, of a Syntagma against all heresies, of which we possess the the final final (this part only, of the known of as the the is, Fragment as however, Elenchos a against being Noëtus identification Fragment
sect i on
Syntagma
ques t ioned I. prooem.
by many 1), of
scholars), wh i ch it
subsequent is but a
to the
( here Nautin departs from the common opinion based on Ref. rework i ng ; of
This
H ippolytus Nautin's
was
soon had
identified the merit
with of
an
homonymous some
Roman martyr. thesis introducing plansibi1ity into the problem of authorship^. But it was
and passim, also saw many difficulties in ident ify ing the author of the Elenchos w ith H ippolytus. See already his Zur Quellenkritik des Epiphanius, Wien 1865, pp. 70 and 26, n. 3, where such an ident if ication is said to be only 'wahrscheinlich' and the name 'Pseudorigenes ' is still preferred. However, for Harnack, Zur Quellenkritik des Gnosticismus, Leipzig 1873, pp. 170 ff, the identification of the author as being H ippolytus was ' zweifellos sicher'. F i n a l l y , A. H i l g e n f e l d , Die K e t z e r g e s c h i c h t e des Urchristentums, Le ipz ig 18 84, passim, quite wisely, we think, makes a consistant distinction between Hippolytus I and Hippoly tus II, the latter being the author of the Elenchos. The Elenchos attacks Callistus mostly for his tendency toward modal ism and his softening of church discipline, and only secondarily for presumed political ambitions (see Ref.IX. 11-12). 6 ' 1 ' h e controversy around Nautin ' s work has been at times violent. Many critiques, among them M. Richard's, were thought to have administered 'den schlagenden Beweis'
5
3 generally offered of rejected
0 without 'apories any
1
Anti-Gnostic Polemics new alternative to. being Recent
to solve the
he had pointed
authors have continued the Elenchos.
to regard H ippolytus as the author fresh studies of the problem are
Until
made we shall find ourselves unfortunately bound to do the same and shall refrain from placing a question mark by the name of H ippoly tus every time we refer to the author of the Elenchos » When Elenchos without the therefore, alone. We in the following at this pages, work we write 'Hippolytus* we mean the author of the Elenchos, and of the shall other look as a whole to the considering related the writings attributed in
Hippolytus. Elenchos.
Our intention is to discover the character of against heret ics proposed But before tackling this task, we must mention
arguments
another problem that will help us determine the context of
(W. Schneemelcher, 'Notizen', ZKG 68, 1957, pp. 394-395) against his thesis. Others could not see its usefulness ('eine weitere überflüssige Auseinandersetzung', pronounced K. Beyschlag, 'Kallist und Hippolyt', TZ 20, 1964, pp. 103124, here p. 105, Anm. 11 ) . In the aftermath of the controversy, M. Richard was ready to make but one concession: The Fragment against Noëtus cannot have the same author as the Elenchos ; but he added, in agreement with some other scholars : It is the Fragment which was not written by Hippolytus, the Elenchos was his work ( 'Hippolyte de Rome', DS VII, Paris 1968, cols. 531-571, 1 here 533). Richard concludes (ibid.): Le pseudo-Josipe doit donc être éliminé de l'histoire de 1 ' ancienne littérature chrétienne'. However, the problems obviously persist. The discussions of H ippoly tu s's historical and literary identity have not yet produced any substantial unanimity among scholars, as is shown by V» Loi, 'La problematics storica-letteraria su Ippolito di Roma", in Ricerche su Ippolito, Studia Ephemeridis 'Augustinianura' 13, Roma 197 7, pp. 9-16. In the same volume ('L'identité letteraria di Ippolito di Roma', pp. 67-88), Loi reviews anew the literary witnesses and distinguishes, in a way similar to Nautin"s, two groups of works by two distinct writers; he refuses to postulate a unique author who would have gone through a profound psychological and cultural evolution as some have recently done in order to explain away the discrepancies. - In favor of Richard's position, see recently K.M. Kübner, 'Die Hauptquelle des Epiphanius (Pan.haer. 65) über Paulus von Samosata', ZKG 90, 1979, p. 57.
Elenchos our study of anti-gnostic arguments: what was the
45 real
purpose of the Elenchos? The goal of the Elenchos is no more obvious than the identity of its author. ) t As the title of each of the ten xou icata iractov
7
Books of the Elenchos recalls ( eXÉYX01J ßißAos ••• already proposed Koschorke^—that to confute Callistus occurs
aipeaewv to by was But it was
Hippolytus
certainly more
intends recently
refute all the heresies that are known to him . by d'Ales and in 1906®--and secret group. his Hippolytus's
and main purpose The attack
Callistus
against
in Book IX ; the previous Books would only 'genealogy' of that persona 1 enemy
set out the context and degradation of truth. work can c laim £ or
who appears as the final product of a long history of the Th i s view of the main purpose of the itself the formal di spos i t ion mentions, of the 'culminates' in the heresy it was a it is the to retroject contemporary
presentation of heres ies, wh ich of Callistus. well-known controvers ies deviants Moreover, polemical techn ique
as Koschorke
into heres ies of to
the past. h im and
Finally,
clear that to find a place for an enemy amounts demystifying rests sure, on magic of his repu tat ion. Elenchos , indications. however, To be
in a catalogue of d issolving weak an
Such a view of the purpose of the rather internal important Callistus was
concern of the author of the Elenchos and for that reason
7 A . Hilgenfeld, Ketzergeschichte, p. 68 saw the enterprise of the Elenchos as similar to that of Luke : 'Hippolytus II steht unter den ältesten Häresiologen ähnlich da, wie Lukas unter den synoptischen Evangelisten. Nach so manchen Vorgängern hat er es aufs Neue unternommen, allem von vorn an nachzugehen und neue Forschungen oder Erfahrungen angebracht... '.
® A. d'Ales, La théologie 1906, pp. 78, 104 and 211.
de
saint
Hippolyte,
Paris
K • KosohoirJcG^ H x p^ojly t»s Kg t z or b ^ s j câ u n n i ^ ^f u n jPo J L j sn ixk gegen die Gnostiker : Ei ne Tendenzkritische Untersuchung seiner 'Refutatio omnium haeresium', Wiesbaden 1975, pp. 60-73. Callistus would be 'der Zielpunkt der Polemik der Refutatio'.
3 0 Anti-Gnostic Polemics he is placed in the chain of heretics three times (Ref.
IX.7.Iff. ; IX.3 ; IX.12). are contemporary. said such to aim at
But the overarch ing idea of Book
IX lies in the fact that all the heresies mentioned therein The unity of the Book is broken if it is unmasking Cal 1 istus : why then wou Id (Ref. It
Hippolytus, after dealing with Noetus and Callistus, make a long report on the Elchasaites and the Jews IX.13-30, that is, more than half of the Book}, which would be completely alien to his presumed goal: whom the Elenchos wishes to unmask. C. Andresenll, for his part, thinks that Hippolytus 's refutation would have been conceived challenge logos. to Christianity of orthodox represented doctrine', Xoyos wherever and a as an answer by Celsus's therefore ) about would he have goes to the Alëthës Callistus^®? appears that Callistus remains only one among many heretics
Indeed one reads in Ref. X.34.1., at the end of the presumably Such is a he the Divine. developed beyond where his the conclud ing word { ctXriöns apparent to Andresen, particularly and of the Elenchos ;
'exposition expressing the According Vorlage, pagan one
true doctrine
Hippolytus Irenaeus, propounds But on
'Geschichtspolemik'
especially theory of
puts forward a proof for the priority of Christianity over ph ilosophy sect to is To the double from has and depravation of truth Hippolytus shown^, while again see (from philosophers both as following Elenchos entirely Irenaeus, lacks to heretics, of these as N. Brox
another) . the
accounts Logos',
' Anti-Alëthës
Hippolytus
originality
A1though, inventing a seemingly artificial link, H ippolytus does say that Alcibiades (a disciple of Elchasai) took occasion (Ref. IX,13) from the existence of the school of Callistus and disseminated his knavish tricks in the whole world. H e . Andresen, Logos und Nomos. Die Polemik des Kelsos wider das Christentum, Berlin, 1955, pp. 38 7-392.
1 1 2 N . Brox, Kelsos und Hippolytos . Zur frühchristlichen Geschichtspolemik', VC 20, 1966, pp. 150158.
10
Elenchos
4
f
5 would rest on
•geschichtstheologische [n] Hintergrund * too thin a bas is.
Moreover, why wouId the Elenchos contai n
detailed descriptions and refutations of heresies which are u tterly beyond Celsus's scope and quite indifferent to him? intended a refutation of all heresies/ i n c1u d ing the
gnostic heresies which are our main concern here. The discussions of the Elenchos. are IX, and with missing) with all just mentioned give but a slight idea of the intricate problems connected wi th the interprétât ion Some of its features, however, are clear. Books I-IV (II and III mysteries, pagan Books is It is Without problems, Gnostics? with pagan seen as ph ilosophy, For instance, its formal structure: deal
and astrology as being the ancestors of heres ies; Books Vheresies Book the X 'plagiates ' of the preceding Another concerned. argument doctrines ; expounds the clear: summarizes
orthodox point that to
doctrine. we the are
point
Hippolytus thought he was refuting heresies. latter a solution above-mentioned types of
presuppos ing can we find heretics, In order the
in this work
a central
against the
especially
against
the many
to answer th is question we must first deal with refuting heresies that are encountered
three ways of
throughout the work.
Without
4
pronouncing
definitively
on
the
validity
of
its thesis^ , we find in Koschorke's study of the Elenchos
13
Brox, 'Kelsos', p. 157.
l^Koschorke, Ketzerbekämpfung, pp. 4-5, where the thesis is found: 'Hippolyts Quellenwert zur Kenntnis der von ihm dargestellten gnostischen Gruppen ist sehr viel niedriger, und seine absichtliche Umgestaltung vorgegebener Nachrichten sehr viel we itreichender, als weithin angenommen wird. Vor allem fällt Hippolyt aus als Zeuge über Erscheinungsbild und Artikulationsweise der christlich-gnostischen Häresien'. See also p. 94.
3 0
Anti-Gnostic Polemics
a very useful clarification of Hippolytus's ways of dealing with its heretics. au thor; Koschorke each of the distingu ishes in of to in the Elenchos thes is, on the of of the that but three types of refutation, which amount to three axioms of these, content Kos chorke ' s reports or definitely little biases what he Irenaeus these
heretics, dis torts the image of the Gnostics, and shows how Hippolytus we (contrary Clement methods of for Alexandria) had to do with them. analys is, refutation. way, they to argumentation reason, had already shall Although represent found be been in cons ider they the by th ree Starting from Koschorke's three reduced and to a unique
cou Id be Elenchos
different
aspects have, These
presented used
separately. Hippolytus's
methods
predecessors,
less systematically.
A. heret ics supposed
Hippolytus ' s main have bee n recurs
concern
is the
to
show
that
the is
plagiarizing
G r e e k s
15 •
Th is
affirmation
as a leitmotiv
in the Elenchos and
to disqualify all heresies as being un-Christian. [To show] the sources from which they drew their nothing they have attempts ; to the that holy the i r theor ies nor owe have scriptures, by holding in the their wisdom
The object ive of the Elenchos is s tated at the outset :
been concocted their source
fast to the theories of the in of the
tradition of any saint ; but Greeks, would-be By carry ing out
in the systems of philosophers, mysteries, and the vagaries (Ref. I, prooem. 8)
astrologers.
th is program, Hippolytus will unveil
heretics as godless (aQeous ).
!^We find a similar at temp t in Tertullian, De praescriptione haereticorum, 7 (ed. R. Refoule, CCL 1, pp. 192-193), and only incidentally in Irenaeus (e.g. Adv. haer. II.14.1-6).
Elenchos This objective determines th g p1â n of the
4 5 Elenchos:
first, to present the pagan doctrines ; then to present the heres ies as borrowings from them. It. seems, expose our are therefore, op i n ions that these advisable, advanced doctrines the f irst by are to the of the
philosophers of readers more to that
the Greeks, and to show to
greater antiquity than these [heresiesl, and august compa re the concerning each divinity ; with of the each then, show heresy champ ion
corresponding philosophical system, so as to earliest heresy availed himself of the theories [of a philosopher], appropriated these principles, and, impelled from these into worse, constructed his own doctrine (fioyyct ). (Ref. I, prooem. 8-9) The whole work It is expected
;
to the
show
the
heretics to
as link a
plagiatores (icAeiM Aoyoi X. 34. 2 ) . doctrine to philosophy the heres iarch with shown
Ref. I, prooem. 11 ; see IV. 51.14 ; in author ' s mind, This link throws by confronting IV. 46.1); he IX. 31.1 ). I and IV the Celts, to be in
suffices,
or to astrology.
ipso facto the greatest suspicion upon it : in his miserable nakedness (see Ref. ( see Ref.
those who first held his tenets he is
has borrowed h is doctri nes from the Gent iles and viciously presented them as being from God displays much the Greeks,
Hippolytus ( Books 11 of doctrines astrologers, in the eyes
learni ng the Such
in Books the
III are and of
no longer
extant ) in present ing Indians, a presentation,
magicians. the readers.
sure, cou Id have a strik ing effect and accred it its author Hippolytus 's knowledge pagan matters, however, seems to be entirely second-hand.
He relies for Book I on a biographical compendium and on a summary of Theophrastus ' s î>uaiKÛv Ao'^ai , and in Book IV
l^on H ippoly tu s's sources in Book s I-IV, see P. Wendland ' s introduct ion to GCS 26, pp. xvii-xxi, and also
3 0 Anti-Gnostic Polemics he is merely transcribing large sect i ons from Sextus
Expiricus, from a commentary on the Timaeus, etc. Even more problematic between heresy and than the af f irmation of a link is the way Hippolytus ph ilosophy
establishes a link between each heresy and a philosophical doctrine or a pagan practice in his attempt to show that it is not derived from scripture
âïïb TSOV YPA^WV h\Xa ICAI
(see e.g.
TOUTO CCTTô
Ref. V.7.8 :
TÖV UUCTT IKWV
ou<
) .
Most of the time this link seems artificial. if the Naassenes they do teach so is that said have VII, said could the to serpent of us very on element, V.9.13). teaching from following Thaïes have to
For instance, is the humid (Ref. Miletus less
Basilides (Plato would (Ref.
copied
Aristotle's uniikely ) his views
seemed
( Ref . VI1.24.1-2). Empedocles the
Marcion would have borrowed 29-31) , a to
disappoint ing the Greeks, from to a pagan in a more be better
s tatement since Marcion owes so little to Greek philosophy. Justin Gnostic on is depend his especially Hippolytus's Semi tic 'models'? positive he adds acquainted Herodotus, (Ref. the although Is than thought,
description, w i th It
just as well be long Hippolytus with their shows
milieu
V.23-28). he re t i cs he
is di ff icult Although
to answer
this quest ion
way. to
some times the views
acquaintance with pagan doctrines Irenaeus to concerning gnostic received with cons tant systems, does
than Irenaeus does, what Gnostics into must in his
the utmost caution since Hippolytus, force not hes itäte to distort them.
effort
philosophic
Hippolytus"s
Qr>
own
1
acknowledgment,
3
e.g.
Ref.
IV,8.1.
J.
B. to " A.D. 220, London 1977, p. "414, h a s ^ a rather pos i t ive view of H ippolytu s's use of his sources; he writes concerning the summary of Platonic doctrines in Ref. 1.19: "Hippolytus* evidence (or that of his source), brief and sketchy as it is, nevertheless reveals a number of interesting formulations of doctrine of which we have no othe r evidence, and helps to round out our idea of what constituted the basic course in Platonism (at least in respect of Physics and E thics ) in the second century A.D.1
Elenchos 63
4 5
This way of establishing a relationship between heresy and philosophy might cast doubt on the Elenchos as a source of information shows Thus on heretics. the way However, the procedure just with examined heretics. emerges: way of between in which the Elenchos deals
the f irst feature of th is method the he ret ies was to es tabl ish
clearly a link
the author of the Elenchos thought refuting them and the pagan the th inkers
that a good
in a f ami ly tree of wh ich an argument
decreasing truth. f i1iat ion creates
In the mind of Hippolytus, this suspect context within
against the heretics can be launched. B. Hippolytus's effort to show the heretics as to
plagiarizing from the Greeks is an essential element of his overall program: to uncover the heretical doctrines, s trip away the ve i1 hiding their wickedness, to bring them into the full light of the day. This program itself, and
Irenaeus 1.31.3).
had
already
stated
that
to them
expose
the
doctrines of the Gnostics was to refute played a subordinate
(Adv. haer. distinct and th is We
But in Adv. haer. the "exposition1 as refutation role; it was always clearly the Gnostics What counted for Irenaeus was the and theology.
from the argumentation. discussion was based witness
'sachliche Auseinandersetzung 1 with
on reason, Bible,
in the Elenchos almost a complete disappearance of Even the presentat ion of how in a 'wicked ' way, 1.1-21). in fades out.
this type of argumentation. be sure, they interpreted
the heret ies based their doctrines on scr iptures, wh ich, to This presentat ion formed exegetical obviously the Hippolytus (see Ref. discussion have defeated origin a lengthy preamble H ippolytus's of heresy to Irenaeus's But it would to reveal
(Adv.haer.
purpose: pagan
un-Christ ian
doctrines.
is unwilling VI.37.1;
to dignify heretical views, wh i ch
are only a juggli ng wi th ph ilosoph i ca1 or astrological bits VI.52.1-2).
3 In the verb
0 Hippolytus1 s means ad 'to exposition prove, to to of
Anti-Gnostic Polemics heretical to It views prove can the by a 'to
terms cXcyxui -ê'Xeyxos are given a specific meaning. disprove, reductio impossibile, refute ' .
Usually mean
expose' only the attack. prior. performed themselves discovering
in such a context, that is, to expose with a to make a complete inventory before In the Elenchos the meaning of 'exposition' is the did readers not as well the as real for the Gnostics of thei r
view to a refutation,
Such an exposition is thought to be an unmasking, for who it. know nature
doctrines and who should be amazed and even disgusted upon The exposition is actually the refutation and Hippolytus can dispense with proper argumentation. A good example of this role of the exposition is found in the conclusion writes, I consider to the notice on the Peratae stated where their H ippolytus after that having repeatedly
dependence upon astrology: I have clearly exposed the I have up Per a tic he resy ; by many always hi di ng to [arguments] and and
brought out in the light the heresy which is itself , mixes everything everything, is one that
advance any further accusation, the opinions propounded sufficient by for [the their heretics] own being condemn at ion.
1
(Ref. V.18 emphasis added) Four elements constitutive of Hippoly tus 1 s refutation' are present here; we shall look at them more closely. 1) precisely, Elenchos, part icular doctrines. elements absurd or To refute, for Hippoly tus, is to expose; more
it is to expose some tenets of a heresy and to In the of pagan most 1-10 ; th is usually elements by is that accompanied can be are e.g. by a select ion with the
point to its dependence upon non-Christian sources. linked
Often th i s amounts to a twisting; at least, the Hippolytus ones (see sometimes Ref. repulsive V.14,
presented
E I X 6ncho s 17 f i 13< • m m ) by which, disgusted. 2) To re f u te is to unveil a secret doctrine. it is expected, the readers will be
Hippolytus makes much of the secret character of heretical doctrines and his program consists in breaking that secret (see Ref.I.prooem.2-5; 8). doctrines writings actually that were were It is hard to imagine that the since the heretics public and possessed even to to secret
available
inimical readers like Hippolytus. secret which Hippolytus intends
But more important, the to reveal resides in the
(presumably unsuspected) dependence of heretical teachings upon pagan views, a dependence that has been revealed to us through a problematic procedure. is found we ought A typical divulging of VII.30.1 : to them 'When, such a pseudo-secret the Demiurge, ... in Ref . to say
therefore, Marcion or some one of his hounds barks against that Empedocles announced such (tenets).* To point to Empedocles should automatically refute Marcion, 3) major Th is of type The the of expos it ion of At obviates the end further is of a an
â u^umentci t ion. feature
avoidance Elenchos.
argumentation
exposition, precisely when the reader expects a discussion of what had just been presented, he is regularly confronted with the affirmation: r i ' v [ KOtvais fiXeyxÖoti i 3 £ j > " uywv 'I think we have sufficiently exposed doctrine...' (Ref. VII.31.8; see V.ll; V.18; V.28 etc.). in Ref. argumentation and a refutation. IX.31.102 the i r follows that from borrowed presented doctrines The concluding format : Gentiles and
1
voy { Çu . this VI.37.1? statement have then have
(refuted)
The expos it ion, in Hippolytus's eyes, amounts to an heretics
them as divine teachings, as Books V to IX are
thought to have demonstrated and refuted. 4) doctrines show how Finally, Marcion such by took an his expos i t ion their ideas from of heretical To Empedocles
constitutes
itself
denunciation.
3 0
Anti-Gnostic Polemics
(Ref.VII.30.1) is all that the refutation is about; nothing else is necessary, discredited and Marcion 's the doctrine is thereby in
u tterly
(see
similar
procedure
Ref. V.7.8; 9.13; VI.29.1; 27.1; etc.). Could the heret ies It recognize doubtful themselves in such an the exposition^ 7 ? Gnostics That
is very by to
that the image of the real
presented seems
Hippolytus have been
fits
Gnostics.
image
concocted
in Hippolytus ' s
s tudy as a commod ity for the polemist. we miss here something a t temp t at gaining that we found some insight
Put more important, in Irenaeus : into the a real gnos t i c A rare
'hypothes is' through
a direct encounter with them.
insight of th is kind is found in Ref. the Basilidians; Their of and the whole theory turns
VI1.27.11 concerning
around
the
mixing
universal
seed,
its
discrimination, into
the restoration of the mixed parts
their original place. (oXn yap auTÛSv n 0to9 eat s . . . auvKUai s . . . I^UXokp i vriai s . . . àvaKaïaaTacr i s . . . But th is sort of insight plays only a limited role )
in the
section on the Basilidians, and no role at all in the rest of the Elenchos. the
1
Hippolytus rule' or not a
does
not
seem
interested
in
perceiving To do so
' hypothes is' of gnostic necessarily of gnostic have a amounted ideas
thought. to into a a the
wou Id or to
have
distortion systematic necessary
reduction it
s c h e m e ^ ; rather hermeneut ical
wou Id of
fulfilled
condition
real
discussion
based on the understanding of gnostic thought-processes and vision of the world.
C.
The third way of refuting an adversary
according
to the Elenchos is to place him in the long chain of known
question with a clear no.
~
^
18j find quest ionable the way Kos c h o r k e , Ketzerbekämpfung, pp. 22-5 5, regularly and w i thou t qualification, equates 'régula' with 'system'.
Elenchos heretics the For Elenchos, H ippolytus ancestors (successio two and has are haereticorum) th is is
1
55 . is to Already present in in the way. when him:
£ irst
ways,
method applied thought
recurrent be
is sometimes Callistus
1
in an antonomous refu ted for forefathers
instance,
invented
heretical
Noëtus and ... Heraclitus (Ref.IX,7-12). always pagan astrologists. explicit and all
Ultimately, these mystagogues, effort to Such is the
philosophers,
Here again we find a systematic of the Elenchos good to (Ref. show of
deprive heresies of their Christian content. objective means are deemed in the the
I. prooem. 8-9) un-Christian pure by
nature of heresy: partial select ion assertions, dénigration, Koschorke). The this truth truth wasted that first of
artificial construction of genealogies, teachings tw is t i ng (mos t heretics, of studied information,
stereotyping, innuendos
techniques
heretics
are assumed
to be closer
to the In
truth than those nearer or contemporary to Hippolytus. theory the degradat ion of revelation. in Judaism,
truth, Christ ian truth Some but of this clearly and original more was The
(or what H ippolytus holds for such) is ident ical with the of the primeval was already the lost among
pagans—both
barbarian
Greek.
heretics borrowed from the pagans and so lost even more of
t r u t h ^ O ,
And so, from one heresy
to the other ; the
"^Th 6 me t t i od is âlrBâdy £o u r ic i in Jud6 's Epistle ^ according to F. Wisse, 'The Epistle of Jude in the History of Heresiology', in Essays on the Nag Hammadi Texts in Honour of Alexander Böhl ig (ed. M.Krause )~ Leiden 1972, pp. 133-143. It also found an important place in Irenaeus, Adv. haer. I, and, without doubt, in Justin's Syntagma as far as his argumentation can be reconstructed.
20 I n Ref. I.prooem. 8-9 this view is clearly stated ; it is said that Greek philosophers propounded more ancient and more august doctrines than those of the heretics (see also ¥11.36.2: ' It has been proved that those philosophers of Greece who have talked about the divine, have done it with much more reverence than these [heretics] 1 ) ; but they also contained falsehood and errors to which the heresiarchs further added.
3 phenomenon is
0 seen as a descending
Anti-Gnostic Polemics genealogy, in which in
truth kept being degraded and lost. The three ways of refutation were already present Irenaeus. they did not replace ways argumentation to th e i r to of and refutation, haer. heretics, have made But in Adv. haer. they had a subordinate role ; wh ich Hippolytus for from the the and
occupies four of the five Books of Adv. develops looks borrowings sources. those for pagan these for precursors icXe^iAoyoi indications the
utmost ; he would
systematically
philosophers,
their pagan
origins
The refutat ion stops with this denunciat ion wh ich
is typical of the polemics found in the Elenchos. 2. The Basic Disagreement with the Gnostics These polemical techniques give the Elenchos its form and reveal a specif ic understanding, is required goes to refute beyond an to on the part of the We in author, in this which and of what study the heretics. is explicit what
want now to raise the question which concerns us primarily and which the what what of H ippolytus's expos it ion. permeate expressed especially gnostic heresy, he Can we f ind in these techniques , idea be heresy, mos t is? Has the author perceived central and Has he stated what
Elenchos, takes
offensive in the gnostic
'hypothesis'?
is wrong with the philosophers plagiarized by heretics, and if so does he counter his opponents with an interpretation of Christianity Writing 21—and the Does haer.? Elcnchos that really takes account of the gnostic heresy? in the first half of the third century, us ing especially Adv. haer. I.1in scope, the author of with role find Irenaeus. to the important comparison a work similar a some Irenaeus's Adversus haereses, — producing Elenchos the cannot escape assign
Elenchos
instances which functioned as norms and authorities in Adv. We have already the broad said that we do not and theological in the exegetical discussion
which we encountered in Adv. haer.
We do not even find any
Elenchos
57
real information on how the heretics intended to base their views on scripture. any real connect ion we he denounce Elenchos, The author systematically denies them with the Bible in the of his effort of to the
them as being heathens. might were have also expected the author author the if Apostolic
Further,
Tradition, not only to see the tradition of the apostles as normative, but to make it so. makes it totally unnecessary. But the Elenchos betrays no An important element of such a view, and the thesis of the alien origin of heresy Irenaeus's argumentât ion against the Gnostics is therefore missing here 2 1 ; there have departed from i t >• is that heretics it is no explicit attempt to establish that th e G nos tics do no t pos s es s the apostolic trad i t ion or The âu thoi» of the Elenchos does not His only comment about trad it ion a pagan tradition; they turn when they Christianity know of such a criterion. clothe
represent
in a Christian garment,
into an extravagant philosophical game 2 2 . As
truth '
does
( TOV
Irenaeus,
xns CUN0ETA£
Hippolytus
icavova. . .
knows
REF.
of
a
'rule
wh ich
of
he
X.5.2),
wants
to
'demonstrate' is
in
his
last in
Book. its
However,
the
demons trat ion
disappointing
sketch iness
(Ref.
2 ^A poss ible, but indirect, express ion of that view of tradition might be found in Ref. VIII.18.2 where it is said that the Quartodecimans, 'on all other points [bes ide the date of Easter] , agree with all that has been transmitted to the church by the Apostles'. Elsewhere (Ref. I.prooem.6) the author of the Elenchos seems to claim some authority for himself on the bas is of his being a successor of the Apostles. K. Baus in Handbuch der Kirchengeschichte, I (Hg. H. Jedin ), Freiburg 1963, p. 283, states : 'Der S icherung der apostolischen Überlieferung i n der Lehre dienen die dogmatisch-antihäretischen Schriften Hippolyts 1 . Such a Statement not only pays too little attent ion to the problems of authorship, but obviously reads into the Elenchos concerns that are not
see) at his word when he saw the gnostic interpretation as an acute Hellenization of the Christian message. See h is Lehrbuch der Dogmengeschichte, I, Tubingen 1931, p. 250.
3 0 Anti-Gnostic Polemics X. 3 0 - 3 4 ) , the faith. a condition Moreover, the role of which rule in the the we may attribute appears to to have
H ippolytus's lack of interest in a positive vindication of itself played no significant the foregoing refutations. in
The rule seems to be an abstract entity in the Elenchos; it never has Adv. which is character 'rule of faith' found haer. — i.e. represented in to the and concrete, universal, apostolic, a rule faith at as the a 'Truth'
a real insight Elenchos to that
into the Christian 'hypothesis'. been is complete it used of
and was contrasted with the gnostic said have of beginning, criterion heresies. Should we then conclude that Irenaeus used to only to the
extent
measure
degree
non-value
recent
that the points of the truth of
reference his faith
establish
( scripture, tradition, rule of faith ) are all absent from the Elenchos? Irenaeus knew how to find These points, Some for and him, became formed the in all qnost ic systems the the core of the gnostic of does his not 'blasphemia creatoris' and the implicit many-sided dualism. 'hypothes is' attack. privileqed target
two generat ions
later, Hippolytus
show the same sensitivity. Basilidian system this the potentially creator, (Ref.
We have seen that he complains 'emanationist circle ' of the element. little On the But specific at one in
once about what could be the interesting has
VI1.27.11); but he never develops
issue of the demiurge and the accompanying denigration of Hippolytus to say. point (Ref. V.27. 5-6) he does not hide his indignation at left to the God-Creator of of the Old Testament and Elohim appears to be at his degradat ion remains on its howe ver, Peratae, a and
the role the
the system of Justin the Gnostic; center upsets the Hippolytus. in the This section (Ref.
the sect ion on Justin point, on the
undeveloped. demiurge on implications. report omission,
He does not comment on the central place of The blasphemia creatoris is absent from the VII. would 29-30 ) , surpris ing there, and its mention be expected
Marcion since
Elenchos because sometimes say that the of the affirmation of the identity with the
59 of God church, the no the
and the Creator seems important to Hippolytus. Montanists, things ; again ' in agreement they this also acknowledge Creator X.31.6)? Let us
Does he not
the Father to be the God of the universe, and all But ask acknowledge (Ref. what Christ' VI11.19 .2 ; see rece ives most in
gospel testif ies concerning significant development. Hippolytus
aff i rmat ion upsets him
what
gnostic teachings. to say about its
Of course, he is impressed by the alien he might have had something relevant with astrology, about its V.6.6; connection
character of heresy ; proximity ( see
to anthroposophy and theosophy (see Ref. But two statements
V.8.38 ) , and something on its link with Orphic Ref . V.20.4-5).
literature
in part icular,
because of their content and place in the work, might help answer our question. The first statement we should examine is found in Book X and probably doctrine find that (Ref. of conta ins the X,30-34) the and 'last word' of looking back the at Elenchos• his by long the S tart i ng from Hippolytus's final expos ition of the orthodox presentation 'many-headed was heresy' (Ref. V.11), we
Hippolytus
part icularly
struck
d ispersal and fragmentation of the divine which he thought he was encountering in all heretical doctrines. the heretics their the in own new, borrowed Whether posited the the the the philosophers summarized heres iarchs, beginning Ref. X. in (e.g. 12 ) or from whom Ref .
one or many principles at the beginning of the universe (as X.6-7), to itself, 9-19). their heresies 'olagiarizers', multiplicity is not the Pleroma by Ref » at (see tended the increase
f ire
for Simon,
simple :
scattering
recapitulation, Ref. X. heresiarchs together 'Others fancied many concocted existing or of
In addition to th is, some heresy (e.g. The combining X. from 29.1: all of the
introduced beings
someth ing
borrowing
heresies...') multiplication
philosophumena. doctrinal tenets,
multiplication and the be due to
in the heretical systems, might
ensuing
3
0
Anti-Gnostic Polemics but this is The Ref.
work of an underly ing emanationist principle ;
neither made explicit nor worked out in the Elenchos. to counter the heretical atomization of the divine. X.32.1 created contains this major, This and poss ibly truth was counter-statement; everything. it says: God
Elenchos stresses the point the author th inks most suited pass ionate, ignored by
is one, he was a lone, he
essential
the philosophies which the heresiarchs followed. The second statement to be considered is similar in
nature and is found in Book I (Ref. 1.26.3; it reappears in a summary in Ref. IV.43.2). All [these at It concludes the exposition of being the doctrines of the philosophers. philosophers] ..., the magnitude of astonished itself. creation,
thought it [=the magnitude] to be the Divine They gave preference to th is or to that portion of the universe, but failed to recognize the God of these and Demiurge. This same view is echoed in Ref. X.32.5, where we read : I consider that, for the moment, subtle ignoring these, of vi ews I have words the the the suff iciently the parts of exposed the points with while the ignored by
the Greeks who glorified creat ion disguised Creator. Taking occasion
from
heres iarchs
Greeks under similar expressions and framed ridiculous heresies. It is clearly stated here that the 'Greek vice' upon which heretics built their systems elements cosmos. Both Surely, statements, because of th e i r place in the Elenchos, frame the entire work and mark out its main line. it can be said of that they constitute but a slight central concern. was To us they indication in his Hippolytus's and the failure is the divinization of cosmic the author of the to recognize
express Hippolytus's dismay at that which he found lacking gnos tic by counterpart. Irenaeus of particularly incensed the dualistic ou tlook the gnostic heresy ;
Elenchos but Hippolytus
61 is most offended by its divinization of the it, and by the ensuing of God is the dispersal of in The doctrinal consequence amalgamation encountered is no V. 2 0 .1 ) the they
universe or of parts of of this heretical tenets view view (see
and fragmentation of the divine. d isparate The Ref •
heretical systems and described at length in the Elenchos. heret ical the concerning to atheism. that show divine For, in small the in God mistake ; it pagans, amounts heretics following
'are without
Kotxà
their thinking, in their character, and in their behavior'
(Ref. I. p r o o e m . 8 : àôeous. . . K a x à yvwyriv kou xpoirov
KClV KOtTCX epyov ) . Does H ippolytus movement Or does th is bear from it displacement witness second to to the in the the vis ion changes third from in Irenaeus the when to it of
gnostic
century,
was increasingly divided reflect of by to gnostic sources (e.g. gnostic gnostic
into a plurality of small groups? increased so-called closer some to knowledge to the of the due Seth ians), pagan in the shift
Hippolytus's
especially philosophy? tactics bears used witness
1 i tera ture teachers. have the
Perhaps
it was
change his
Or perhaps from he virtually Church a
Hippolytus's already real when still knew time
distance
gnostic his
opponents Irenaeus refutation engaged but to
who at
might a
disappeared. undertook was indeed a
himself
Gnostics ; Great
in the process of overcoming the gnostic movement ; him Gnostics This represented threat—and is no longer the case two generations doctrine ; they to might his eventually case and, the church:
personal one. later. be a serve threat the
The gnostic opponents of Hippolytus have ceased to to orthodox as author and his a pretext if But further accepts
Possibly, Callistus d Ales's
1
to disqualify suggestion
23
an enemy well within one the Gnostics
group, .
Koschorke 's or themselves
23 A . d 'Ales, Theologie, p. 78 states : ' .. . destine' à confondre l'Église catholique comme secte callistienne, [ 1 ' Elenchos ] semble avoir été' surtout, dans la pensée de
3 0
Anti-Gnostic Polemics This would deals with alien from
represent no concrete and immediate challenge, explain the the abstract and saying may way his in which derive from the the author on their Gnostics insistence
their doctrines in
character, But we
that also
they see,
pagan masters. change attitude present being encountered in the Elenchos that there is an evolution in
the style of Christian polemics. does not pretend to attack He
In retrojecting
controvers ies into heresies of the past, the heres iologist those past expresses a heres ies as new thems elves heresies aspect. continue present. concept ion :
are already to
anci ent, but a
they have a cumulative 'classical' and in in a succession, tradition Such possibility
Heresies of the past have become represent per man ent
interpretation; and, if seen as elements they also constitute a view already more clarity the between
a tradit ion, the heretical
which, as such, is alive in the particular heretics. from Epiphanius ' s and the work. Panarion The might
imp 1 icit in Justin, will emerge with sti 11 connection be only
Elenchos
i ndi rect.
But both share the view of a heret ical tradition
to Christian polemics.
1'auteur, une mach ine de guerre, savamment adaptée à ce but secret.'
III.
EPIPHANIUS'S PANARION
The fact that the Elenchos very soon circulated under the use name it of Origen might explain why later polemists— o250 ) and c.380In particularly Epiphanius (310/320-402)—were not inclined to when fighting against adversus heretics. haereses, While before Tertullian Fi , lastrius of 3 90) order show to ignores Brescia (Diversarum of of hereseon liber, some find we a knowledge if one real wait use the the Elenchos, Elenchos Theodore t c.453) who, Epiphanius by of later Cyrus (Libellus
it totally,
is to follow Hilgenfeld.1 until
polemists2, (Haereticarum
must
fabularum
compendium,
however,
ignores Epiphanius 3 ; for Theodoret was too liberal to join Epiphanius in collusion against Origen. reference to Irenaeus. has from They
4
All the polemists in general took to his
from the middle of the third century, though, share their also refer Hippolytus 's Syntagma. Lipsius documentation
s h o w n
how
Epiphanius heres iological
sources
1A. Hilgenfeld, Die Ketzergeschichte Urchristentums, Leipzig 1888, p. 73.
des
2 The many references to Ref. given by K. Holl in the GCS edition of Epiphanius (see note 8 below), it should be recalled, do not intend to indicate Epiphanius's sources, but parallels. That Epiphanius parallels the Elenchos is no indicat ion that he is quot ing it ; he might as well be following the same sources as the Elenchos. In that qualified sense we say that there is no 'real' use of the Elenchos by Epiphanius. 3 On these relationships, Ketzergeschichte, pp. 7 3-83, and Introduction above.
see our
Hilgenfeld, chart in the
4 R.A. Lipsius, Zur Quellenqeschichte des Epiphanios, Wien 18 65, p. 37. Lipsius's work Is a study of Pan.haer. 13 to 57.
3 (Hippolytus ' s sometimes preserving, orig inal for
0 Syntagma, instance, haer. 31 Irenaeus, them and word 34, large
Anti-Gnostic Polemics indi rectly for word of of Just in^), and thus Irenaeus 's
transcribing (Pan.
sections etc.), so on.
Hippolytus ' s
Syntagma (e.g. Pan, haer. 5 7 ) a n d
Epiphanius drew
from heretical sources as well, and he sometimes expressly mentions his own reading, investigation or experience; we might have not like the caustic to reject we have Samari tan so on tone of his narrative, but we the thank information for on he is to first-hand gnostic, no reason a priori him and
providing ; information
Jewi sh
sects,
Jewish-Christian, Montanist, Marcionite, Manichaean, Arian groups 7 , and important sections of their own literatures. The Panarion® represents an intensive piece of work if
^There is, however, no explicit reference in Panarion to Justin*s Syntagma. See Hilgenfeld, Ketzergeschiehte, p. 73. % e e further on this point P. Nautin, 'Saint Épiphane de Salamine' in DHGE XV, Paris 1963, cols. 617-66631, esp. 627. ?See Hilgenfeld, Ketzergeschichte, pp. 80-82. ^Panarion ( ' E T Tic f > a v i ou ÉITICTICOTTOU Kara ai peaeuv fÔYÔonKOVTa ] TO e7TiicXn9ev iravapiov E ixouv KI B UTIOV) éd. K. Holl, GCS 25, 31 and 37, Leipzig 1915, 1922 and 19 3 3. Also ^G 41 and 4 2. It is re f erred to as Pan.epistola and Pan.prooem. for the f irst sections ; Pan.haer. for the sections on heresies; Pan.christ• for the sect ion on Christianity ; Pan. de f ide for the concluding exposition of the orthodox faith. Only very short passages have been translated into modern languages (e.g. by J. Horrmann in BKV 2 , Rh. 1, Bd. 38, München 1919, which contains Pan.prooem., Pan.christ., the recapitulations, probably not written by Epiphanius himself, and Pan.de fide 13.2-18.6 . G. A. Koch, ' A Critical Investigation of Epiphanius' Knowledge of the Ebionites: A Translation and Critical Discuss ion of Panarion 30. Dissertation, University of Pennsylvania, 1976). Very little» has been published on Epiphanius1 s heresiology; see P. Fraenkel, 'Histoire sainte et hérésie chez saint Épiphane de Salamine d'après le tome I du Panarion' (=Pan.haer. 1-20), RThPh 12, 196 2, pp. 175-191, esp. 176. A comprehensive study of Epiphanius's heresiology is expected to be offered by Mme A. Pourkier, maître-assistant at the University of
Epiphanius we consider that it that
65 it was written between 374 and 377^ and 1,361 pages in Holl's edition. But
covers
Epiphanius had thought of the plan of the Panarion for some t imelO: the one Songs to describe and refute the eigh ty heresies facing truth like the eigh ty concubines of the Song of bride,
6:8-9, who surround and celebrate the unique no part with her. 8 0.10 and
but have is
The image of the concubines, (one exception in th e the Panarion 11), is developed of
while absent from the sections on heresies Pan. haer • introductory and concluding sections
(Pan.prooem., Pan.de fide); here the multiplicity of these ambiguous figures is contrasted with the one 'perfect dove' who and represents is called 'our holy mother and the church, (Pan.de its fide holy 21,1 doctrine, the one holy faith in truth' (Pan.de fide, 2,8), innocent simple •guileless') as opposed to the intricate forms of heresy. The image of the concubines recedes in the sections on heres ies, esp. in Pan.haer. 21 to 80, where it is replaced by that of serpents and reptiles to qualify the various is as heresies-'--'-. as much 1ikened their As a matter of fact, Epiphanius seems to know as about heresies ; each The image of heresy
about serpents
to one species of serpent and these are called by names 1 2 . the serpent
'scientific1
Dijon. - The translations given here are mine. A edit ion and translation of Panarion is being prepared SC under the direction of P~ Nautin.
new for
9p. Nautin, 'S. Épiphane ', col. 626, assigns the dates 374-376. Photius, Bibliotheca, cod. 122 (PG 103, col. 4 04), remarks that Epiphanius's work is more comprehensive t h a n all those written ti 11 then against heretics.
10 Ancoratus 12-13 (ed. K. Holl, GCS 25) , which already enumerates the eighty heres ies wi th which Pan.haer. is to deal.
l^For an (unconvincing) at tempt to explain the transition from one image to the other, see C. Riggi, ' II termine 'hairesis' nell' accezione di Epifanio di Salamina (Panarion t. I; De Fide)', Salesianum 29, 1967, pp. 3-27, esp. 16-17. 12As his techn ica1 source on serpents, Epiphanius i ndicates a certain Nikandros of Colophon who wrote on
3 0
Anti-Gnostic Polemics
the symbol of a being in contact with the devil might have been suggested to Epiphanius by Genesis 3,
3
where and
the
serpent sect of
is related the and
to the or ig in of gnosis^ (see the Pan.haer • 37 ) who serpent-devi 1
is the the of
spokesman of the devil; or by Luke 10:19; or by the gnostic Oph ites saw in revered origin s erpent the
knowledge; or by his heresiological sources.
At any event,
Epiphanius saw the tide of serpent-heres ies as originating in Mesopotamia and, through Egypt, reaching Greece and the whole Med iterranean has world. Obviously character, the analogy while of the serpents a discredit ing it provides
the various sections with a unifying theme. It is with a view to this second image that Epiphanius gave which h is was work the t i tie with of Panarion. aga i ns t In the common bite. usage, a 'panarion' des ignated a box used by an apothecary, filled remed ies snake
serpents and reptiles, while others wrote on the properties of roots and herbs to cure their bites : Pan.prooem, II.3.1-5. He also refers to the works of the 'physiologists' ( oi «uoioXoyoi ) (Pan.haer. 64.72.6). He K.M. Grant ('Eusebius and Gnostic Origins', Melanges Simon. Paganisme, juda'isme, christianisme, Paris 1978, pp. 19 5-20 5) has drawn attention not only to earlier authors who att ributed heres ies to the devil, but also to the rather rare comparison between heres ies and snakes (pp. 19 6-197) made before the Panarion• How Epiphanius took his informat ion on serpents, reptiles, and antidotes from some form of 'Fachliteratur', is shown by J. Dummer, 'Ein naturwissenschaftliches Handbuch als Quelle für Epiphanius von Cons tantia ', Klio. Beiträge zur alten Geschichte 55, 1973, pp. 289-299, here p. 293. He suggests further that Epiphanius found his information already collected in a single scientific work, 'ein zoologisch-pharmazeutisches Handbuch' (p. 296). But the author of such a hand-book is said not to be Nikandros of Colophon, for Epiphanius says much more on serpents than Nikandros ' s ©hp i c t K O c . The au thor of Epiphanius ' s immediate sou rce would then be unknown ; he would have wri tten a compendium based on Nikandros and other physiologists. l^See C. Riggi, 'La figura di Epifanio nel IV secolo', Studia Patristica VIII, TU 93, Berlin 1966, pp. 86-107, here p. 104-105.
Epiphanius
67
Epiphanius 1 s Panarion is thought to contai n the medications for all illnesses aids' threatening each are the of true faith. fide These found in 1-25 of a and 'medicinal accompany the sections in Pan.de in the
Pan. haer. 21 to 80 and which, returning of the commentary again, on venerable unicity is
summarized of Christ with
to the first image, sketches the features spouse form columba mea, perf ecta mea ' ; in it
'una est
contrasted
multiplicity,
polemical rejoinders season the exposition of the faith. The Panarion 1-20): with In this to opens which with a first group of or heresies protoin Col the to with stand to
(Pan.haer. heresies, continuity. Scyth ism, 3:11, some only
pre-Christian Christian group Judaism, the named
heresies, heresies four in f irst 8.3.3)
(Barbarism,
Hellenism,
reference
according alien
Pan.haer. thought
represent given birth
primordial religious
condit ions of mankind-^ and des ignate to have or, They are 1.5.2.?
influences
Christian heresies, especially gnostic heresies. s ome t i m e s called heresies Samaritanism, see Pan•haer. heresies mother-heresies (Pan.prooem. 1.3.2?
together
80. 10.4)15? the rest of th is sect ion reviews The second group of composed of gnostic and arranged 21-56) is ch iefly
Hellenic, Samaritan, and Jewish sects. (Pan.haer. sects, presented
more or less chronologically The last group Christians
in some kind of f i1iat ion. divisions among the orthodox
(Pan.haer. 57Two
8 0) presents more recent heres ies, some of wh ich represent themselves.
^Scc
E.
Moutsoulas^
'Der
Begriff
'Härene'
bei
Berlin 196 6, pp. 362-371. 1 5 These designations, as will be seen below, raise difficult problems as to Epiphanius's specific concept of heresy. One point is clear, however: while the Elenchos deals with philosophical schools only to the extent that they form the background necessary to the understanding of Christian heresies which depend on them, the Panarion clearly starts with pre-Christian groups, called, and treated as, heresies. Epiphanius 's concept of heresy encompasses pre-Christian philosophical schools as well as Christian groups.
3 0 are emphasized: groups, while a Origen and spanning negative
Anti-Gnostic Polemics the Arians. the course These of of heretical history, s a l v a t ion
geographically cover the whole oikoumene. constitutes history
Their succession
(Unheilsgeschichte), a counterpoint to the Heilsgeschichte; it is not without an eschatological overtone, suggested by the of fact that th e In number of e igh ty both heresies, histories long are predicted, has now been completed, and we stand at the end history. Pan.haer. 1-20 characterized by the symbols of Jerusalem and Babylon, but th is designation is not expressly carried through. general view of history seems to the have been the had Such a basic to be
presupposition of Epiphanius and to have provided him with a general pressed. Each of the eighty heresiesl6 ( arrived at, sometimes, rather 8 0, artificially presented by compressing according to many a heresies reçu rre nt or subd ividing some), especially is those found in Pan.haer. 21 to scheme framework into which information
(illustrated in the Appendix below) which generally goes as follows.
l^We
will
see
la ter
in what
sense
the
first
twenty
To be su re, Epiphanius is fond of numbers, but his computations are not without confusion. Thus in Pan.haer. 80,10.4, wish ing to be more precise, he says the Panarion is about seventy-five heresies, of which there are five mothers ; he mentions, however, only four (Hellenism, Judaism, Samaritanism, Christianity) from which individual heresies developed, and it is curious to include Christianity at th is point. But we have to look at the preceding passage where Epiphanius, more correctly, lists Barbarism, Hellenism, Scythism, Judaism, Samaritanism. On the problematic number of 80 heresies, see S. Le Nain de Tillemont, Mémoires pour servir à 1'histoire ecclésiastique des six premiers siècles X, Paris 1705, p7 507 : 'Le P. Petau remarque qu'il {Épiphane) fait une faute dans cette supputation, en ce qu'il compte comme des espèces particulières de sectes les p a y e n s , les Samaritains, et les Juifs, qu 1 il met en même temps comme des genres qui en comprennent plusieurs ; et sans cela il ne trouverait pas son nombre de 80 hérésies'.
Epiphanius 1) back Introduction of the heresy by name. heresiarch, of the the author heresy, these heresy
69 When it goes
to a known 2)
asks who he was, its doctrine are and
where he came from, where he was active, what he taught. Expos ition First practices. 3) 4) refuted abusing invectives : this The and tenets lies, or to is be fictions, distortions.... Refutation ; by truth. the sane the of a heretics refutes contains by see way of the itself refutation is ending will apostrophes dilemmas, statement : the
questions with
thought
embarrass ing ; expressions "Whoever 5) 6) As the has
reasoning judgment
indignation,
that. . . " and
corresponding article of the orthodox faith. Further invective and analogy with one spec ies of Transition is clear to the next heresy with imploration unlike between true for serpents injecting the venom of heresy. for divine help. from th is outline, a cons istent the Panarion, distinct ion Elenchos, maintains
the expos it ion and the refutation, although the exposition itself is slr6d dy biased. Th is is part icularly the gnostic heresies to wh ich we give special attention in the following pages.
1.
Epiphanius ' s Objective and Method Why did Epiphanius bother to establish a catalogue of
eighty the see
interrelated
heresies
originating
far back
in the 39.1.1.;
pre-Christian era, many of which had long disappeared from scene, also as he himself and 4) ? knows threat. confesses For that as not (Pan.haer. well all as of that the 20.3.1 author of be Is the
Elenchos represent Epiphanius Elenchos?
Epiphanius an actual then
these
sects
Both know the
it would of
pointless to â 11 cick past heresies for their own sake. merely paralleling procedure
3 0 Anti-Gnostic Polemics Epiphanius them there and more presents recent and refutes He heresies is that no as that
longer exist because, first of all, he sees a link between heresies. therefore interested as the author of the Elenchos in showing
is a successio haereticorum; the cumulative it would not have if isolated. The
process
of heres ies following density
upon each other gives each heresy a connection
between heresies might be at times loose, but it is firmly stated by Epiphanius: heresies, originating sections be tween (Pan.haer.9.1.1), between pre-Christian and Christian Hellenic and Christian heres ies beyond, heresies themselves Epiphanius Christian
between
from the Samaritans and Simon. to Bardesanes and
Throughout the
from S imon
stresses the genealogy of error : from Nicholas to the Barborites Valentinians (without certain could and Archontics how, arose ou t but as where to with the one showing Tat ian not f ind 58.1.1: (e.g. with
from S imon to Satornilus, and the Ophites, from the Cerdon a and Marcion; of came these from we then 'a he men1 (e.g. forced are which strong convict ion) that
successor rarely heretic 46.1.8;
(Pan.haer. Pan.haer. filiation confronted
46.1.1).
Epiphanius Vales).
complains cases of 55.1.1)
Despite many
Pan.haer.
a global
genealogy
of heres ies, of
the cumulative character clearly emerges. and those who came after, each heresy 'inanities ' (Pan.haer.
Christian heresy upon the thus
started with Simon, grew with Satornilus (Pan.haer. 23.2.1) following preceding building 37.1.1; 38.2.3 ),
up not only a mere success ion of heres ies, but a
real traditio haereticorum. Such
1
an
interest
in the history
1
of
heresy
not only the It
bears witness to the fact that Ketzergeschichte' , from the as second development
Ketzerpolemik1 third
has become
Hilgenfeld to the
formulated century^ 7 .
also shows that
the very
idea of a tradition of heretics
has become a polemical weapon.
l^Hilgenfeld, Ketzergeschichte, p. 2 and passim.
Epiphanius The origin of this development can be seen
71 in
Irenaeus 's source for the section of his work dealing with heretics from Simon to Tatian (Adv.haer. I.23-28, to which 1.11-12 should be added i8 ). the Elenchos where the the obscures the polemics between Bereits das The trend becomes manifest in in the history Elenchos sometimes and the Harnack of even saw here the interest author
itself.
difference
previous heresiologists. in dem Werke des Hippolyt überragt ganzen Während bekämpfen liegt eine und der des in the geschichtliche Irenaeus Interesse und an der
Bewegung bei Weitem das polemische. Justin, es Tertullian am
und nur darstellen, um zu bekämpfen, Hippolyt, weit mehr Herzen, zu geben sachlich beleuchtete, genetisch
erklärte,
vollständige vor Allem der
Ketzerliste Widerlegung in eine
wahrend die Bestreitungen der früheren Väter irgendeiner dienen, Bestreitung apparent gnostischen Hippolyt's For Harnack this Hauptrichtungen Werk lauft
Noetus und Callistus aus 1 difference, clearly Elenchos, was a sign that Gnosticism for the church. More than a century was after even the Elenchos, th is in 'h istorical' tendency more clearly already evident in the first decades
of the third century had ceased to be a disruptive factor
Epiphanius's work, with The tradition of heresy
the difference now forms
mentioned, to the One
that the Panarion gives more room to the refutation itself. a counterpart history of salvation since the beginning of mankind.
function of this history of heretics in the Panarion, as in
18gee f # Wisse, 'The Nag-Hammadi Heresiologists*, VC 25, 1971, p. 213. l^A. Harnack, Zur Quellenkritik Gnosticismus, Leipzig 1873, p. 82.
Library
and
the
der
Geschichte
des
3 0 Anti-Gnostic Polemics the Elenchos, has been a to provide of bad Epiphanius's companions, personal thereby Moreover, with its
enemies (Origen, Cyril of Jerusalem, Rufinus, even Bas il of Cesaraea 2 ^) discrediting through the with cohort them in the eyes of the orthodox. image of the eighty concubines
by interpreting
the whole tradition of the eighty heres ies
eschatological resonance, Epiphanius is stress ing how alien the heretical tradition is to the faith of the church and, for that reason, how firmly it must be opposed. emergence implies condemned serpents the devil. While such appears to have been Epiphanius's heresies readers things
1
Wh i1e the if
of
the
eighty
heresies same
had
to
be
expected had
scripture was to be fulfilled Epiphanius), them all. adds : all the
(now we have seen them all, scripture already by
To th is condemnation the analogy wi th those heres ies have been inspired
implicit these his these It is hate still
intention, he frequently states his goal in studying these throughout
1
the for
Panarion. it, shaming
He
enumerates and to who give do
'abominations a
both
to overth row
heresy
distaste to make
those
(Pan.haer.26.14.5; see Pan.prooem. intelligent To people those
1.2.3). conce i ve who might
in
order
(
cme'xöe\av
) for the heretics and abominate their wicked
activity'
(Pan.haer.25.3.3).
entertain doubts as to his intent ions in describing at such length reprehensible acts and ideas, he says: 'Although I am truly ashamed to speak of their disgusting practices... still I am not ashamed to say what they are not ashamed to do, { with the intention, by all means, of causing horror Such away is the that < J > p i Ç iv ) in those who hear of the obscenities they dare (Pan.haer.26,4, ; see effect of scandalous the 2 6,3,9; etc.). to frighten Epiphanius's the long catalogue of peculiar for all
to perform' thoughts readers, departs and from
the anticipated
practices : truth.
to horrify
th em, to
cause disgust
catholic
efforts
20
S e e P. Nautin, 'S. Épiphane', col. 627.
Epiphanius would have This been in vain if they do not produce method. look
73 this The
Abschreckung^!. objective- determines the author play. invect ive, Epiphanius's
Epiphanius, model of persiflage, regularly 24.1.6; opinions obscene.
of the Elenchos might Epiphanius abusive is a past language,
like a in are Their conduct
fair called are
master
Heretics
foolish, their
insane, wretched talk babbling,
(see Pan,haer. their
24.2.1; 28.1.1; 44.3.1; 46.2.2 et passim.) silly, '0 foolish and vain fables I
For nobody who has
one ounce of judgment, would dare invent such th ings about man nor about god. been more 2} . Indeed even Homer appears to me to have (Pan.haer. 33.2.1-2; see 42.15.1heretical (e.g. intelligent1
Epiphanius has no equal in the h is tory of heres iology insulting. Pan.haer. with His descriptions of 47.1.6), gir1 :
for the art of out of virtue: had
sects give much room to slander (e.g. Encratites are not so ins inuat ions Pan.haer. women: Marcion corrupted travel a young 42.1.4; Pan.haer. 64.2.1 ff., even
Encratites who
disreputable [ airiare and one
47,3.1), calumny is called Christian: that
(e.g. of Origen: 64.66.1 he upon
Pan.haer. 5). devoted
unbeliever Pan.haer.
J in the sense of unEpiphanius to an in a section onanist
plays on ambiguities : immediately
introduces Origen
follows
2iThat Epiphanius aims at Abschreckung (deterrence) was assumed by Hilgenfeld, Ketzergeschichte, p. 2 : 'Da nun die bereits mehr oder weniger veralteten Häres ien in den Ketzerbestreitungen mindestens zur Abschreckung fortgeführt wurden, musste die Ketzerpolemik mehr und mehr zu einer Art Ketzergeschichte werden'. J. Dummer, 'Die Angaben über die gnos t ische Literatur be i Epiphanius, Pan.haer. 2 6', Koptologische Studien in der DDR, Halle 1965, pp. 191-219, writing on Epiphanius's gnostic sources, remarks (p. 209): 'Wir erfahren zwar eine Reihe von Titeln, aber sehr wenig Über den Inhalt der Schriften. Was Epiphanius weitaus mehr an Herzen liegt, ist die Schilderung der kultischen Veranstaltungen und Veranschaulichung der Gedankengänge, die diesen zu Grunde lagen - be ides zum Zwecke der Abschreckung'. Epiphanius's intent ion of causing horror is obviously not limited to Pan.haer. 26.
3 0 group (Pan.haer. 63) and nothing Beyond (e.g. on on
Anti-Gnostic Polemics is done to dissipate the
ambigu ity. of irony
all the use he makes of Irenaeus, what 32.6.7; scripture, and 24.8.1). he Where he
Epiphanius appreciates most in his work, is Irenaeus's use Pan.haer. use of their immoral reports gnostic insists almost
exclusively
erotic
interpretations. (Epiphanius Pan.haer. 26.3.4For how could the by such scabrous
The lengthy descriptions of scandalous behavior claims that he does not delight in them: are believed innocent heretics? to form a sure argument. fail to be disgusted
6) are thought to constitute an uncovering of evil and thus readers
To be su re, it may be difficult to remain serene abou t Epiphanius ' s means and method. formulated of his harsh judgments unfairness has been punished heres iology. There Indignant historians and sty le ; a have his on his person is,
by a lack of attentive study however, pert inent Nautin, which is worth pas juges de sa Il ses force
portrait of Epiphanius drawn by P. Nous ne nous rendrons
quoting at this point for its well-balanced character. sainteté. a va i t Du moins était-il un ascète. la psych olog i e, ardente, avec la
en ava i t le phys ique impress ionnant....Il en aussi la quai i tés, conviction
jugements
sommaires la
et
définitifs, a s'aveugler
les sur
p a rt i s pris,
facilité
soi et sur les autres, au point de mettre au compte de l'amour de la vérité ce qui était pour une grande part du ressentiment, et de se tromper entre un Théophile et un Jean Chrysostome 22 .
22
P . Nautin, 'S. Épiphane', cols. 625-626.
Epiphanius 2. The Gnostic Heresies Epiphanius describes Pan.haer* 21 to 56. and refutes gnostic heresies
75
in
How does he conceive of them? of heresy since
It is
not poss ible to answer th is question without first looking at his general the Panarion heresies. After the middle of the second century, the concept of 'heresy ' underwent saw heresy Ebionites Cerynth, a process of increasing complexity 23 . Irenaeus the Ebionites, Hippoly tus's For Justin heresy was almost exclusively gnostic. as primarily (i.e. Tatian 2 ^) Jewish-Christian among the deviants : concept 'heresy' f inds in the gnostic an application that goes beyond
gnostic, but he also counted heretics.
Syntagma had, along with the Gnos tics, the patripassiani, and along with the the Ebionites, two groups of Montanists 2 ^. on come of the 'zu other einem and hand the Elchasaites; gave birth to Abschluss'. complexity As is rather he calls The Elenchos added on the one hand groups like the Docetes, Callistians; divisions and we development the f inally A a new within the church heresy of itself an
heresies,
gewissen
added
starts with the emergence of Manichaeism and Arianism. result, deals concept heresy in Epiphanius groups which
broad, if not diffuse, above all when one considers that he also with pre-Christian
As has been mentioned, of pre-Christian 'errors' rise to the problematic
it is precisely among the
the inclus ion that gives concept
heresies
character of Epiphanius's
23 Hilgenfeld, to this process.
Ketzergeschichte, passim draws
attention
24see Hilgenfeld, Ketzergeschichte, p. 342; see p. 162. 25 S
2
ee Hilgenfeld, Ketzergeschichte, p. 163. ®Hilgenfeld, Ketzergeschichte, p. 453.
3 of heresy. explained Hellenismus Häresien] contrasting there This are is
0 Following exactly und bezeichnet auch
Anti-Gnostic Polemics upon P. Fraenkel 27 , 'warum werden, als E. Moutsoulas 2 ® Scyth ismu s , solche [i.e. Then, that has of no one. "historische with
states the problem in the following way: Judaismus manchmal manchmal of "religiöse concept
Nobody has so far als als
Barbarismus,
Perioden"...oder
Zustände"'. heresy
Epiphanius's cases in but for
Irenaeus and of the author of the Elenchos, he argues that Epiphanius rather the a first where four 'heresy' negative meaning, the case 'neutral-objective' religious
stages of
mankind ; when these are called of
1
"heresies', it is not in the
sense of 'Irrlehre' of a particular group, but in the sense Entfremdung von der Wahrheit' 29 of Christianity without The hypothesis has been of a neutral by sense C. of 'heresy' for in connection with any group or school. Epiphanius only means makes which, wide challenged Riggi 3 0 whom
'heresy' is always negative in the Panarion, even where it 'Entfremdung von der Wahrheit'. 31 use of to the images the the eighty eighty heresies, concubines His argument 'serpents' have a remain 'concubines ' and
applied
always always
negat ive
meaning :
27
P . Fraenkel, 'Histoire sainte'.
28e. Moutsoulas, 'Der Begriff "Häresie"', p. 362.
29
E . Moutsoulas, "Der Begriff "Häresie"', p. 368.
30 C . Riggi, ' Il termine ", pp. 3-29. His thesis is found on p. 5: 'L'accezione è in Epifanio sempre negativa, sia come deviaz ione dalla condotta cristiana che come deviazione dalla retta dottrina, sia come errore dottrinale implicito (ne 1lo sei sma) che come errore dogmat i co esplicito (nell' eresia comunemente intesa), sia come male d ilagante per i 1 mondo in maniera confusa che come organizzazione diabolica di gruppo'. 31c. Riggi, 'Il termine', p. 5, n. 5: "Or questa non ci sembra più accezione neutra!". On the contrary, Riggi emphasizes that (according to Epiphanius) each heresy is a monstrous and venemous product conceived through a contact with the devil (pp. 6-7).
Epiphanius alien to the spouse, linked the eighty serpents are
77 always
monstrous and reference analogy
wi th the devil. with
Even th is point is
not without difficulty ; to serpents of the serpents
the exception of a general to heres ies from individually
in Pan.haer. 13.2.2 and 20.3.3, the is only appl ied It is not applied
Simon on (Pan.haer. 21 on).
to the first twenty heresies, much less to the first four. Thus, while the analogy of the concubines might generally apply to the eighty heresies, that of the serpents does not and is reserved seems for to the use heresies the term 21 to 80. in Therefore a double Riggi a Epiphanius sense. For this when he negative narrow such a reason we are more willing with in the in says point range of 'heresy1 sense 3 2 . when mark he the the to follow of a distinguishes, meaning generally between 8.9.1; hints at 9.1; 'heresy '
Panarion,
and a broad distinction stages)
Epiphanius himself ( Pan, haer. where his
see 2,3) that the Samaritans religious begins
(heirs of the four previous exposition
to deal with heresy proper, since they are at the If we consider broad or the that is
origin of all heresies based on scripture. poss ible any to say: for Epiphanius, of is Thus law wherever and is the life. revealed natural fide it
this statement, in spite of some remaining confus ion, it is 'heresy ' in the encountered, it is transmitted in its from sense means any fragmentation departure, truth truth with primeval primeval identical God's will sense means the primeval unity ; understood
truth
orally, turn,
wh i ch,
identical with
'Christianity before Christianity' and with 6,8); primeval based truth on a became wrong moral The are 'Heresy' in the strict its of accompanying
(see Pan.de any of
manifest with the advent of Christ. erroneous scripture interpretation f irst with
doctrine
aberrations; this sense applies to gnostic heresies. four religious conditions mankind, however,
32
C . Riggi, 'Il termine', pp. 12 and 15.
3 0 Anti-Gnostic Polemics called Judaism 'heresies' in the broad sense, so that Hellenism and are counted as heresies only insofar as they have
been contaminated by the Babylonian virus and fragmentated. The first four religious conditions are affected by a
genera 1ly negative character, always being contrasted 'Christianity f ide 6,8), which existed since the beginning' fight with
with
Pan.de between and
3 4
thus
illustrating
3
the permanent they coexisted
1 ight and natural
d a r k n e s s
3 ;
while
faith
law,
they
never
completely
coi ncided
w i t h
t h e m
and, by comparison, were always found lacking. How then does the character of the gnostic heres ies, description? of a
in the strict sense, emerge from Epiphanius's He operates accord ing to a fairly clear
definition s imply
gnostic
heresies.
First,
they
never
represent
false doctrine » but £ 3 1 s o incluci e wrong conduct. is always connected with heteropraxis. completely Elenchos. of heresy absent either
Heterodoxy
Such a view was not from the side of
from Irenaeus's work or
But only
in the Panarion emphasized. f ide will
is the practical The have
systematically faith on in Pan.de and
exposition same
orthodox emphas is
the
double to
faith The
practice, of
the
latter as
tending
asceticism.
conception
heterodoxy
including
33
S e e E. Moutsoulas, 'Der Begriff "Häresie"', p. 370.
In general, 'heresy' seems to be synonymous with diversity itself, multiplicty, division: e.g. when it is said that, at the stage of Scythism, there was 'no heresy, no diversity of opinion' (ou* aîpeots, c& yvdm èxépa. xal èxêpa: Pan.haer. 2.3; see also 1.9; 3.9). In Pan, de fide 9-12, the emphasis is put on the multiplicity of sects and practices in India, Egypt, Greece, etc., providing an illustration of the view that outside the Christian Church the original unity is fractured. However, the state of affairs is not always so straightforward. E.g., after dealing with the first twenty 'heresies ', Epiphanius says (Pan.christ• 4.7): 'I have talked up to now about eleven heres ies *, meaning the divis ions among Samaritans and Jews only, and refusing in this case to call the first stages Th e s t>udy of Ep îph ân m s s concsp t of heresy / s i s one can see, is a frustrating one; Epiphanius's views were not always consistent, and his conflicting statements are d i f f i c u l t — i f not sometimes impossible—to reconcile.
34
Epiphanius heteropraxis personal might have with been prompted by
79 Epiphanius's or by better
experience
gnostic
behavior
information concerning mere polemical device.
gnostic rituals;
it can also be a
However questionable his view was
that heres ies originate in moral failure 35 , Epiphanius saw the essential connection between religious belief and moral conduct ; to his mind a doctrine would hardly be false if it were not accompanied by a wrong practice. Moreover, understand heresy. Gnos tics from the sections arbitrarily about on the Gnostics, we can scriptural divine passages, cosmic. what Epiphanius sees as the content of gnostic among genealogies, and
Selecting speculate they
They talk about heavens and archons, Hebdomad about which rein to vainglory fraudulent ( their curios ity, to their 24.10.6; (Pan.haer. love
and Ogdoad, and
imagine all sorts of myths to give free for disputes Through the 35.2.1-2). 26.1.2) their
(Pan.haer.
myth-making
Gnostics
deceive people.
They say that the world was made by angels
;
KO c r yo i r oi en. àyyéXox
this feature seems to be peculiar
to the Gnostics), not by the true God, so that the material world is seen as evil. On that basis some them ) abstain practice licentious advocating from immorality In it the seeds of light (or of the Soul ) have been scattered and must now be gathered again. (the Encratites and those who resemble the on world far is an and its elements. they the Others that soul, device the same from basis ; think
conduct, immorality
polluting
contributes to its liberation; these people know well that appealing propaganda (Pan.haer. 24.3.8; 25.2.1). Concerning Christ, they teach Finally they proclaim
one form or the other of docetism.
35 See C. Riggi, ' Il termine', p. 25. The connection established between heresy and libertinism is as old as Christian heresiology. See F. Wisse, 'The Epistle of Jude in the History of Heresiology', Essays on the Nag Hammadi Texts in Honour of Alexander Böhlig; ed. M. Krause, Leiden 1972, pp. 133-143, esp. 137 and 143; and 'The "Opponents" in the New Testament in Light of the Nag Hammadi Writings', Actes du Colloque international sur les textes de Nag Hammadi d'août 1978 (ed. B. Bare), Québec (forthcoming).
3 0
Anti-Gnostic Polemics (or of the
that there will be no resurrection of the dead body),
This picture of the Gnostics might be the product of much fantasy on the part of Epiphanius and might result from a systematic generalization. Gnostics thus understood It does show, however,
that he had a clear picture of them and it is against the that he thought he had to launch his attack ; for, like Irenaeus, he perceived the disrupt ive character of those be1i e f s and was aware that Gnostics were discrediting the church 27.3.3-5). To Ep iphanius, gnostic heres ies are rooted in grounds alien shaped false bad sided false to the catholic he of saw faith. the Building views upon as previous been astrology, heres iologi sts, reading gnostic the having in the eyes of pagans incapable of (Pan.haer. distinguishing between true and false Christians
by a series of bad
influences:
magic,
scripture, he gave of the an
devil1 s
inspiration, to Greek the they
intellectual sickness, moral failure. influences, and the as ph ilosophy secular author of educa t ion.
Among the immediate role
important
Without be ing as oneEpiphanius with saw
the Elenchos, philosophers,
false doctrines of the Gnostics as running parallel to the doctrines which entertain some connection. 'their
TMÔETAE,
For instance, for him the root 32.3.8) lies in arts and in the
).
of the heresy of the Secundians (Pan.haer. excess ive
£YKUKAI'CTJ
educat ion
TE KOL
1ibera1
The
sciences, and in Platonic thought ' ( 6C mepßoX/jv 6c xfic éxeiîvou
ntowvuifis clearest
s tatements Origen 3 ^.
of
th is view wi 11 be found you have been
in the bit by
sect ion on a wicked
'You, Origen,
3 ^Strangely enough, Origen is seen as the father of all heresies and the instigator of Arianism (see Pan.haer. 64.4.2; 76.3.5; Epiphanius's letter to John of Jerusalem, in Jerome, Epistola 51, PL 22, cols. 517-526, or PG 43, cols. 379-390 ; see also D. Amand, Fatalisme et liberte~dans l'antiquité grecque, Louvain/Paris, 19 4 5, p. 451). Ep iphanius's relation to Origenism has been studied by J.F. Dechow, 'Dogma and Mysticism in Early Christianity. E p i p h a n i u s of Cypru s and the Lega cy of O r i g e n ' .
Epiphanius viper, I mean your wordly instruct ion. . . 1 ( ïïaiôeias education ( ) (Pan.haer. 64.72.5). 'EXXrvi i c r i s iron 5e iocs
81 Koay i i c f j s rpo-
'You too, the Hellenic ) has made you blind for
the truth...' (Pan.haer. 64.72.9). The thesis of the dependence of heresy upon Hellenic philosophy (Adv.haer. was 11 . only pointed who to marginally by Irenaeus to 14.1-6) positively resorted
philosophy throughout his Book II. received by Epiphanius. the Arians. Epiphanius devil's gnostic having
After being generalized
by the author of the Elenchos, the thes is is now willingly The thesis is now applied not only to the Gnostics, but beyond them, to Origen as well as to Without distinguishing between use and abuse, is not are far from counting philosophy Philosophic called divis ions rejected, and heres ies among but schools, men. all as Not links (Pan.haer. among well 5-8) only the as for is
inventions. sects, introduced thereby
philosophy Christian
also
between
though t
the
ancient
3
philosophical to attenuate
tradition 37 .
The ascetic Epiphanius can only see a sharp
Christentum' **;
opposition between 'Antike und
Dissertation, University of Pennsylvania, 1975. Origen embodies mos t of the errors reviewed in Panarion. Epiphanius had been so manipulated by Theoph ilus of Alexandria that he conceived a most violent hostility against Origen and devoted to him his longest notice (122 pages in Holl ' s edition, more than are reserved to the Arians or to the Manichaeans). Accumulating massive distortions upon formulations taken out of their contexts, Pan. haer. 64 on Origen i s but an ampl i fication and aggravation of the De resurrectione of Methodius of Olympus, from which large sect ions are quoted ( 64 .12-62). On this see M. Villain, 'Ruf in d 1 Aquile'e. La querelle autour d'Origène', RechSr. 27, 1937, pp. 5-37, esp. p. 8. On Epiphanius's sources for the biography of Origen, see P . Nautin, Origène I. Sa vie et son oeuvre, Paris 1977, pp. 202-217.
37 Epiphanius's negative attitude toward images consistent with th is view. See Jerome, Epistola 51.9.
is
3 ®See W. Schneemelcher, 'Epiphanius von Salamis', RAC V, Stuttgart 1960, pp. 909-927, esp. pp. 910, 923-926. F. Wisse has proposed a further explanation for Epiphanius's nervousness toward Hellenism. He s u g g e s t s that
3
0
Anti-Gnostic Polemics
this opposition would amount to an illegitimate compromise. Such on an intransigent concrete attitude who, is in not the limited mind of to the a i ntellectual the spheres of heretical doctrine ; it also bears 'heretics1 no such have rights (Epiphanius is the one who
heresiologist,
denounced a group of Gnostics in Egypt and had them driven out of the city: Pan.haer. 26.17.9 39 . Did some Does our discussion he really understand lead us to think that Epiphanius He surely perceived
had a clear insight into gnosis and gnostic doctrine? gnos is? important tenets of the gnostic sects.
But it is difficult
at th is point to say whether he was able to perceive their unifying principle, to relate the tenets to each other, and especially to a fundamental gnostic insight. freely applies For these tenets to different about He sometimes just in sects,
order to round out the picture and make it as repulsive as poss ible. instance, talking the Nicholaitans, but short of information on th em, he explicitly borrows the lacking informat ion from other sects This kind of of extrapolation of ten in character abstractions, (Pan.haer. 25.2.1-5). his of reports the the spite heavy gives
accumulation of odd sayings and scandalous details. At first glance, it seems that Ep iphanius is not
looking for a real insight gnostic views are utterly
into gnosis, the way irrational. He
Irenaeus, repeatedly
or even the author of the Elenchos, was.
To him, indeed,
Epiphanius's attitude might very well reflect the horror of the Christians in face of Emperor Julian's attempt at making paganism, or Hellenism in the religious sense, into the state religion (361-363). The revival of paganism that had taken place some fifteen years before the writing of the Panarion would have left vivid traces in the psyche of church leaders. Gregory of Nazianzus exemplifies how a Christian of a different character from Ep iphanius's could also ^ remain haunted by the f igure ^ of Julian and his Les invectives contre Julien de Grégoire de Naz ianze in L'Empereur Julien. De 1'histoire à la légende (eds. R. Braun and J. Richer), Paris 1978, pp. 89-98. 39see S. Le Nain de Tillemont, Mémoires X, p. 488.
Epiphanius returns feminine solid things to this judgment. The devi1 'always appeals
83 to
imaginations, pleasure and lust, that (Pan.haer. introducing 37.2.5). Gnostics
is to say, 'think the they )
to the feminine ignorance which reason...' they are
is in men, and not to the although
are mysteries,
are nothing but mockeries of mimes ( full of absurdity and nonsense. (Pan.haer. 37.3.1-2). of indicating
ylyoXoyriuaTa
For these are truly myths"
Such judgments obviously fall short
the precise points to which Epiphanius took
exception in the gnostic doctrines. We can best discover what, for Epiphanius, constitutes the unacceptable core of gnostic teachings by following an indirect path. Instead of looking at his exposition of gnostic systems, we may find the points that offended him
by examining his refutation.
3.
The Core of the Refutation In most sections is refute way : But - truth but is dealing by with two gnostic 'these refutes heresies opinions itself, the are and
refutation, 'These the
introduced
recurrent
formulae.
opinions
themselves ' , Madness
refuted by truth'. following
The first formula can be spelled out in
wickedness is broken in itself internally, turning into its own overthrow. need for aid, is always steadfast; it has no and is confirmed self-confirming,
2 6.3.2; 31.34.1; etc.).
The second formula is encountered ' 11 is evident, confidence; Each the
in many variations of the following form: the truth itself see time also the 21.5.1; formula is your refutat ion *. 24.8.8; not only 24.9.1; introduces
Bardesanes, how badly you have misplaced your
(Pan.haer. 56.2.11; and qualif ies
56.2.12; etc.).
refutation, but also amounts to a summary of it. consider them separately.
Although
the contents of both formulae at times overlap, we should
3 0 'Heresy refutes itself' What need say, for no overthrow
Anti-Gnostic Polemics
is meant by such a formula, is that there is no any use sophisticated for any argumentation person to in order to the opinions just presented. intelligent 'There is, I dare refute these
things from scripture, or from examples, or any other fact. Their foolish fiction and adulterous action is obvious and easily detected by right reason' appeal to sound judgment argument by reason. especially when this the (Pan.haer. 26.3.2). The is but a weak echo of Irenaeus's
In Epiphanius's mind it is superfluous has just been presented it sufficient of the connected or with in an In
to develop a real argument in order to refute the heresy, the heresy inimical way that does part of the work of refutation. instance Epiphanius quality and are of of the deems opinions, to on be a of author of them. reading called
to point to these The of an
opinions, opinions laughable ; scripture. the like.
the behavior rest
said The The
incons istent, wrong these opinions is
contradictory, is
they
partial
author related
impostor, a fraud, a deceiver, a fanatic, a sycophant, and behavior corrupt, obscene, filthy, insane.... Is it not right, then, to compare such
a doctrine to the spite of an evil serpent? 'Truth is your refutation' The formula, content of the second to grasp on of formula the meaning is is of less th is on a
straightforward. presupposition.
In order His attack
it is useful
to recall
Epiphanius's heresies
fundamental based The
vision of a universal history whole tradition of truth—truth
truth and error 4 ^.
that has existed since the
beginning as the truth of the Christian Church 43 -—is called
4
^See P. Fraenkel, 'Histoire sainte', pp. 188-191.
41gnly rarely does Ep iphanius mean rat ional or philosophical truth. As example, we might quote the notice
Epiphanius upon after to give the lie to the is tradition of error.
85 Thus, the (see the
saying the
'Truth
you r that
refutation', is, to
Epiphanius
regularly gospels,
refers
to Moses,
the prophets,
the Savior, scripture
apostles ; of is by by
Pan.haer. 44.4.3). The first meaning truth such the a of scr iptu re refutation refu tat ion text, Irenaeus's the formula you r scripture. Epiphanius is a further is best from is therefore: thus refutation, will text 42). echoing a
The spelling out of then follow
right
here
against
their by the
interpretation. takes the
The procedure of quoting
illustrated
long section of Marcion trouble scripture Marcion,
(Pan.haer.
Here Epiphanius mutilated from Paul ) . Ep iphanius wh i ch
Marcion's
(78 passages from Luke, 40 passages or to 'from va r i a nts the very when needed, of
Then after pointing demonstrates, stands: creator,
to the changes made in these texts by remnants scripture
Marcion retains'
(42.9.5), the truth
for wh ich the ch u r ch
incarnation, agreement of the two Testaments, Godinspiration of the prophets, divinity of Christ.
The texts retai ned by Marcion suff ice to refute him; truth itself refutes him. When s ide with Of course, the rest of scripture which with the Gnostics is rather a of is rejected by Marcion confirms Epiphanius's position. the disagreement the li teral matter of interpretation of scripture, Ep iphanius tends to and "simple" interpretation
on Stoicism (Pan.haer. 5) which has been studied by D. Amand, Fatalisme, pp. 440-460. Amand shows how weak Epiphanius is when he engages on philosophical matters. His exposition is summary and inexact: Zeno of Elea is confused with Zeno of Citium; Stoicism appears to be prior to Platonism; the Stoics would believe in metempsychosis ( D. Petau has an indignant note on this mis representation in PG 41, col. 201, n. 46). Epiphanius is following textbooks, and bad ones. His refutation is made of arguments 'd'une incroyable banalité' (p. 4 58), peppered with heavy irony and cheap shots. His stronger arguments, such as the moral arguments against fatalism, go back, as Amand has shown, to Carneades and seem to have reached Epiphanius as fossilized commonplaces (pp. 458-460).
3 0
Anti-Gnostic Polemics V „ \ For truth speaks in plain words (old y.iupcûv vox. Pan.haer. that the clearly 35.3.2); angels says are that for they instance, or his say Even do not are To does to not and say it creators,
scripture. scripture opposed more
cotAwv AÔYCOV f\ àAnôeL'A): God, but
administrators the allegorical condemnation stick by
servants
{Pan.haer.
40.4.2).
or otherwise (see
is to imagine a fraudulent myth. is included Pan.haer. sense 64.4.11). Those who
interpretation
in Epiphanius 's impious
the plain
of scripture abandon 69.71.1). says that of
are called
( Origen is called fi.TH.CTXE, as we saw), demons, prestidigitators , miserable.... Thus when They the simplicity of the truth itself refutes Holy Spirit (see Pan.haer. Epiphanius heres ies, he means
a statement
the faith based on the
literal sense of scripture. The council of Nicaea had as
Moreover, this truth was found produced Fraenkel such a creed, in and the
in the creeds which had been composed by Epiphanius's time. Epiphanius was an ardent defender of Nicaea 69.11.1). Furthermore, (see Pan.haer.
mentioned 4 2 ,
Äncoratus Epiphanius had already explained the faith on the basis of a creed that is one of the sources of the creed of Constantinople, attacks gnostic (TTILAXIC dAnôeiac 381 (see or D 42-45). states the Moreover, 'fai th when of he tenets
:
truth'
Pan.haer. 24.10.7), Epiphanius repro(the 'parts' of the faith: TO T C S L V in of uépoç the God, birth (see sometimes and even and
duces the articles of the faith Trie order TtCcrcetüC; found of Pan .haer. in the all beings, 9.2.3),
ÂAACÙV uepœv TFIC NCOTEXOG : Pan.de fide 21.1; creeds : trini ty
unicity of
creation e.g.
Christ 's divinity the resurrection
his
from Mary,
the church,
the dead
Pan.haer . with
36.6.3-5; Pan.de fide 14-18). the articles of the creeds that will
It is the combination, then, of a literal reading of scripture confound heresy. The venomous doctrine can be stopped with
P . Fraenkel, 'Histoire sainte', p. 178. This had already been noticed by S. Le Nain de Tillemont, Mémoires X, p. 505 (on the two cr66cls conc 1 uding the Ancoratus).
42
Epiphanius the ant idote of Christ's teachings stated by the Church. and cast
87 (Pan.haer. 23.7.1-3) as spelled out
This contrast hardly amounts to an
argument; it is, rather, an assertion briefly in a mold of invectives 43 .
The preceding analysis should have made clear how the style of refutation has changed since Irenaeus. by means of philosophical, scriptural, and Epiphanius theological to determine no longer carries on any serious debate with the Gnostics arguments. There is no longer any wrestling
where the authentic tradition is.
What we find, aside from To is
a virulent attack on all opponents, is a dogmatic appeal to a static truth formulated in the articles of the creed. point out an inadequacy by means of the creed, it thought, is ipso facto to perform a refutation. we have here the answer what Ep iphanius was fou nd low teachings explicit doctrines 1 ies in the mos t esteem of from the the his offens ive in own which faith. in His hardly as the
We think above : gnostic held the in
to the question we raised they
formulations different reasserting
interest goes
beyond by the
indicating them in a reproachful tone. one truth Church. such from From Epiphanius does so with pronouncements and
His real interest formulated
the firmness of one who with the Church that It is to the that
completely identif ies himself with the Church that has made official receives derive these pronouncements strong same feeling at of times with submissiveness. that his Epiphanius the sty le representing arrogant
this double the the
identification identif ication
seems
majority.
receives character
authoritarian,
even
singles him out among Christian heresiologists.
43 F o r an illustration of Ep iphanius 1 s style argumentation, see the Appendix to this Chapter.
of
3 Append ix :
0
Anti-Gnostic Polemics
The Style of Argumentation in Pan.haer.27
We shall illustrate our comments on Epiphanius1 s style of argumentation by present ing a paraphrase and summary of the It section should be 25 on the Carpocratians that, wh ich (Pan.haer. 27.1.1-8.4). are of recalled and 26, for Epiphanius, heresies deal with many groups
cumulative and build upon one another. Pan haer.
Thus, coming after
herctics (Nicholas and those connected with him) the heresy of t.ie Carpocratians The refutation is is seen as a climax ; it is presented too, in the sense against that it as having the most deceitful beliefs and immoral practices. cumulative, already refers to arguments advanced other sects
and still valid in the present case. The 1) section, reduced to its essential structure a and
content, goes as follows: Introduction appeared. (27.1.1-2) . He established Then an certain Carpocrates illegitimate His ways are
school in which to teach his pseudo-doctrine. the worst of all. heresy. 2) Exposition of
He contributed his share to the gnostic the doctrine and practices of the Carpocrates splits the world
Carpocratians world.1
(27.2.1-6.11).
above into an unnameable Father and angels who created the Jesus was born like all other men, i.e. from Mary In and Joseph, and was essentially like them, but his sou 1 had more power since it remembered what it had seen above. order to show his power and to escape the angels who made the world, he underwent all earthly experiences, including lawless ones 2 . These experiences liberated his soul which
the Gnost ies, the world is evil (and the angels as well). But this is not the Carpocratian interpretation, according to which nothing is evil in itself as Epiphanius himself reports below: Pan.haer. 27.5.8.
2 Coming after Pan.haer. 26, th is passage seems to mean that Jesus taught the same licentious practices as those attributed to the 'Gnostics-Borborites'.
Appendix reascended to the unknown Father.
89
Other souls will have the same destiny if they also go through all experiences. the Jews Jesus strength magical and of and perform their did, they might souls If they despise the practices of actions through To that Because than the end, these even thus rise above him even more sacrilegious manifes ted. are welcome.
occult practices the Church and pe rform with
instruments of Satan call themselves Christians, they heap scandal upon the pagans. debauchery and discredit her in the eyes of they spend of of their the time body, in and all all every kinds homosexual Furthermore,
heterosexua i action
member
kinds of filth and unnameable crime, thinking that if one performs all these act ions during th is 1 i f e and leaves no deed undone, his soul will not have to be reincarnated ; it will will by escape from the body-prison and be free. The body not be saved. nature. They They dare to base such use They painted worship pictures them and and teach ings on statues of
Jesus' words.
Nothing is evil to them since no act is evil
philosophers, and claim to have portraits of Jesus made by Pontius rites. 3) of such Invective impostors. foolish? foolish by (27.7.1). Some I say: We must resist these people are But not not these only are teachings fools led are astray Pilate. perform heathen
by all means and refuse to pay attention to the teachings evidently seduced 4) The of that agree.
things; even
wise men
unless their minds are established in truth. Refutation (27.7 .1-8 . 3 ) . 4a) arguments creation such p ract ices Reasoning angels. They are refuted by themselves. (27.7.1-8) already again angels by opposed here. is to full Simon of and his the magical doct r ine affirms than the Moreover,
apply through
inconsistencies. weaker
a chain of dilemmas, Epiphanius the true God (The truth
a doctrine makes
Th i s is myth and fable.
is that God
himself created all things, visible and invisible.)
3 They evil. say
0 that the world
Anti-Gnostic Polemics and a X X it contains is the
But again they contradict themselves.
For since a
part of the world, be saved, can the comes-3.
i.e. the soul, attains salvation,
whole cannot be said to be utterly evil. angels themselves be bad, from
If the soul can whom the soul
it cannot be bad, though created by angels ; nor
3 As D. Petau remarked (PG 1, cols. 375-376, n. 92), the refutation is not clearly in line with the exposition. From the refutation it appears that Epiphanius attributes two doctrines to the Carpocratians: 1. The world and all created th ings have been made by angels, not by the good supreme God. 2. The world and all that is contained in it are counted among the evils. - Epiphanius refutes both points. First, th is would make God weaker than angels ; s econd, since a part of the whole universe attains salvation, the whole cannot entirely be excluded from the good. - But the second statement contradicts the exposition according to which Carpocratians hold that nothing is evi 1 by nature. This shows that this refutation is thought to apply to other groups as well. Epiphanius attacks elements he has not clearly stated ; similarly, he fails to attack many elements he has presented. The inconvenience implied in such a procedure loses some of its substance if we keep in mind the cumulative character of both heres ies and refutations. However, th is procedure obviously does not lend itself
groups
Epiphanius
is
describing.
Further,
Epiphanius1 s
incons istencies that we are often confronted with the impossibility of unders tanding what he is say ing. Some of the problems connected with Epiphanius ' s method are analysed by R.M. Hübner, 'Die Hauptquelle des Epiphanius (Pan.haer. 65) über Paulus von Samosata', ZKG 90, 1979, pp. 5 5-74. Hübner states, after comparing Epiphanius and his main source for Pan.haer. 65, Pseudo-Athanasius: 'Diese Gegenüberstellung [von Epiphanius und Ps-Athanasius] zeigen immerhin, warum man den Epiphanius an vielen Stellen nicht verstehen kann. Das dürfte auch für andere Kapitel des Panarions ... lehrreich sein' (p. 69 ). The result of Hübner's analys is, a contribution to 'eine umfassende Quellenanalyse ' (p. 58), is that Pan.haer. 65 is 'ohne Quellenwert 1 (pp. 58, 71). ' Au f die Berichte des Epiphanius [ist] kein Verlass, solange er uns seine Quelle nicht nennt ' (p. 72 ) . Hübner even thinks he has caught Epiphanius in the act of 1 Fälschung' (p. 72) of documents, thus concluding a severe analysis with a negative verdict.
App© Il cl X3£ 4b) The case that same of They are refuted by truth is illustrated has more by (27.8,1-3), and by
91
argument Jesus. is lies.
scripture mind
the
Whoever noth ing For
a solid foolish was
must
recognize
there of
than
Carpocrates 1 and
factory
if Jesus
born
from Joseph
Mary, as they only must
say, and and is
if he attained Mary the themselves angel for who
salvation, be saved,
then not but the no
Joseph
demiurge longer
also--that be calied from
made
them—can them
def icient;
through
Jesus
proceeded f rom the
the Father. th is
If it is said that Jesus came is reduced to the same
angels,
theory
absurdity as was shown above. born of the virgin Mary, etc.) 5) Invective and
(The truth is that Jesus was
analogy
with
serpents
(27.8.3). It is
Such mythmaking
( ôpctuaTUpyriya
) will not stand up.
f ilied with spite and poisonous (\w6ous ) doctrine. 6) Transition to the next heresy (27.8.4). We will
return to this heresy again xatei, like the head of a dragon wi en
After throwing it down of the stick of
the help
their destruction as promised
(with the help of God).
CONCLUSION: CHRISTIAN POLEMICS AND THE EMERGENCE OF ORTHODOXY The writers angry Great idea that the polemical works of early resulted We in the overth row that of Christian wouId some of
heresies had
doubtless betray an inflated confidence in words -- and in words. can concede refused, name Irenaeus in influence on relegating Church ; he to grant counter and even Justin, used the to the Gnostics to the margin of the probably of imitation 'Christians '. at times But he quite his that is,
them the his
enjoyed such sophisticated broad attack. was
inf luence less by the force of the arguments opponents he (arguments in the difficult to appreciate fully) than by course of
theology
developed
Furthermore, even before he started his work the in the victorious process of a main stream. That to the triumph
Church was already engaged to lead
orthodoxy did not develop directly or exclusively from the polemics against heresies. If we look at the period and the situation in perspective, we can state that orthodoxy developed out of a network of concrete decisions which the Church made in s i tu at ions of conf1i ct Harnack 1 s itself such as the confrontation with the Gnostics. With the this we subscribe to judgment the that book
gnostic
movement
erased
from
of historyl.
Assuredly our judgment, as well as Harnack's,
1A. Harnack, Zur Quellenkritik des Gnosticismus, Leipzig 1873, p. 81: 1 Diese (= die gnostische Spekulation) hat sich selbst, freilich einem zwingenden Entwicklungsgesetze folgend, ausges trichen aus dem Buche
G îne h same e view was expressed by E. Schwartz in 1908: 'Diese (antignostische) Polemik ist es nicht gewesen, was ihr (der Kirche) den Sieg brachte, sie setzt sogar, wenn die spärliche und chronologisch unsichere Überlieferung nicht täuscht, mit vol 1er Kraft erst ein, nachdem der Kampf entsch ieden ist' (quoted by K. Koschorke, Hippolyts Ketzerbekämpfung und die Polemik gegen die Gnostiker, Wiesbaden 1975, p. 93). deC
Conclusion is to some extent conditioned heresiologists. whether Gnostic incorporated
93 by the presentation of the in genera 1, howeve r,
1iterature
in patristic writ ings or encountered
in the Nag Hammadi library, lends support to that judgment. gnostic movement did not appeal to large segments of the population. retrospect, center. of its It was incapable of—and perhaps uninterested a ma i ns tream of for position. was Seen in a principle f ragmentation too active a rallying and because to its was in — r e p r e s e n t i n g with in the gnostic message
movement its
it to become stages, an gnostic
Because Christianity was aware of the universality from earliest as the perceived Gnosticism religion, obstacle
Christianity becoming forced own
a universal
movement
to recede. It is
But the movement did not wane without with this point that the following
having allowed Christian polemics to find and develop their style. 1. observations are concerned. The first observation that our analys is sugges ts concerns the evolution of the style of Christian polemics. In considering the sequence of three centuries of polemics, represented here by our three Irenaeus authors, had for one cannot the help being struck by the decline of argumentation (of 'sachliche Auseinandersetzung1). pattern of f ixed his and general of exposition-refutation of views The to be discussion was
compendium somewhat
criticized
already developed
bi ased.
refutation,
however, was
for its own sake and had a broad basis anchored in rational and scriptural elements, the conjunction of which resulted in to theological serve a argumentat ion. problemat ic views in the to In the Elenchos that but of the the A as than refutation is included in the expos it ion which is thus made highly of does thes is, pagan reducibility refutation invective—the reminder of gnostic appear philosophy. either
Panarion,
content of which Church doctrine
is no more rational formulated in
the heretical doctrines just exposed by Epiphanius—or as a as offical
3 0 Anti-Gnostic Polemics pronouncements conform. However fail to We to wh ich the can therefore such 'stubborn' heretics refused to say that the might the write development be, it of
Christian polemics is marked by dialectical impoverishment. disappointing be a result It mirrors heresiologist who were cannot in the instructive. made the changes an
situation that occurred i ncreasingly attack against enemies
from the time of Irenaeus and that impressive and less more abs tract
threatening, rather than ponder over an argument and put it to use in an actual debate. 2. s tudied their of There is no doubt the not, that the of sure, three works in the we have battle of the cut to We reflect is emergence to be cliches an
against heretics. views
The caricaturing of one's opponents and invention But in their works it receives a role The polemists maliciously they are not of averse manifestations heresy. context ;
Christian polemists. increasing on s tatements focussing from
importance. their questionable
shall not enumerate all the cliches thus encountered ; most of them can easily be gathered from the previous chapters. But we do wish (and been to emphasize here of by the the the portrait heretics) beginning of as of the it the heresiarch 11 has consequently argued that,
appears at the end of three centuries of polemics. second century, the tendency of Christian polemists was to identify the heresiarch with the traditional picture of the eschatological the second false prophet 2 . century, and this Starting with the end of aspect of eschatological keeps his
progressively fades out. devi 1 remains false prophet
But a certain connection with the character This dark side will be Increasingly, in
the heresiarch
and false teacher.
developed to its ultimate possibilities.
the writings we have considered, the heresiarch is regarded
2 F . Wisse, "The Epistle of Jude in the History of Heres iology" in Essays on the Nag Hammadi Texts in Honour of Alexander Böhlig, ed. M. Krause, Leiden 1972, pp. 133143.
Conclusion
95
as demented, anxious to make himself conspicuous by his odd ideas. give by He is filled rein to It with evil intentions etc.). In (to break this way the the he the unity of the community, to make other people sick also, to free their pride, heresiarch does the devil 's will; he is inspired, possessed the devil 3 . This heresiarch debased complete; to the is not with surprising the devi 1 then that when explains why speaks, he can only utter blasphemies, connection is not only The from of a a mentally immorality advocating 'morbid' sick person ; through a is declared of the sexual a morally is even heresiarch licence, and
•procès d'intention' the heresiarch being. it goes
full-fledged libertinism, human sacrifice and ritual crime, furthering rigorism encratism. itself Heresy is always the product of contamination of the soul soul expresses of the with The features been heresiarch the same
by the devil, and this contaminated in endless deviant ways. followers alike; they
are shared in varying degrees by both innocent and vicious have injected contagious virus. The portrait of the heretic thus becomes a caricature of darkness and evil. 1 imb. Difference unity, understood increasingly He must be removed like an unhealthy is taken to be a break of uniformity Church was a of the as in opinions by
as uniformity; doctrinal leaders
perceived
strength amidst the vanishing Roman institutions. The connection of heresy with moral failure heresy by is it) only and with mental or weakness suspected 4 . will (whether be a is born out of moral failure or merely accompanied henceforth this
permanent feature of Christian polemics, though at times it insinuated While feature
3 'Organa satanae', as Justin already to Irenaeus, Adv. haer. V.26.2.
said,
according
4 Such cliches were not invented by the heresiologists. They had already been alleged against the Christians by their first opponents (accusing them of atheism, impiety, debauchery, p r o m i s c u ity, child-murder.. . ) , The heresiologists only received those categories.
3 0
Anti-Gnostic Polemics
reflects the decline of argumentation in the works we have studied, it will remain characteristic of Christ ian polemics even when argumentation reappears in the 12th-13th and in the 16th century. 3. polemics, One s igni f icant characteristic of early Christ ian with a cons iderable that had the import for the following In it, against against against others.
centuries as well, appears clearly in the Elenchos. the very weapons known heretics with in brothers been developed are now bluntly in the for use turned among or pagans,
Church :
Elenchos,
Callistus;
in the Panarion
against Origen,
The use of such a heavy arsenal against brothers who merely differed with an author on unsettled matters seems to have been an irresistible those temptation weak for some authors, Once could to particularly initially eas ily be who were of in argumentation. as Christians, who it
this arsenal had been used against thought turned themselves any against
'gnostic brothers' who happened
colleague
This phenomenon has its corollary: of Ketzerpolemik into Ketzergeschichte heresies ; these
5
the transformation during be the third easily
century. refuting
Heresiologists become less interested in properly individual can most
d isqualif ied if by some way they can be ass igned a place in the traditio haereticorum. par excellence. and of Then polemics as such tend to the polemical weapon recede and deterrent history becomes
This peculiar kind of history subordinates them this from type an following of heretics. freely The uses
every th ing to the goal of scaring people away from heresy dissuading who writes to polemist history
anachronisms
establish
impress ive
genealogy :
P. Nautin has promised a study on the sources of heres iology, wh ich he th inks will be found in the 1iterature {primarily ph ilosophical ) of compendia and epitomae. As we saw above, these are the terms used by A. Hilgenfeld, Ketzergeschichte des Urchristentums, Leipzig 1888.
5
Conclusion attributions authors. 4. can look of recent opinions to ancient authors the
97 and,
vice versa, attributions of past positions to contemporary That done, an essential part of refutation itself is viewed as complete. As a consequence of the change just mentioned the 'heresy' is broadened to an extreme degree. field represented in this monograph as having a triple We at the polemical in the three front— concept of
works studied
against sectarian Christians, against Jewish sects, against pagans (to which the Elenchos and the Panarion add a fourth front against some fellow orthodox Christians). 6 distinguished all Christian deserve counted these three fronts most clearly : then some Chris tian Jewish the Gnostics sects, that are but called finally In the as well. heretics, only Elenchos, Irenaeus first of Jewishare diss idents sects 'heresy ' is 'heresy'
name.
among heresies
The concept
then extended the concept
in the Panarion to pagans (although is used to embrace any departures
has here a double sense as we indicated) ; that is to say, whatsoever Such a Likewise (as in f rom the position of the author and his fellows. process made it necessary revelation Christ's always further revelation back is in history. back
to postulate the event of God's pu shed
itself
Epiphanius) in order to make Adam the first Christ ian from whom all heretics, past and present, stand in a position of departure. 5. A few remarks of each can be made here on the different The accounts in a certain measure temperaments of the three polemists we have studied. temperament we cannot author for the differences claim in their polemica1 styles.
Obviously
to be exhaustive
on this topic, but wish
6we could visualize the situation as follows : is called heretic?' Irenaeus Elenchos Epiphanius pagans Jewish sects sectarian Christians X X X X X X
3 0 Anti-Gnostic Polemics to emphasize We shall the not differences repeat what that we appear said to be most
telling. about Irenaeus, except that he tended to be, if not clearly by temperament, at least decidedly moderate. 'conservative1 He knew how In
to oppose the extremes of moral and disciplinary rigorism, as well as of doctrinal and speculative free-for-al1. bring the Christian movement to ru in. Because of the problems of attribution mentioned above, the author of the Elenchos remains more elusive, for we have to rely exclusively his character. pope his and against and moral on internal evidence to reach He hardly to hides his his But clearly he has a grudge against a lax compromise. in has order both extremes he suspected a subversive element that could
ecclesiastical ambit ions and does not hes i täte to show off learning S imilarly, have an extreme virtue legitimize feelings authority. Epiphanius dislike s trong against The Origen and the Origenists. In the Panarion he appears to for doctrinal compromise.
bishop of Salamis was very respected in his life-time; when he writes against all those who disagree with the official Church, he cannot hide a patriarchal view of himself. strongly his side feels that he has the majority and shows that he wou Id not tolerate He any in the Church on
challenge.
He seems to be more interested in crushing his lead to the futile our polemists. that from It is to a
opponents than in persuading them. To cont inue th is reflect ion would e xercise a role of in the analyzing the area the of psyche of
might be fruitful to see how such temperaments came to play ecclesiastical of normative politics called emergence of orthodoxy. However, we want
consider
the
emergence
Christianity
the dialectics of emerging orthodoxy. use the content of the preceding departure.
These remarks will as a point of
chapters
Conclusion We said above that orthodoxy that wouId historian the developed shape with out
99 of
s ituations of conflict in which the Church was called upon to make concrete decisions If we try the percept ions we the f irst of the Christianity. combine of the running did face and an those themes to say more about these decisions 7 and can mention four seen in following
participant, through these A.
centuries.
Christ ianity the
conf1icts; of the
retrospect, of
crucial with
underlying issues can be formulated as follows: Some situations conflict, accompanying challenge for Christianity, were . encounter with Judaism: danger of remaining encounter wi th the Gent iles : danger of a sect ; its its danger of losing its Christological distinctiveness; losing monotheistic dist inctiveness; . encounter with identity gnostic groups : danger of losing religion; danger of as historical becoming
elitist and esoteric; . encounter with Graeco-Roman cults: danger of idolatry and syncretism; . encounter down with the Roman Empire: religious danger of play ing its distinctive with characte r; danger of danger of
overadaptation; . encounter Hellenistic ph ilosophy: being dissolved into philosophic doctrines; danger of losing its historical character; . encounter with Roman law: danger of losing its prophetic and eschatological character; danger of structural assimilation. B. Some of the concrete decisions that had to be made
pertained to . membership: . discipline: limitation or universality; rigorism or 'indulgence1;
The decisions we have in mind here are those that the Church was forced to take before being able to account for them in a fully rational way.
7
3 0 . authority : . doctrine :
Anti-Gnostic Polemics scripture-tradition or Spirit ; hierarchy, college or people ; positivism or free speculation; esoteric or exoteric ; elitist or popular; . adaptation: partial or total ; rejection of Zeitgeist or coming to
terms with it. The option of for universality, present the in the original to the
message (women,
Christianity, civil
determined of f icers, Such
miss ion
Gentiles and provided an impetus for the admission of all slaves, nationals, a movement illiterate might have became a people, philosophers, 2nd etc.).
been favoured by the denationalizing and 3rd centuries.8
of the Empire in the truly
But Christianity
mass-movement when the Church decided, around the middle of the 3rd century, to re-admit the lapsi9. the second half in was of the to could the 4 th it (e.g. 3rd century, a s tate-church universality prevail. groups extent, reactionary the Encratites, enthusiastic Montan ists), wh ich century. have been If It triumphed in becoming drive had the to the toward before
succeed,
'centrist ' mood otherwise? had to and, drive
How
Extremist ret reat: to (e.g. some the
jeopardized Montanists ) ,
th i s
groups
Judeo-Christ ians rigor i s t
groups
those opposing extravagant
the readmission of the lapsi ), enthusiasts or (e.g. the speculators 'pessimistic
radicals or optimistic
enthusiasts' (e.g. the G n o s t i c s —
in a word, all groups
8 S e e F . W . Kantzenbach , Christentum in der Gesellschaft, Bd. 1, Alte Kirche und Mittelalter, Hamburg 1975, p. 90.
9see Kantzenbach, Christentum, pp. 74, 85-87. l^The Gnostics are characterized in this way by F. Wisse,'"The Opponents" in the New Testament in the Light of the Nag Hammad i W r i t i n g s ' , in A c t e s du Colloque international sur les t e x t e s d e Nag Hammadi d'août 1978, ed. BT Bare, Québec (forthcoming).
Conclusion opting concerns umbrella for some be in form of elitism. 11 re-admitted so But under that even the
101 elitist Church 's no
will (e.g.
slowly
monasticism)
practically
barrier will be put to universality once effective control is established. century, that language.^ It was not only the drive toward universality that led to the building of a wide centrist position. base is always toward concerned necessitated some form of with both by the society those This broad of any To development Progressively, as can be seen in the 4 th control becomes reducible to a control of
institutional stability, upon universality and stability,
which the continued existence of that society depends. plural ism is intolerable. agreement is a necessity. The considered the crisis development of a centrist pos i t ion can
The formation of a wide basis of be
from a different perspective. of existence, arbitrarily, the crisis dates of
In the life of a relevance, the these
social group, three moments of cris is can be distinguished: crisis of could, of early Christianity, we to each of
i d e n t i t y . T h i n k i n g
rather
assign
U s e e Kantzenbach, Christentum, p. 52. l 2 ït would be instructive to compare the emergence of orthodoxy to the contemporary formation of unanimous communities and art if ical societies, such as the communist party, the societies of psychoanalysis, etc. This idea is suggested to us by V. Descombes, Le même et l'autre. Quarante ans de philosophie française, Paris 1979, pp. 124130. In such societies the function of a common language is decisive to the point that the ascendency of the institution over the individuals can be reduced to the domination of a language. The social bond is so grounded in language that altering the language is perceived as a subversion of the community. l^These moments are suggested by J. Moltmann's analysis of the contemporary scene in terms of relevance and identity in The Cr u c i f ied God, London 1974, pp. 7-21, and by Th. Baumeister*s application of Moltmann's analysis to early Christianity in Montanismus und Gnostizismus , TrTZ 87, 1978, pp. 44-60. Baumeister thinks (pp. 44-45) that our time of rap id social change presents many similarities with the beginnings of Christianity.
3 0 moments: the time
Anti-Gnostic Polemics the year 70 (the loss of the home-base), the year of Hadrian), from the year 150 on (need for
13 5 (adaptation to the Zeitgeist and ecumenical momentum at strengthening cohesion brought about by an adaptation that might go too far, accompanied by the temptation to form a ghetto against the dangers of dispersion in the surrounding world). them as Instead of thinking and of these moments in of an the historical sequence, however, it seems more accurate to see complementary permanent features That Christian movement in the f irst centuries. is, the
existence of the Christian movement is always threatened by persecutions ; the need for adaptation is present as soon as the movement turns to the Gentiles and becomes aware of its universal character ; the awareness of being different
1
(as
well as being most 'ancient ) is expressed in the original message and will be constantly affirmed. While the drive toward orthodoxy is realized through Once
moments determined by both the crisis of relevance and that of identity, it seems to stand closer to the latter. the Christian movement succeeded in establishing once it reached a large social basis in maintaining itself and in the Roman world, certainty distinctiveness
a degree of self-confidence and
about its future, the need to af f irm this was felt in a renewed way. spec if ic difference and
Excessive concern for relevance
had to be tempered by an insistence on what constitutes the the unique character of movement. In other words, the drive toward relevance and universality is limited by the drive toward identity. In movement Total the second rejecting to and its third centuries, through to be and felt the a too Christian series into of the extreme. asserted difference elements the world
exclus ions,
assimilation
retreat
ghetto were both seen as threats to the very existence of the movement. A number of possibilities on both extremes had to be ruled out; again, but this time for the sake of the movement 's distinctiveness, a centrist position had to be developed.
Conclusion In order not to yield to a spatial view of
103 the
emergence of orthodoxy, we may express the same idea «Dre accurately in other words. a
1
Identity is constituted through whereby from it is affirmed 'truth* that from 'dream',
series
of
partitions to be 'old' from it
reality' has
separated
'arbitrariness', those
'new', ' reason' from have talk reached about a
'folly * — stage of and
in short, 'we' from 'others'. partitions, which must the development allows
In order for a group to make majority
consent to have meaning and to correspond to truth-claims. This is not the case in early stages of a movement, nor are these stages the times in which concerns for orthodoxy prevai1. impressed of slowly Such concerns do prevail when the group, which its relevance upon masses and is in the process becoming depends an on institution. The strength of the the strength of its social basis;
might have had charismatic features in its beginnings, has
institution
its authority truth is
is expressed on
and enhanced by appeals to its agreements. It its truth But resides the all in
'antiquity' as well as to the 'majority' it represents; its founded is as doctr inal institution d isagreement also hypersens it ive. Since interprets
oppos ition.
consent, dissent forms obstruction to truth. to disappear. Naturally of the criterion of doctrinal
If the truth
is to recover its integrity, the deviant has to recant, or
agreement
and
consent is the ground of intolerance.
Dissenters, because
the vital threat they represent to the integrity, even They are in league with the arch-enemy who of the movement ; they sell out the
to the very existence of truth, have to be depicted in the blackest terms. wants the ruin
distinctive character of Christianity to the pagans and the surrounding world ; they exclude the majority subtle, and so on. Because of the by being too of the seriousness
threat they represent, dissenters have lost their rights to exist in the Church, even to exist at all. Orthodoxy was thus born in the wake of Christianity's search for its difference and identity. Heavy sacrifices
3 0 Anti-Gnostic Polemics had to be accepted as well as unfortunate losses. not easily cease even after the search had achieving normative self-definition. They did in
succeeded
In this light, it is
but a slight consolation to assert that it is indeed a curious quirk of history that western Rome was destined the determinative to begin to exert influence upon a religion
which had its cradle in the Orient, so as to give it that form in wh ich it was to ach ieve worldwide inflexibly recogn i t ion. orders life But in as an otherwith a wordly religion that despises this world and accord superhuman standard religious and as a tide that has descended from
heaven, or as a complicated mystery cult for intellectual connoisseurs, or fanatical have enthusiasm ach ieved that su c h of
swells today and ebbs tomorrow, Christianity never could recognition. The more conf idence it of to a rigid orthodoxy appears, the more it loses to for the it is that the all
that new for of
the movement opening to the
is sufficiently world. and The
powerful concern in
maintain itself,3-5 u nt il renewed search for relevance cal Is orthodoxy thus appears as a dialectical moment (the moment care distinctiveness identity ) or development In a social movement ; sooner the temptation form of consists later
accompanied by another moment, that of care for relevance. this process and in thinking achieved to wh ich orthodoxy, perfect in one historical final situât ion, has realization
subsequent forms have to be measured and reduced. lure.
Needless
to say, such a temptation has never failed to exercise its
1 . Bauer, Orthodoxy and Heresy Christianity, Philadelphia 1971, p. 240.
15
in
Earliest
S e e Moltmann, Crucified God, p. 19.
BIBLIOGRAPHY Aland, B. , ed. Gnosis. Festschrift für Hans Jonas,
Aland, K. Kirchengeschichtliche Entwürfe, Gütersloh 1960. d 1 Aies, A. La théologie de saint Hippolyte, Paris 1906. Amand, D. Fatalisme et liberté dans 1 'antiquité' grecque,
Louvain/Paris 1945. Andresen, C. Die Kirchen der alten Christenheit, Stuttgart 1971. . Logos und Nomos. Die Polemik des Kelsos wider das
Christentum, Berlin 1955. Audet, T.A. 'Orientât ions théologiques chez saint Irénée',
Traditio 1, 1943, pp. 15-54. Barnes, T.D. ' The Chronology of Montanism ' , JTS 21, 1970,
pp. 403-408. Baumeister, Th. 'Montanismus und Gnostiz ismus 1 , TrTZ 87,
1978, pp. 44-60. Bauer, W. Orthodoxy and Heresy in Earliest Christianity, ET eds. R. Kraf t and G. Krodel, London/Philadelph ia 1972/1971. Benoit, A. Saint Irénée. Introduction à l'étude de sa
théologie, Paris 1960.
3 0
Anti-Gnostic Polemics
Julien de Grégoire de Nazianze', L'Empereur Julien. De l'histoire à la légende, eds. R. Braun and J. Richer, Paris 1978, pp. 89-98, Blanchetière, F. 'Le montanisme originel I', RevSR 52,
1978, pp. 118-134. . 'Le montanisme originel II', RevSR 53, 1979, pp. 1-22.
Bousset, W. Jüdisch-christlicher Schulbetrieb in Alexandria und Rom, Göttingen 1915, Brox, N. 'Antignostische Polemik bei Christen und Heiden',
MTZ 18, 1967, pp. 265-291. — . rvwciTiKOi als häres iologischer Terminus ' , ZNW 57, 1966,
pp. 105-114. . ' Der einfache Glaube und die Theologie. Zur
altkirchlichen Geschichte eines Dauerproblems', Kairos 14, 19 7 2, pp. 161-187. —. 'Juden und He iden bei Irenaus', MTZ 16, 19 6 5, pp. 89106. — . ' Kelsos und Hippolytos. Zur frühchr istli chen
Geschichtspolemik', VC 20, 1966, pp. 150-158. —. Offenbarung, Gnosis und gnostischer Mythos bei Irenaus von Lyon, Salzburg/München 1966. —. 'Offenbarung -- gnostisch und christlich'. Stimmen
der Zeit 182, 1968, pp. 105-117. Butterworth, R. Hippolytus of Rome: Contra Noetum,
Heythrop Monographs 2, London 1977.
Bibliography Carpenter, H.J. 'Popular Christianity and
107 the Theologians
in the Early Centuries', JTS 14, 1963, pp. 294-310. Daniélou, J. Origène, Paris 1948. Dechow, J. F. 'Dogma of and Mysticism and in Early the Legacy Christianity. of Origen',
Epiphanius
Cyprus
Dissertation, University of Pennsylvania, 1975. Descombes, V. Le même et 1 'autre. Quarante ans de
philosophie française, Paris 19 79. Dillon, J. The Middle Platonists. A Study of Platonism 80
B.C. to A.D. 220, London 1977.
Duchesne-Gu illemin, J. 'Dualismus', RAC 4, Stuttgart 1959, cols. 334-350. Dummer, J. ' Die Angaben Uber die gnostische Literatur bei
Epiphanius, Pan.haer. 26', Koptologische Studien in der DDR, Halle 19 6 5, pp. 191-219. —. 'Ein naturwissenschaftliches Handbuch als Quelle für
Epiphanius von Cons tant ia 1 , Klio. Beiträge zur alten Geschichte 55, 1973, pp. 289-299. Picker, G. Studien zur Hippolytfraqe, Leipzig 1893 . F ischer, J.A. 'Die
1
antimontanistischen
Synoden
des
2./3.
Jahrhunderts , AHC 6, .1974, pp. 241-273. Fraenkel, P. "Histoire sainte et hérésie chez saint
Epiphane de Salami ne d'après le tome I du Panarion 1 , RThPh 12, 1962, pp. 175-191.
3 0 Frend, W.H.C. ' The
Anti-Gnostic Polemics Gnos tic-Manichaean Tradition in Roman
North Africa', JEH 4, 1953, pp. 13-26. —. 'The Gnostic Sects and the Roman Empire 1 , JEH 5, 19 54, pp. 25-37. —. 'Heresy and Schism as Social and National Movements', Studies in Church 37-56. and Gnos is', The Heritage of the eds. D. Neiman s i n d M • S ch c i tk in § History, ed. D. Baker, vol. 9, 1972, pp. Froehlich, K. G.V.
'Montanism Florovsky,
Early Church.
Essays in Honor of the Very Reverend
Gibson,
E.
'Montanism
and
its
Monuments',
Dissertation,
H a r v a r d University, 1974. Grant, R.M. 'Irenaeus and Hellenistic Culture', HTR 42,
1949, pp. 41-51. —. 'Eusebius and Gnostic Origins', Melanges Simon. Paganisme, judaïsme, christianisme, Paris 1978, pp. 195-205. Green, H.A. 'Gnos i s and Gnosticism. A Study in
Methodology', Numen 24, 1977, pp. 9 5-134. Greenslade, S.L. 'Heresy and Schism in the Later Roman
Emp ire', Studies
in Church
History,
ed. D. Baker,
vol. 9, 1972, pp. 1-20. Hägglund, B. 'Die Bedeutung der "régula StTh f idei" 12, als 1958,
Grundlage pp. 1-44. Harnack, A.
theologischer
Aussagen 1 ,
Lehrbuch
der
Dogmengeschichte
I,
Tübingen
1931.
Bibliography —• Zur Quellenkritik des Gnosticismus, Leipzig 1873. G. and Nolte, J. Formen kirchlicher
109
Hasenhüttl,
Ketzer-
bewältigung, Düsseldorf 1976. Hefner, P. 'Theological Methodology and St. Irenaeus', JR
44, 1964, pp. 294-309. H i Igen feId, A. Ketzergeschichte des Urchr istentumus,
Leipzig 1888. H'übner, R.M. 74. Jedin, H. , ed. Handbuch der Kirchengeschichte 1963. Jonas, H. 'A Retrospective View', on Proceedings Gnosticism, of the I, Freiburg 'Die Hauptquelle des Epiphanius (Pan, haer.
65) über Paulus von Samosata', ZKG 90, 1979, pp. 55-
International
Colloquium
Stockholm
August 20-25, 1973, Stockholm 1977, pp. 1-15. Jones, A.H.M. 'Were Ancient Heresies National or Social
Movements in Disguise?', JTS 10, 1959, pp. 280-298. Kantzenbach, F.W. Christentum in der Gesellschaft, Bd. 1,
Alte Kirche und Mittelalter, Hamburg 1975. Koch, G . A. 'A Cr i t ica1 of the Investigat ion A of Epiphan ius ' and
Knowledge
Ebionites:
Translation
Critical Discussion of Panarion 30 ' . University of Pennsylvania, 1976.
Dissertation,
Korschorke, K. Hippolyts Ketzerbekämpfung und Polemik gegen die Gnostiker: seiner 1975. Eine Tendenzkritische Untersuchung 'Refutatio omnium haeresium', Wiesbaden
3 —. Die Polemik
0 der Gnostiker
Anti-Gnostic Polemics gegen das kirchliche
Christentum, Leiden, 1978. Kraft, H. CNRS 247. Lebreton, J.
1
'Die
lyoner Märtyrer und der Montanismus', (177), Colloques 1977), September Paris 1978, pp.
Les 233-
martyrs de Lyon (20-23
internationaux du
Le désaccord
de
la foi populaire
et de la
siècle', RHE 19, 1923, pp. 481-506 and RHE 20, 19 24, pp. 5-37.
—.
'Le désaccord savante', and U.
entre la foi populaire et la the'ologie de l'Eglise 2, eds. A. Fliehe
Histoire
Martin, Paris 1948, pp. 361-374. A. 'Y a-t-il des traces dans de le la Péri ed. M. polémique Archon Krause,
Le
Boulluec,
antignostique
d *Irénée
d'Origène?', Gnosis and Gnosticism, Leiden 1977, pp. 138-147. Le Goff, ed. Hérésies et sociétés dans
1 1 Europe
pré-
industrielle lle-18e siècle, Paris/La Haye 1968. Le Nain de Tillemont, S. ecclésiastique 1705. Lipsius, R.A. Die Quellen der ältesten Ketzergeschichte neu untersucht, Leipzig 1875. —Loi, Zur Quellenkritik des Epiphanius, Wien 1865. V.
1
Mémoires pour servir a l'histoire six premiers siècles. X, Paris
des
L'identité su
letteraria
di
Ippolito
di
Roma',
Ricerche
Ippolito
(collab.), Stud ia Ephemeridis
Augustianianum 13, Roma 1977, pp. 67-88 .
Bibliography —. 'La problematica storica-letteraria su Ippolito
111 di
Roma', Ricerche su Ippolito, pp. 9-16. Markus, R.A. Studies 'Christianity Changing in Church 21-36. The S igni f icance of History to Gnosticism', VC 8, and Dissent in ed. in Roman North Work', Vol. 9,
Africa: 1972, pp. .
Perspectives History,
Recent
D. Baker,
'Pleroma and Fulf ilment. in St. 1954, pp. 193-224.
Irenaeus' Opposition
Moltmann, J. The Crucified God, London 1974. Momigliano, A. 'Popular Religious Beliefs and the Late
Roman Historians', Studies
in Church History, eds.
G.J. Cuming and D. Baker, vol. 8, 19 72, pp. 1-18. Moutsoulas, E. 'Der Begriff Studia 86-107. "Häresie" bei Epiphanius TU 93, von
Salamis', 1966, pp.
Patristica
VIII,
Berlin
Nautin, P. ' Les fragments de Basilide sur la souffrance ', Mélanges d'histoire des religions offerts a H.-C. Puech, Paris 1974, pp. 393-403. —. Hippolyte et Josipe. littérature 1947. —. 'Histoire Problèmes des et dogmes et des sacrements des chrétiens', religions, Sciences Contribution à l'histoire de la du troisième siècle, Paris
chrétienne
méthodes
d'histoire
École pratique des Hautes Études, Section religieuses, Paris 1968, pp. 177-191. —. Lettres et écrivains chrétiens des Ile
et
Ille
siècles, Paris 1961.
112
Anti-Gnostic Polemics Origine I. Sa vie et son oeuvre, Paris 1977.
—.
'Saint
Épiphane
de
Salamine',
DHGE
XV,
Paris
1963,
cols. 617-631. Pagels, E.H. '"The Demiurge and His Archons" - A Gnostic
View of the Bishop and Presbyters?' , HTR 69, 1976, pp. 301-324. Paulsen, H. 'Die Bedeutung des Montanismus für die
Herausbildung des Kanons', VC 32, 1978, pp. 19-52.
Composition
in Adversus Haereses Book One', VC 30,
1976, pp. 193-200. Peterson, E. Der Monotheismus als politisches Problem,
Leipzig 1935. Powell, D. 'Tertullianists and Cataphrygians', VC 29, 1975, pp. 33-54. Puech, H.-C. En quête de la gnose, 2 vols., Paris 1978. Reynders, D.B. Lexique latine, comparé du texte et grec et des de
versions
arménienne
syriaque
1'"Adversus haereses" de saint Irénée, CSCO 141142, Louvain 1954. —. 'Optimisme et théocentrisme 8, 1936, pp. 225-252. —. 'La polémique de saint Irénée. RTAM 7, 1935, pp. 5-27. Richard, M. 'Bibliographie de la controverse', PO 27, 1954, pp. 271-272. Méthode et principes', chez saint Irénée', RTAM
Bibliography 125 —. 'Hippolyte de Rome', DS VII, Paris 1968 , cols. 531571. Riggi, C. ' La figura di Epifanio nel IV secolo', Studia
Patristica VIII, TU 93, Berlin 1966, pp. 86-107. —. 'Il termine "ha i res is" nel1' accezione di Epifanio di Salamina (Panarion t. I; De Fide)', Sales ianum 29, 1967, pp. 3-27. —. Epifanio Contro Mani, Roma 19 67. K. DieGnosis. Wesen und Geschichte einer
Rudolph,
spätantiken Religion, Göttingen 1977. Sagnard, F.-M.-M. La gnose valentinienne et le témoignage
de saint Irénée, Paris 1974. Sanders, E.P., ed. Jewish and Christian Self-Definition, in the Second
Vol. I :
The Shaping of Christianity
Schenke,
H.-M.
'Die
Relevanz
der
Kirchenväter
die
Erschliessung der Nag-Hammadi Texte', Das Korpus der griechischen christlichen Schriftsteller. Gegenwart, Zukunft, eds. TU 120, Berlin 1977, pp. Schneemelcher, W. 'Epiphanius 209-218. von Salamis ' , RAC 5, Historie, J. Irmscher and K. Treu,
Stuttgart, 1960, pp. 909-927. 'Notizen', ZKG 68, 19 57, pp. 394-395. Schoedel, W.R. 'Philosophy and Rhetoric in the Adversus
Haereses of Irenaeus', VC 13, 1959, pp. 22-32. Thouzellier, C. Catharisme et valdéisme au Languedoc, Paris 1969.
3 0 —,
Anti-Gnostic Polemics ed. Le livre des deux principes, SC 198, Paris 1973.
1
Tröger, K.W. Actes
The Attitude of the Gnostic Religion Toward as Viewed in a Variety of sur Perspectives', les textes de Collogue international R. Bare, Québec
Judaism du
Nag Hammadi, ed.
(forthcoming).
Ullmann, W. 'Gnostische und politische Häresie bei Celsus', Theologische Versuche II, eds. J. Rogge and G. Schille, Berlin 1970, pp. 153-18 5.
van Unnik, W.C,
'An Interesting Document of Second Century
Theological Discussion', VC 31, 1977, pp. 196-228.
—.
'De la règle
Mnte
Trpoaö é " i va i
prixe
à<f>sXei v
dans
l'histoire du canon', VC 3, 1949, pp. 1-36.
Villain,
M.
'Ruf in
d'Aquilée.
La
querelle
autour
d'Origène', RechSR 27, 1937, pp. 5-37. Widmann, M. 'Irenaus und se ine theologischen Väter', ZTK
54, 1957, pp. 156-173.
Wisse,
F. Honour
' The
Epi stle Essays
of on
Jude the
in Nag
the
History Texts
of in
Heresiology 1 , 1972, pp.
Hammadi
of Alexander 133-143.
Böhlig,
ed. M.
Krause,
Leiden
—.
'The Nag Hammadi Library 25, 1971, pp. 205-223.
and the Heresiologists ', VC
— .
'The 'Opponents' in the New Testament in the Light of the Nag Hammad i sur Writings', les Actes du Colloque d'août
international
textes de Nag Hammadi (forthcoming).
1978, ed. B. Bare, Québec
This action might not be possible to undo. Are you sure you want to continue? | https://www.scribd.com/doc/155617651/Gerard-Vallee-A-study-In-Anti-Gnostic-Polemics-Irenaeus-Hippolytus-epiphanius-127-Pg | CC-MAIN-2015-48 | en | refinedweb |
KTextEditor
#include <messageinterface.h>
Detailed Description
Message interface for posting interactive Messages to a Document and its Views.
This interface allows to post Messages to a Document. The Message then is shown either the specified View if Message::setView() was called, or in all Views of the Document.
Working with Messages
To post a message, you first have to cast the Document to this interface, and then create a Message. Example:
Definition at line 399 of file messageinterface.h.
Constructor & Destructor Documentation
Default constructor, for internal use.
Definition at line 179 of file messageinterface.cpp.
Destructor, for internal use.
Definition at line 184 of file messageinterface.cpp.
Member Function Documentation
message to the Document and its Views.
If multiple Messages are posted, the one with the highest priority is shown first.
Usually, you can simply forget the pointer, as the Message is deleted automatically, once it is processed or the document gets closed.
If the Document does not have a View yet, the Message is queued and shown, once a View for the Document is created.
- Parameters
-
- Returns
- true, if
messagewas posted. false, if message == 0.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2015 The KDE developers.
Generated on Tue Nov 24 2015 23:10:25 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | http://api.kde.org/4.x-api/kdelibs-apidocs/interfaces/ktexteditor/html/classKTextEditor_1_1MessageInterface.html | CC-MAIN-2015-48 | en | refinedweb |
This chapter discusses Oracle Streams Advanced Queuing (AQ) and the requirements for complex information handling in an integrated environment.
This chapter contains the following topics:
Overview of Oracle Streams AQ
Oracle Streams AQ in Integrated Application Environments
Oracle Streams AQ Client/Server Communication
Multiconsumer Dequeuing of the Same Message
Oracle Streams AQ Implementation of Workflows
Oracle Streams AQ Implementation of Publish/Subscribe
Message Format Transformation
Internet Integration and Internet Data Access Presentation
Interfaces to Oracle Streams AQ
Oracle Streams AQ Features. An agent program or application may act as both a producer and a consumer.
Producers can enqueue messages in any sequence. Messages are not necessarily dequeued in the order in which they are enqueued. Messages can be enqueued without being dequeued.
At a slightly higher level of complexity, many producers enqueue messages into a queue, all of which are processed by one consumer. Or many producers enqueue messages, each message being processed by a different consumer depending on type and correlation identifier.. Also queue tables can be imported and exported. You can use database development and management tools such as Oracle Enterprise Manager to monitor queues.
Requests for service must be decoupled from supply of services to increase efficiency and provide the infrastructure for complex scheduling. Oracle Streams AQ exhibits high performance characteristics as measured by the following metrics:
Number of messages enqueued/dequeued each second
Time to evaluate a complex query on a message warehouse
Time to recover that do not have the resources to handle multiple unprocessed messages arriving simultaneously from external clients or from programs internal to the application.
Communication links between databases that are not available all the time or are reserved for other purposes. If the system falls short in its capacity to deal with these messages immediately, then the application must be able to store the messages until they can be processed.
External clients or internal programs that are not ready to receive messages that have been processed.
Queuing systems need message persistence so they can deal with priorities: messages arriving later can be of higher priority than messages arriving earlier; messages arriving earlier may wait for messages arriving later before actions are executed; the same message may be accessed by different processes; and so on. Priorities also change. Messages in a specific queue can become more important, and so must crucial for business and legal reasons. With the persistence features of Oracle Streams AQ, you can analyze periods of greatest demand or evaluate the lag between receiving and completing an order.
Oracle Streams AQ provides the message management and Database servers..
Figure 1-2 Client/Server Communication Using Oracle Streams AQ
Application A enqueues a request into the request queue. Application B dequeues and processes the request. Application B enqueues the result in the response queue, and can be associated with a queue as subscribers. This causes Streams AQ administrative package.
It cannot be known which subscriber will dequeue which message first, second, and so on, because there is no priority among subscribers. More formally, the order of dequeuing by subscribers is undetermined.
Every message will eventually be dequeued by every subscriber..
Figure 1-8 illustrates the use of Oracle Streams AQ for implementing a publish/subscribe relationship between publisher Application A and subscriber Applications B, C, and D. Application B subscribes with rule "priority=1", application C subscribes with rule "priority > 1", and application D subscribes with rule "priority = 3".
Figure 1-8 Implementing Publish/Subscribe using Oracle Streams AQ
Application A enqueues 3 messages with differing priorities. Application B receives a single message (priority 1), application C receives two messages (priority 2, 3) and application D receives a single message (priority 3). Message recipients are computed dynamically based on message properties and content.
A combination of Oracle Streams AQ features allows publish/subscribe messaging between applications. These features, described later in this guide, include rule-based subscribers, message propagation, the listen feature, and notification capabilities.
Enqueued messages are said to be propagated when they are reproduced on another queue.
This section contains these topics:
In Oracle Streams AQ, message recipients can be either consumers or other queues. If the message recipient is a queue, then message recipients include all subscribers to the queue (one or more of which can be other queues). Thus it is possible to fan out messages to a large number of recipients without requiring them all to dequeue messages from a single queue.
For example, imagine a queue named
Source with subscriber queues
dispatch1@dest1 and
dispatch2@dest2. Queue
dispatch1@dest1 has subscriber queues
outerreach1@dest3 and
outerreach2@dest4, while queue
dispatch2@dest2 has subscriber queues
outerreach3@dest21 and
outerreach4@dest4. Messages enqueued in
Source are propagated to all the subscribers of four different queues.
Messages from different queues can be combined into a single queue. This is also known as funneling. For example, if queue
composite@endpoint is a subscriber to both
funnel1@source1 and
funnel2@source2, then subscribers to
composite@endpoint get all messages enqueued in those queues as well as messages enqueued directly to
composite@endpoint.. Messages sent locally (on the same node) and messages sent remotely (on a different node) all go in the outbox. Similarly, an application dequeues messages from its inbox no matter where the message originates. Oracle Streams AQ facilitates such interchanges, treating all messages on the same basis.
Figure 1-9 Message Propagation in Oracle Streams AQ
Applications often use data in different formats. A transformation defines a mapping from one Oracle.
As Figure 1-10.
Figure 1-10 Transformations in Application Integration-structured message is transmitted over the Internet using HTTP(S).
This section contains these topics:
Internet Message Payloads
Propagation over the Internet Using HTTP
Internet Data Access Presentation (IDAP)
Oracle Streams AQ supports messages of three types: RAW, Oracle object, and Java Message Service (JMS). All these message types can be accessed using SOAP and Web Services. If the queue holds messages in RAW, Oracle object, or JMS format, then XML payloads are transformed to the appropriate internal format during enqueue and stored in the queue. During dequeue, when messages are obtained from queues containing messages in any of the preceding formats, they are converted to XML before being sent to the client.
The message payload type depends on the queue type on which the operation is being performed:
The contents of RAW queues are raw bytes. You must supply the hex representation of the message payload in the XML message. For example,
<raw>023f4523</raw>.
For Oracle object type.
Example 1-1 A Queue Type and its XML Equivalent XML elements, depending on the JMS type. IDAP supports queues or topics with the following JMS types:
TextMessage
MapMessage
BytesMessage
ObjectMessage
JMS queues with payload type
StreamMessage are not supported through IDAP., for example Web browsers, can be used. The Web server/Servlet Runner hosting the Oracle Streams AQ servlet interprets the incoming XML messages. Examples include Apache/Jserv or Tomcat. The Oracle Streams AQ servlet connects to the Oracle Database server and performs operations on the users' queues.
Figure 1-11 Architecture for Performing Oracle Streams AQ Operations Using HTTP
Internet Data Access Presentation (IDAP) uses the Content-Type of
text/xml to specify the body of the SOAP request. XML provides the presentation for IDAP request and response messages as follows:
All request and response tags are scoped in the SOAP namespace.
Oracle Streams AQ operations are scoped in the IDAP namespace.
The sender includes namespaces in IDAP elements and attributes in the SOAP body.
The receiver processes IDAP messages that have correct namespaces. For requests with incorrect namespaces, the receiver returns an invalid request error.
The SOAP namespace has the value
The IDAP namespace has the value)
This section contains these topics:
Other Oracle Streams AQ Features
The following features apply to enqueuing messages:
Enqueue an Array of Messages
Subscription and Recipient Lists
Priority and Ordering of Messages in Enqueuing
Time Specification and Scheduling
Asynchronous Notification.
You can also specify a default list of recipients who can retrieve all the messages from a specific queue. These implicit recipients become subscribers to the queue by being specified in the default list. If a message is enqueued without specifying any explicit recipients, then the message is delivered to all the designated subscribers.
A rule-based subscriber is one that has a rule associated with it in the default recipient list. A rule-based subscriber. positions a message in relation to other messages. Further, if several consumers act on the same queue, then a consumer gets the first message that is available for immediate consumption. A message that is in the process of being consumed by another consumer is skipped.
Messages belonging to one queue can be grouped to form a set that can only be consumed by one user at a time. This requires that the queue be created in a queue table that is enabled for message grouping. All messages belonging to a group must.
This feature enables applications to communicate with each other without having to be connected to the same database or the same queue. Messages can be propagated from one Oracle Streams AQ to another, irrespective of whether the queues are local or remote. Propagation is accomplished using database links and Oracle Net Services.
Applications can mark the messages they send with a custom identification. Oracle Streams AQ also automatically identifies the queue from which a message was dequeued. This allows applications to track the pathway of a propagated message or a string message within the same database.
Delay interval or expiration intervals can be specified for an enqueued message, thereby providing windows of execution. A message can be marked as available for processing only after a specified time elapses (a delay time) and must.
The asynchronous notification feature allows clients to receive notification of a message of interest. The client can use it to monitor multiple subscriptions. The client need not be connected to the database to receive notifications regarding its subscriptions.
Clients can use the Oracle Call Interface (OCI) function
OCISubscriptionRegister or the PL/SQL procedure
DBMS_AQ.REGISTER to register interest in messages in a queue.
The following features apply to dequeuing messages:
Dequeue an Array of Messages
Navigation of Messages in Dequeuing
Optimization of Waiting for the Arrival of Messages
Optional Transaction Protection
Listen Capability (Wait on Multiple Queues)
Dequeue Message Header with No Payload.
A message can be retrieved by multiple recipients without the need for multiple copies of the same message. Designated recipients can be located locally or at remote sites..
A dequeue request can either browse or remove a message. If a message is browsed, then it remains available for further processing. If a message is removed, then it is not available more for dequeue requests. Depending on the queue properties, a removed message can be retained in the queue table.
A dequeue request can be applied against an empty queue. To avoid polling for the arrival of a new message, a user can specify if and for how long the request is allowed to wait for the arrival of a message.
A message must be consumed exactly once. If an attempt to dequeue a message fails and the transaction is rolled back, then the message is made available for reprocessing after some user-specified delay elapses. Reprocessing is attempted up to the user-specified limit..
A message may not be consumed within given constraints, such as within the window of execution or within the limits of the retries. If such a condition arises, then the message is moved to a user-specified exception queue., then a dequeue must be used to retrieve the message.
The dequeue mode
REMOVE_NODATA can be used to remove a message from a queue without retrieving the payload. Use this mode to delete a message with a large payload whose content is irrelevant.
The following features apply to propagating messages:
Automatic Coordination of Enqueuing and Dequeuing
Propagation of Messages with LOBs
Enhanced Propagation Scheduling Capabilities
Recipients can be local or remote. Because Oracle Database does not support distributed object types, remote enqueuing or dequeuing using a standard database link does not work. However, you can use Oracle Streams AQ message propagation to enqueue to a remote queue. For example, you can connect to database X and enqueue the message in a queue,
DROPBOX, located in database X. You can configure Oracle Streams AQ so that all messages enqueued in
DROPBOX are automatically propagated to another queue in database Y, regardless of whether database Y is local or remote. Oracle Streams AQ automatically checks if the type of the remote queue in database Y is structurally equivalent to the type of the local queue in database X and propagates the message.
Recipients of propagated messages can be applications or queues. If the recipient is a queue, then the actual recipients are determined by the subscription list associated with the recipient queue. If the queues are remote, then messages are propagated using the specified database link. AQ-to-AQ message propagation is directly supported; propagation between Oracle Streams AQ and other message systems, such as WebSphere MQ and TIB/Rendezvous, is supported through Messaging Gateway.
Propagation handles payloads.
Oracle Streams AQ allows messages to be enqueued in queues that can then be propagated to different messaging systems by third-party propagators. If the protocol number for a recipient is in the range 128 - 255, then the address of the recipient is not interpreted by Oracle Streams AQ and so the message is not propagated by the Oracle Streams Oracle Streams AQ to propagate messages to and from third-party messaging systems is through Messaging Gateway, an Enterprise Edition feature. Messaging Gateway dequeues messages from an Oracle Streams AQ queue and guarantees delivery to a third-party messaging system such as Websphere MQ (MQSeries). Messaging Gateway can also dequeue messages from third-party messaging systems and enqueue them to an Oracle Streams AQ queue.
This section contains these topics:
Queue Monitor Coordinator
Oracle Internet Directory
Oracle Enterprise Manager Integration
Support for Statistics Views
Structured and XMLType Payloads
Retention and Message History
Tracking and Event Journals
Queue-Level Access Control
Support for Oracle Real Application Clusters
Before release 10.1, the Oracle Streams AQ time manager process was called queue monitor (
QMNn), a background process controlled by setting the dynamic
init.ora parameter
AQ_TM_PROCESSES. Beginning with release buffered queues and other Oracle Streams tasks, however, are not affected by this parameter.
Oracle Internet Directory is a native LDAPv3 directory service built Oracle Enterprise Manager to do the following:
Create and manage queues, queue tables, propagation schedules, and transformations
Monitor your Oracle Streams AQ environment using its topology at the database and queue levels, and by viewing queue errors and queue and session statistics. operators
ExistsNode() and
SchemaMatch()
Specify the operators in subscriber rules or dequeue conditions
The systems administrator specifies the retention duration to retain messages after consumption. Oracle Streams AQ.
If messages are retained, then they can be related to each other. For example, if a message
m2 is produced as a result of the consumption of message
m1,
then m1 is related to
m2. This allows users to track sequences of related messages. These sequences represent event journals, which are often constructed by applications. Oracle Streams AQ is designed to let applications create event journals automatically.
When an online order is placed, multiple messages are generated by the various applications involved in processing the order. Oracle Streams AQ offers features to track interrelated messages independent of the applications that generated them. You can determine who enqueued and dequeued messages, who the users are, and who did what operations.
With Oracle Streams AQ tracking features, you can use SQL
SELECT and
JOIN statements to get order information from
AQ$
queuetablename and the views
ENQ_TRAN_ID,
DEQ_TRAN_ID,
USER_DATA (the payload),
CORR_ID, and
MSG_ID. These views contain the following data used for tracking:
Transaction IDs from
ENQ_TRAN_ID and
DEQ_TRAN_ID, captured during enqueuing and dequeuing.
Correlation IDs from
CORR_ID, part of the message properties
USER_DATA message content that can be used for tracking
The owner of an 8.1-compatible queue can grant or revoke queue-level privileges on the queue. Database administrators can grant or revoke new Oracle Streams AQ system-level privileges to any database user. Database administrators can also make any database user an Oracle Streams AQ administrator.
Oracle Streams AQ can deliver nonpersistent messages asynchronously to subscribers. These messages can be event-driven and do not persist beyond the failure of the system (or instance). Oracle Streams AQ supports persistent and nonpersistent messages with a common API.
An application can specify the instance affinity for a queue table. When Oracle Streams AQ is used with Real Application Clusters and multiple instances, this information is used to partition the queue tables between instances for queue-monitor scheduling. The queue table is monitored by the queue monitors of the instance specified by the user. If an instance affinity is not specified, then the queue tables Oracle Streams AQ propagation jobs running in different instances. If compatibility is set to Oracle8i release 8.1.5 or higher, then an instance affinity (primary and secondary) can be specified for a queue table. When Oracle Streams AQ, then the secondary instance or some available instance takes over the ownership for the queue table.
Oracle Streams AQ.
The following information is kept at enqueue for nonrepudiation of the enqueuer:
Oracle Streams AQ agent doing the enqueue
Database user doing the enqueue
Enqueue time
Transaction ID of the transaction doing the enqueue
The following information is kept at dequeue for nonrepudiation of the dequeuer:
Oracle Streams AQ agent doing dequeue
Database user doing dequeue
Dequeue time
Transaction ID of the transaction doing dequeue. | http://docs.oracle.com/cd/B14117_01/server.101/b10785/aq_intro.htm | CC-MAIN-2015-48 | en | refinedweb |
For:
>> If you want to download the whole article in a Word document, here’s the link. <<.
Once you have installed everything, you can create your first .NET Gadgeteer application. You are then presented with a “designer” view of your project: at this time there’s only one motherboard in the designer. Using the toolbox, you can add the modules you are going to use. Once you have all the modules you need, you can either click on the connectors to wire them manually or right-click anywhere on the designer and select “Connect all modules”. If you don’t know anything about hardware, this will show you how to connect modules to the motherboard. Once this is done, you’re ready to start coding!
Click on Program.cs to dive into the source code. At this point there’s only the main entry point, and a debug instruction. However all the objects needed to control the hardware have already been instantiated (in the Program.Generated.Cs file) and can be used right away. For example, if you have a temperature sensor, you can directly subscribe to the MeasurementComplete event, that will fire whenever a new measure is ready.
In our case, we are going to use a timer to regularly measure the meteorological parameters (temperature, pressure, and humidity). Everytime the timer fires, we will ask the different sensors to start measuring and they will notify us through their respective MeasurementComplete event that a new measure is ready.
First register for sensor events:
temperatureHumidity.MeasurementComplete += new TemperatureHumidity.MeasurementCompleteEventHandler(temperatureHumidity_MeasurementComplete);
barometer.MeasurementComplete += new Barometer.MeasurementCompleteEventHandler(barometer_MeasurementComplete);
Then instantiate the timer and start it:
GT.Timer SensorTimer = new GT.Timer(TimeSpan.FromTicks(TimeSpan.TicksPerMinute));
SensorTimer.Tick += new GT.Timer.TickEventHandler(timer_Tick);
SensorTimer.Start();
void timer_Tick(GT.Timer timer)
{
barometer.RequestMeasurement();
temperatureHumidity.RequestMeasurement();
}
At this point, you already have an object that regularly measure meteorological parameters. You can compile and run it directly on the .NET Gadgeteer kit by connecting it to a USB port on your PC. It’s not connected though; we will handle that in the next part.:
DateTime.
Once you’re connected to a network you need to send (and eventually, receive) data from Cosm (Pachube). It’s using a simple REST API and JSON serialization… you just have to pass your API key in the headers. Here’s how to send a simple webrequest posting data to your feed:
public void Post(IFeedItem item)
{
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(this.AssociatedAccount.EventsUrl + this.Id.ToString());
req.Method = "PUT";
req.ContentType = "application/json";
req.UserAgent = "NetduinoPlus";
req.Headers.Add("X-PachubeApiKey", this.AssociatedAccount.ApiKey);
req.Timeout = 1000;
if (this.AssociatedAccount.HttpProxy != null)
req.Proxy = this.AssociatedAccount.HttpProxy;
string content = item.ToJson(this.Id);
Debug.Print(content);
byte[] postdata = System.Text.Encoding.UTF8.GetBytes(content);
req.ContentLength = postdata.Length;
try
{
using (Stream s = req.GetRequestStream())
{
s.Write(postdata, 0, postdata.Length);
}
using (WebResponse resp = req.GetResponse())
{
using (Stream respStream = resp.GetResponseStream())
{
byte[] respBytes = new byte[respStream.Length];
int length = respStream.Read(respBytes, 0, respBytes.Length);
string respString = new string(System.Text.Encoding.UTF8.GetChars(respBytes));
Debug.Print(respString);
}
}
}
catch (Exception ex)
{
Debug.Print("exception : " + ex.Message);
}
finally
{
req.Dispose();
}
}
And the sources are on codeplex:
The use of this package is really easy: first, instantiate an Account object and pass it your API key. Then you can create as many feeds as you want (I only have one here, with 3 datasets), and use the Post() method to send directly data to it:
Account pachube = new Account("miQnUMICT-KjWhDwOxWQuycKTiuSAKxCWGgrdTNyaUNEND0g");
Feed meteoFeed = new Feed(44741, pachube);
meteoFeed.Post(new DoubleFeedItem("barometer", sensorData.Pressure));
There you have it – a connected meteo station, in a few lines of code. Now, let’s make use of that screen!.
The first step of the creation of the user interface was to work with a designer (Michel, who’s part of my team) to create a bitmap of the finished interface. Then, Michel generated all the images necessary to compose the interfaces: all the numbers, the different meteorological symbols, the fake copper plate, etc.
I will store the graphical assets on an SD Card, and load them at startup : here’s how to check that the SD Card is present and mounted:
if (sdCard.IsCardInserted)
{
if (!sdCard.IsCardMounted)
sdCard.MountSDCard();
Debug.Print("sd card mounted");
}
And the files are accessed this way:
var storage = sdCard.GetStorageDevice();
for (int i = 0; i < 10; i++)
{
Numbers[i] = storage.LoadBitmap(i.ToString() + ".bmp", Bitmap.BitmapImageType.Bmp);
}
Then there is a very simple API to display an image:
display.SimpleGraphics.DisplayImage(MetalCover, 0, 0);
The only thing I have to do to build my user interface is to combine them in the right order to draw the bitmaps on top of one another. The problem is that, as you have correctly read, I’m using… bitmaps. .NET Micro Framework only supports JPEG, GIF and Bitmap images . Luckily for me the Bitmap class has a method that I can call to specify that a specific color should be treated as transparent… so I can lay a square bitmaps on top of one another and using a specific color (a really ugly pink color in my case) give them a square look: here’s an example of a clock number, and the corresponding transparency API:
Color Transparent = ColorUtility.ColorFromRGB(255, 0, 255);
MetalCover = storage.LoadBitmap("FondTransp.bmp", Bitmap.BitmapImageType.Bmp);
MetalCover.MakeTransparent(Transparent);
The trick with such a technique (which is not very elegant, but quit effective) to build an interface is to have pixel perfect assets and placements. Work with your designer and be precise!
The last thing is to draw the graph. It’s not very complicated, either algorithmically or technically: I just store a list of measures in memory, discarding them as new ones come in, much like a queue with a fixed length. Then I go through every element, drawing them backwards from the latest one, and making sure I never get out of the rectangle in which I’m allowed to draw. Here’s the code for the whole Graph class:
class Graph
{
public ArrayList Values;
public double Max;
public double Min;
public Graph(int Capacity)
{
Values = new ArrayList();
Values.Capacity = Capacity;
}
public void Add(double measure)
{
if (Values.Count == Values.Capacity)
{
Values.RemoveAt(0);
}
Values.Add(measure);
UpdateMinMax();
}
private void UpdateMinMax()
{
int index = Values.Count - 1;
this.Min = (double)Values[index] - 3.0;
this.Max = (double)Values[index] + 3.0;
}
public void Draw(Display_T35 display, uint x, uint y, uint height) // width == capacity
{
double a,b;
uint offset;
if (Max > Min)
a = -(height / (Max - Min));
else
a = 0;
b = y - a * Max;
for (int i = 0; i < Values.Count; i++)
{
int index = Values.Count - 1 - i;
if (a == 0)
offset = y + (height / 2);
else
offset = (uint)System.Math.Round(a * (double)Values[index] + b);
DrawPoint(display, (uint)(x + Values.Capacity - i), offset);
}
}
public void DrawPoint(Display_T35 display, uint x, uint y)
{
display.SimpleGraphics.SetPixel(Gadgeteer.Color.Black, x, y);
display.SimpleGraphics.SetPixel(Gadgeteer.Color.Black, x + 1, y);
display.SimpleGraphics.SetPixel(Gadgeteer.Color.Black, x, y + 1);
display.SimpleGraphics.SetPixel(Gadgeteer.Color.Black, x + 1, y + 1);
}
public double Mean(int Count)
{
if (Count > 0 && Values.Count > 0)
{
var max = (Count > Values.Count) ? Values.Count : Count;
double tmp = 0.0;
for (int i = 0; i < max; i++)
{
tmp += (double)Values[max - i - 1];
}
tmp /= max;
return tmp;
}
else
return 0;
}
}
int pixel = (int)(System.Math.Round((double)(40 - Status.Temperature) * 16.3)) - 38;
if (pixel < 0)
pixel = 0;
Bitmap tmpCopy = new Bitmap(TemperatureIndicator.Width, TemperatureIndicator.Height);
tmpCopy.DrawImage(0, 0, TemperatureIndicator, 0, pixel, 34, 109);
display.SimpleGraphics.DisplayImage(tmpCopy, 137, 120);
There we have the user interface for a connected meteo station… if you look at connected object however, you will guess that most of them need some configuration (network, usernames…). The next part is about how to build a small webserver within the meteo station, with a small form to configure its parameters (timezone, etc)..
void webevt_WebEventReceived(string path, WebServer.HttpMethod method, Responder responder)
{
responder.Respond(LoadHtmlFile(path), "text/html");
}
The Responder class makes it really easy to send an HTTP response, and takes a byte array as the first argument of its Respond method. This byte array is generated from an HTML file stored in the SD Card:
private byte[] LoadHtmlFile(string path)
{
byte[] result = null;
var storage = sdCard.GetStorageDevice();
using (var stream = storage.OpenRead(wwwRoot + path))
{
using (StreamReader sr = new StreamReader(stream))
{
string content = sr.ReadToEnd();
result = new System.Text.UTF8Encoding().GetBytes(content);
}
}
return result;
}
Once the form is submitted with the right parameters, it’s caught with another WebEvent, and the configuration is saved to the SD Card before sending the response:
void setparams_WebEventReceived(string path, WebServer.HttpMethod method, Responder responder)
{
Config c = new Config();
c.Name = (string)responder.UrlParameters["Name"];
c.UtcOffset = int.Parse(HttpUtility.UrlDecode((string)responder.UrlParameters["UtcOffset"]));
SaveConfigToSdCard(c);
responder.Respond(LoadHtmlFile(path), "text/html");
}
The HttpUtility class that I’m using is from the community and can be found here:
The SaveConfigToSdCard method is very simple:
private void SaveConfigToSdCard(Config c)
{
var storage = sdCard.GetStorageDevice();
using (var stream = storage.OpenWrite(configFilePath))
{
using (StreamWriter sw = new StreamWriter(stream))
{
sw.Write(c.ToString());
}
}
}
The Config object stores all the parameters :
public class Config
{
public string Name { get; set; }
public int UtcOffset { get; set; }
public static Config FromString(string str)
{
Config res = new Config();
string[] settings = str.Split('\n');
res.Name = settings[0].Split(':')[1];
res.UtcOffset = int.Parse(settings[1].Split(':')[1]);
return res;
}
public override string ToString()
{
return "name:" + Name + "\n utcoffset:" + UtcOffset.ToString();
}
}
As you see, the serialization/deserialization mechanism is as simple as possible. Since we’re on an embedded system with limited capabilities, the simpler the code, the faster it runs: no need for fancy XML or JSON serialization here.
You could use the same kind of mechanism to build an API for your object so that other devices could connect directly to it. In the present case, it’s not necessary since we publish feeds to Cosm: we can use Cosm APIs!
As anything with a configuration, one has to think of a mechanism for resetting to factory defaults. We could do that with a simple button and a timer, it’s not really interesting to detail it here, but you’ll have to think about it!
There are two ways to build a “physical” object around an electronic kit such as the gadgeteer: either by trial and error, at the risk of using a lot more material than what’s needed, or by being modeling it either on paper or using 3D tools. The pen & paper is a simple approach so it works for simple objects, but the gadgeteer teams contributed 3D models of most parts of the kits, and even if you can’t afford professional tools such as SolidWorks (personally, I can’t) there are some free tools that work really great. While researching the subject to prepare this article I’ve used 123D, but I also gave a shot at FreeCad, SketchUp, and even had Blender installed (useful for 3D format transformation). These tools manipulate different formats, and for example SketchUp doesn’t support STEP files natively (this is the format used for the gadgeteer models). So you want to be careful about the tool you choose depending on the input format for models, and the output format you wish to have. Especially if you want to model some parts using a 3D printer!
Getting used to a CAD tool takes some time. You’re not going to make complex parts and assemblies overnight, and you’ll spend quite a few hours learning keyboard shortcuts, navigation, and good practices to be efficient. What I found out is that it’s more efficient to follow a couple of tutorials, rather than try to prototype directly the object you need. Then work on individual parts. Then assemble them. Don’t try to do the whole thing in one project directly. Here’s an example of the work in progress for the internals of my meteo station:
It’s your call to see what method you prefer. For most of my projects I’ve been experimenting directly with hardware, but in this case, since I didn’t want to damage the case, and since I wanted to have a reuseable design, I decided to go with the CAD tools.
In this article, we’ve gone through the process of prototyping every function of a connected object, from the sensors, to the network, and configuration. Hopefully you now understand:!
J’ai souvent l’occasion de croiser des développeurs qui ont envie de coder mais pas vraiment d’idée d’applications – on dirait même que c’est une tendance car la plupart des hackathons auxquels j’ai assisté récemment commencent par des pitchs d’idées, avant de se lancer dans le code, pour constituer des équipes.
On ressort souvent les grands classiques du genre “branche toi sur une source opendata!” mais la vérité c’est que ces sources sont rarement dynamiques (paye ton bon gros fichier CSV statique) et que ce n’est pas évident d’en faire quelque chose d’utile (une énième copie d’iToilettes?).
Il existe pourtant sur le web quelques “annuaires” d’APIs, officielles ou pas, payantes ou pas, mais qui sont des mines d’idées, surtout quand on les combine entre elles!
Presque sur le même principe, certains sites proposent des outils pour créer des APIs: ils ont souvent une liste de références intéressantes de gens qui utilisent leur backend: cet article de ProgrammableWeb en liste 11…
Enfin, n’hésitez pas à chercher dans les petites lignes en bas de vos sites favoris… il se pourraient bien qu’il y ait une API accessible sur laquelle le site ne fait pas vraiment de pub…
Dernier point (et pas des moindres) : toutes les APIs ont des conditions d’utilisation, et ce n’est pas parce qu’une API est ouverte sur le net, qu’elle est automatiquement libre de droit et utilisable par n’importe qui n’importe comment… si vous voulez évitez les lettres de type “cease & desist” des ayant-droits des données… lisez les petites lignes!
Happy summer coding! | http://blogs.msdn.com/b/pierreca/archive/2012/07.aspx?PostSortBy=MostComments&PageIndex=1 | CC-MAIN-2015-48 | en | refinedweb |
{-# LANGUAGE NoImplicitPrelude , CPP #-} ----------------------------------------------------------------------------- -- | -- Module : Math.Combinatorics.Species.Util.Interval -- Copyright : (c) Brent Yorgey 2010 -- License : BSD-style (see LICENSE) -- Maintainer : byorgey@cis.upenn.edu -- Stability : experimental -- -- A simple implementation of intervals of natural numbers, for use in -- tracking the possible sizes of structures of a species. For -- example, the species @x + x^2 + x^3@ will correspond to the -- interval [1,3]. -- ----------------------------------------------------------------------------- module Math.Combinatorics.Species.Util.Interval ( -- * The 'NatO' type NatO, omega, natO -- * The 'Interval' type , Interval, iLow, iHigh -- * Interval operations , decrI, union, intersect, elem, toList -- * Constructing intervals , natsI, fromI, emptyI, omegaI ) where #if MIN_VERSION_numeric_prelude(0,2,0) import NumericPrelude hiding (min, max, elem) import Prelude (min, max) #else import NumericPrelude import PreludeBase hiding (elem) #endif import qualified Algebra.Additive as Additive import qualified Algebra.Ring as Ring -- | . data NatO = Nat Integer | Omega deriving (Eq, Ord, Show) -- | The infinite 'NatO' value. omega :: NatO omega = Omega -- | Eliminator for 'NatO' values. natO :: (Integer -> a) -> a -> NatO -> a natO _ o Omega = o natO f _ (Nat n) = f n -- | Decrement a possibly infinite natural. TZero and omega are both -- fixed points of 'decr'. decr :: NatO -> NatO decr (Nat 0) = Nat 0 decr (Nat n) = Nat (n-1) decr Omega = Omega -- | 'NatO' forms an additive monoid, with zero as the identity. This -- doesn't quite fit since Additive.C is supposed to be for groups, -- so the 'negate' method just throws an error. But we'll never use -- it and 'NatO' won't be directly exposed to users of the species -- library anyway. instance Additive.C NatO where zero = Nat 0 Nat m + Nat n = Nat (m + n) _ + _ = Omega negate = error "naturals with omega only form a semiring" -- | In fact, 'NatO' forms a semiring, with 1 as the multiplicative -- unit. instance Ring.C NatO where one = Nat 1 Nat 0 * _ = Nat 0 _ * Nat 0 = Nat 0 Nat m * Nat n = Nat (m * n) _ * _ = Omega fromInteger = Nat -- |. data Interval = I { iLow :: NatO -- ^ Get the lower endpoint of an 'Interval' , iHigh :: NatO -- ^ Get the upper endpoint of an 'Interval' } deriving Show -- | Decrement both endpoints of an interval. decrI :: Interval -> Interval decrI (I l h) = I (decr l) (decr h) -- | The union of two intervals is the smallest interval containing -- both. union :: Interval -> Interval -> Interval union (I l1 h1) (I l2 h2) = I (min l1 l2) (max h1 h2) -- | The intersection of two intervals is the largest interval -- contained in both. intersect :: Interval -> Interval -> Interval intersect (I l1 h1) (I l2 h2) = I (max l1 l2) (min h1 h2) -- | Intervals can be added by adding their endpoints pointwise. instance Additive.C Interval where zero = I zero zero (I l1 h1) + (I l2 h2) = I (l1 + l2) (h1 + h2) negate = error "Interval negation: intervals only form a semiring" -- | Intervals form a semiring, with the multiplication operation -- being pointwise multiplication of their endpoints. instance Ring.C Interval where one = I one one (I l1 h1) * (I l2 h2) = I (l1 * l2) (h1 * h2) fromInteger n = I (Nat n) (Nat n) -- | Test a given integer for interval membership. elem :: Integer -> Interval -> Bool elem n (I lo Omega) = lo <= fromInteger n elem n (I lo (Nat hi)) = lo <= fromInteger n && n <= hi -- | Convert an interval to a list of Integers. toList :: Interval -> [Integer] toList (I Omega Omega) = [] toList (I lo hi) | lo > hi = [] toList (I (Nat lo) Omega) = [lo..] toList (I (Nat lo) (Nat hi)) = [lo..hi] -- | The range [0,omega] containing all natural numbers. natsI :: Interval natsI = I zero Omega -- | Construct an open range [n,omega]. fromI :: NatO -> Interval fromI n = I n Omega -- | The empty interval. emptyI :: Interval emptyI = I one zero -- | The interval which contains only omega. omegaI :: Interval omegaI = I Omega Omega | http://hackage.haskell.org/package/species-0.3.2.3/docs/src/Math-Combinatorics-Species-Util-Interval.html | CC-MAIN-2015-48 | en | refinedweb |
« Using AutoCAD 2009's new transient graphics API to show point clouds from F# | Main | A simple taxonomy of programming languages »
Getting the full path of the active drawing in AutoCAD using .NET
I had a question come in by email about how to find out the full path of a drawing open inside AutoCAD. Here's some C# code that does just this:
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
namespace PathTest
{
public class Commands
{
[CommandMethod("PTH")]
public void DrawingPath()
{
Document doc =
Application.DocumentManager.MdiActiveDocument;
HostApplicationServices hs =
HostApplicationServices.Current;
string path =
hs.FindFile(
doc.Name,
doc.Database,
FindFileHint.Default
);
doc.Editor.WriteMessage(
"\nFile was found in: " + path
);
}
}
}
Here's what happens when you run the PTH command, with a file opened from an obscure location, just to be sure:
Command: pth
File was found in: C:\Temp\test.dwg
Update:
Tony Tanzillo wrote in a comment:
I think you could also read the Filename property of the Database to get the full path to the drawing, keeping in mind that if the drawing is a new, unsaved document, the Filename property returns the .DWT file used to create the new drawing.
Thanks, Tony - shame on me for missing the obvious. In my defence, I wrote the post in between meetings in Las Vegas, if that's any excuse. :-)
March 13, 2008 in AutoCAD, AutoCAD .NET | Permalink
TrackBack
TrackBack URL for this entry:
Listed below are links to weblogs that reference Getting the full path of the active drawing in AutoCAD using .NET:
Hello Sir,
I am trying to link AutoCAD LT 2008 with Excel spreadsheet via VB.Net .
using data link manager option provided in the Tools.
Can u sggest me how can I link an excel sheet with autocad so that
wat changes we made in excel sheet it will reflect in AutoCAD automatically.
Programming Language: VB.net
OS: XP
Machine: 32bit
Please revert me back, I wil be thankful to you.
Posted by: Prabhav Mishra | Mar 14, 2008 9:28:15 AM
Hi Kean.
I think you could also read the Filename property of the Database to get the full path to the drawing, keeping in mind that if the drawing is a new, unsaved document, the Filename property returns the .DWT file used to create the new drawing.
Posted by: Tony Tanzillo | Mar 14, 2008 8:05:48 PM
Thanks, Tony - that would certainly be much easier. :-)
Kean
Posted by: Kean | Mar 14, 2008 8:15:51 PM
Hey - What happens in Vegas stays in Vegas :P
Posted by: Tony Tanzillo | Mar 15, 2008 3:49:52 AM
Maybe what's coded in Vegas should also stay in Vegas? ;-)
Posted by: Kean | Mar 15, 2008 6:02:59 AM
Kean,
Is it possible to determine the number of drawings open?
Posted by: Rob | Mar 18, 2008 9:15:47 AM
This should get you what you need:
Autodesk.AutoCAD.ApplicationServices.Application.DocumentManager.Count
Kean
Posted by: Kean | Mar 18, 2008 4:13:57 PM | http://through-the-interface.typepad.com/through_the_interface/2008/03/getting-the-ful.html | crawl-001 | en | refinedweb |
« December 2007 | Main | February 2008 »
Adding aliases for custom AutoCAD commands
I had an interesting question come in by email and thought I'd share it via this post. To summarise the request, the developer needed to allow command aliasing for their custom commands inside AutoCAD. The good news is that there's a standard mechanism that works for both built-in and custom commands: acad.pgp.
acad.pgp is now found in this location on my system:
C:\Documents and Settings\walmslk\Application Data\Autodesk\AutoCAD 2008\R17.1\enu\Support\acad.pgp
We can edit this text file to add our own command aliases at the end:
NL, *NETLOAD
MCC, *MYCUSTOMCOMMAND
Here we've simply created an alias for the standard NETLOAD command (a simple enough change, but one that saves me lots of time when developing .NET modules) and for a custom command called MYCUSTOMCOMMAND. If I type NL and then MCC at the AutoCAD command-line, after saving the file and re-starting AutoCAD, I see:
Command: NL
NETLOAD
Command: MCC
Unknown command "MYCUSTOMCOMMAND". Press F1 for help.
Now the fact that MCC displays an error is fine - I don't actually have a module loaded that implements the MYCUSTOMCOMMAND command - but we can see that it found the alias and used it (and the MYCUSTOMCOMMAND name could also be used to demand-load an application, for instance).
January 31, 2008 in AutoCAD, Commands | Permalink | Comments (7) | TrackBack
Using F# to simulate hardware behaviour
This post has nothing whatsoever to do with Autodesk software: I just thought some of you might be interested in an old project I worked on during my University studies. I've already mentioned the project briefly in a couple of previous posts.
So, after dusting off the 3.5 floppies I found in the attic, and working out how to extract the code from the gzipped tarballs they contained (thankfully WinZIP took care of that), I started the work to port the code from Miranda to F#. Miranda is still available for many OS platforms, although it has apparently largely been succeeded by the open, committee-defined (originally, at least) functional language, Haskell. But the main point of this exercise was not as much to get the code working as it was for me to become familiar with the F# syntax, and what adjustments might be needed to my thinking for me to code with it.
Before I summarise the lessons learned from the porting exercise, a few words on the original project: I worked on this during 1994-5, with my project partner, Barry Kiernan, supervised by Dr. Steve Hill, from the University of Kent at Canterbury (UKC) Computing Laboratory. I've unfortunately lost contact with both Barry and Steve, so if either of you are reading this, please get in touch!
We adopted Miranda, as this was the functional programming language being taught at UKC at the time. I'm fairly sure that the original code would work with very little modification in Haskell, though, as Miranda is a simpler language and the two appear to have a similar syntax.
The project was to model the behaviour of a Motorola 6800 processor: a simple yet popular, 8-bit processor from the 1970s. The intent behind the project was to validate the use of purely functional programming languages when modelling hardware systems such as micro-processors. What was very interesting was our ability to adjust the level of abstraction: our first implementation used integers to hold op-codes, memory values, register contents, etc., but we later refined it to deal with individual bits of data, moving them around using buses. We also implemented an assembler using Miranda, which was both fun and helpful for testing. That's another strength of functional programming, generally: it is well-suited to language-oriented programming.
I have to admit many specifics of the project are now somewhat vague to me, but I was still able to migrate the code with relatively little effort: despite the fact we're talking about nearly 2,800 lines of source (including comments), it took me several hours, rather than days. I should also point out that I'm certain I haven't used F#'s capabilities optimally - I still consider myself to be a learner when it comes to F# - but I expect I'll come back to the code a tweak it, once in a while.
Here are some notes regarding the migration process:
- F#'s type inference was great: rather than having to define algebraic types for the various functions, these were inferred 100% correctly. The few times I added type information to force the system to understand what I'd done, it turned out to be a logic error I needed to fix.
- F# Interactive was very helpful, although when I first started out with the migration I didn't really use it (I've only since realised how useful a feature it is). I've now come to love the ease with which you can load and test F# code fragments within Visual Studio using F# Interactive.
- For now I've created one monolithic source file. In time I'll probably split this into separate files, but for now this was the simplest way to proceed.
The only big change needed to the code was to remove the use of multiple signatures to define a function's behaviour. With Miranda and Haskell it's standard practice to pattern match at the function signature level. For instance, here's the implementation of a function that performs a "two's complement negate" operation on a list of binary digits:
neg1 :: [num] -> (bool, [num])
neg1 [] = (False, [])
neg1 (1:t)
= (True, (0:comt)) , if inv
= (True, (1:comt)) , otherwise
where
(inv, comt) = neg1 t
neg1 (0:t)
= (True, (1:comt)) , if inv
= (False, (0:comt)) , otherwise
where
(inv, comt) = neg1 t
In F# the pattern matching is performed within the function:
let rec neg1 lst =
match lst with
| [] -> (false, [])
| 1 :: t ->
let (inv, comt) = neg1 t
if inv then
(true, 0::comt)
else
(true, 1::comt)
| 0 :: t ->
let (inv, comt) = neg1 t
if inv then
(true, 1::comt)
else
(false, 0::comt)
| _ -> failwith "neg1 problem!"
These changes were not especially hard to implement, but it did take some time for me to get used to the difference in approach. Note also the final wildcard match ('_') needed to prevent F# from warning me of an incomplete pattern match: this is presumably because the type included in the list was not officially constrained to be binary (0 or 1).
Alright - thanks for bearing with me... here's the F# source file, in case you're still interested. The simplest way to see it in action is to open the file inside Visual Studio (with F# installed, of course), select its entire contents and hit Alt-Enter. This will load it into F# Interactive, at which point you should see some automated test results displayed and be able to run the test assembly language program by typing the following line into the F# Interactive window:
run mult;;
January 29, 2008 in F# | Permalink | Comments (0) | TrackBack
Using F# Asynchronous Workflows to simplify concurrent programming in AutoCAD
In the last post we saw some code that downloaded data - serially - from a number of websites via RSS and created AutoCAD entities linking to the various posts.. As many - if not all - of you are aware, the days of raw processor speed doubling every couple of years are over. The technical innovations that enabled Moore's Law to hold true for half a century are - at least in the area of silicon-based microprocessor design - hitting a wall (it's apparently called the Laws of Physics :-)., as information requests across a network inevitably introduce a latency that can be mitigated by the tasks being run in parallel.
The big problem is that concurrent programming is - for the most-part - extremely difficult to do, and even harder to retro-fit into existing applications. Traditional lock-based parallelism (where locks are used to control access to shared computing resources) is both unwieldy and prone to blocking. New technologies, such as Asynchronous Workflows and Software Transactional Memory, provide considerable hope (and this is a topic I have on my list to cover at some future point).), but that these tasks are indeed independent: we want to wait until they are all complete, but we do not have the additional burden of them communicating amongst themselves or using shared resources (e.g. accessing shared memory) during their execution. does all the heavy lifting. Phew.
Here's the modified F# code, with the modified/new lines coloured in red:
1 // Use lightweight F# syntax
2
3 #light
4
5 // Declare a specific namespace and module name
6
7 module MyNamespace.MyApplication
8
9 // Import managed assemblies
10
11 #I @"C:\Program Files\Autodesk\AutoCAD 2008"
12
13 #r "acdbmgd.dll"
14 #r "acmgd.dll"
15
16 open Autodesk.AutoCAD.Runtime
17 open Autodesk.AutoCAD.ApplicationServices
18 open Autodesk.AutoCAD.DatabaseServices
19 open Autodesk.AutoCAD.Geometry
20 open System.Xml
21 open System.Collections
22 open System.Collections.Generic
23 open System.IO
24 open System.Net
25 open Microsoft.FSharp.Control.CommonExtensions
26
27 // The RSS feeds we wish to get. The first two values are
28 // only used if our code is not able to parse the feed's XML
29
30 let feeds =
31 [ ("Through the Interface",
32 "",
33 "");
34
35 ("Don Syme's F# blog",
36 "",
37 "");
38
39 ("Shaan Hurley's Between the Lines",
40 "",
41 "");
42
43 ("Scott Sheppard's It's Alive in the Lab",
44 "",
45 "");
46
47 ("Lynn Allen's Blog",
48 "",
49 "");
50
51 ("Heidi Hewett's AutoCAD Insider",
52 "",
53 "") ]
54
55 // Fetch the contents of a web page, asynchronously
56
57 let httpAsync(url:string) =
58 async { let req = WebRequest.Create(url)
59 use! resp = req.GetResponseAsync()
60 use stream = resp.GetResponseStream()
61 use reader = new StreamReader(stream)
62 return reader.ReadToEnd() }
63
64 // Load an RSS feed's contents into an XML document object
65 // and use it to extract the titles and their links
66 // Hopefully these always match (this could be coded more
67 // defensively)
68
69 let titlesAndLinks (name, url, xml) =
70 let xdoc = new XmlDocument()
71 xdoc.LoadXml(xml)
72
73 let titles =
74 [ for n in xdoc.SelectNodes("//*[name()='title']")
75 -> n.InnerText ]
76 let links =
77 [ for n in xdoc.SelectNodes("//*[name()='link']") ->
78 let inn = n.InnerText
79 if inn.Length > 0 then
80 inn
81 else
82 let href = n.Attributes.GetNamedItem("href").Value
83 let rel = n.Attributes.GetNamedItem("rel").Value
84 if href.Contains("feedburner") then
85 ""
86 else
87 href ]
88
89 let descs =
90 [ for n in xdoc.SelectNodes
91 ("//*[name()='description' or name()='content' or name()='subtitle']")
92 -> n.InnerText ]
93
94 // A local function to filter out duplicate entries in
95 // a list, maintaining their current order.
96 // Another way would be to use:
97 // Set.of_list lst |> Set.to_list
98 // but that results in a sorted (probably reordered) list.
99
100 let rec nub lst =
101 match lst with
102 | a::[] -> [a]
103 | a::b ->
104 if a = List.hd b then
105 nub b
106 else
107 a::nub b
108 | [] -> []
109
110 // Filter the links to get (hopefully) the same number
111 // and order as the titles and descriptions
112
113 let real = List.filter (fun (x:string) -> x.Length > 0)
114 let lnks = real links |> nub
115
116 // Return a link to the overall blog, if we don't have
117 // the same numbers of titles, links and descriptions
118
119 let lnum = List.length lnks
120 let tnum = List.length titles
121 let dnum = List.length descs
122
123 if tnum = 0 || lnum = 0 || lnum <> tnum || dnum <> tnum then
124 [(name,url,url)]
125 else
126 List.zip3 titles lnks descs
127
128 // For a particular (name,url) pair,
129 // create an AutoCAD HyperLink object
130
131 let hyperlink (name,url,desc) =
132 let hl = new HyperLink()
133 hl.Name <- url
134 hl.Description <- desc
135 (name, hl)
136
137 // Use asynchronous workflows in F# to download
138 // an RSS feed and return AutoCAD HyperLinks
139 // corresponding to its posts
140
141 let hyperlinksAsync (name, url, feed) =
142 async { let! xml = httpAsync feed
143 let tl = titlesAndLinks (name, url, xml)
144 return List.map hyperlink tl }
145
146 // Now we declare our command
147
148 [<CommandMethod("rss")>]
149 let createHyperlinksFromRss() =
150
151 // Let's get the usual helpful AutoCAD objects
152
153 let doc =
154 Application.DocumentManager.MdiActiveDocument
155 let db = doc.Database
156
157 // "use" has the same effect as "using" in C#
158
159 use tr =
160 db.TransactionManager.StartTransaction()
161
162 // Get appropriately-typed BlockTable and BTRs
163
164 let bt =
165 tr.GetObject
166 (db.BlockTableId,OpenMode.ForRead)
167 :?> BlockTable
168 let ms =
169 tr.GetObject
170 (bt.[BlockTableRecord.ModelSpace],
171 OpenMode.ForWrite)
172 :?> BlockTableRecord
173
174 // Add text objects linking to the provided list of
175 // HyperLinks, starting at the specified location
176
177 // Note the valid use of tr and ms, as they are in scope
178
179 let addTextObjects pt lst =
180 // Use a for loop, as we care about the index to
181 // position the various text items
182
183 let len = List.length lst
184 for index = 0 to len - 1 do
185 let txt = new DBText()
186 let (name:string,hl:HyperLink) = List.nth lst index
187 txt.TextString <- name
188 let offset =
189 if index = 0 then
190 0.0
191 else
192 1.0
193
194 // This is where you can adjust:
195 // the initial outdent (x value)
196 // and the line spacing (y value)
197
198 let vec =
199 new Vector3d
200 (1.0 * offset,
201 -0.5 * (Int32.to_float index),
202 0.0)
203 let pt2 = pt + vec
204 txt.Position <- pt2
205 ms.AppendEntity(txt) |> ignore
206 tr.AddNewlyCreatedDBObject(txt,true)
207 txt.Hyperlinks.Add(hl) |> ignore
208
209 // Here's where we do the real work, by firing
210 // off - and coordinating - asynchronous tasks
211 // to create HyperLink objects for all our posts
212
213 let links =
214 Async.Run
215 (Async.Parallel
216 [ for (name,url,feed) in feeds ->
217 hyperlinksAsync (name,url,feed) ])
218
219 // Add the resulting objects to the model-space
220
221 let len = Array.length links
222 for index = 0 to len - 1 do
223
224 // This is where you can adjust:
225 // the column spacing (x value)
226 // the vertical offset from origin (y axis)
227
228 let pt =
229 new Point3d
230 (15.0 * (Int32.to_float index),
231 30.0,
232 0.0)
233 addTextObjects pt (Array.get links index)
234
235 tr.Commit()
You can download the new F# source file from here.
A few comments on the changes:
Lines 57-62 define our new httpAsync() function, which uses GetResponseAsync() - a function exposed in F# 1.9.3.7 - to download the contents of a web-page asynchronously [and which I stole shamelessly from Don Syme, who presented the code last summer at Microsoft's TechEd].
Lines 141-144 define another asynchronous function, hyperlinksAsync(), which calls httpAsync() and then - as before - extracts the feed information and creates a corresponding list of HyperLinks. This is significant: creation of AutoCAD HyperLink objects will be done on parallel; it is the addition of these objects to the drawing database that needs to be performed serially.
Lines 214-217 replace our very simple "map" with something slightly more complex: this code runs a list of tasks in parallel and waits for them all to complete before continuing. What is especially cool about this implementation is the fact that exceptions in individual tasks result in the overall task failing (a good thing, believe it or not :-), and the remaining tasks being terminated gracefully.
Lines 221 and 233 change our code to handle an array, rather than a list (while "map" previously returned a list, Async.Run returns an array).
When run, the code creates exactly the same thing as last time (although there are a few more posts in some of the blogs ;-)
A quick word on timing: I used "F# Interactive" to do a little benchmarking on my system, and even though it's single-core, single-processor, there was a considerable difference between the two implementations. I'll talk more about F# Interactive at some point, but think of it to F# in Visual Studio as the command-line is to LISP in AutoCAD: you can very easily test out fragments of F#, either by entering them directly into the F# Interactive window or highlighting them in Visual Studio's text editor and hitting Alt-Enter.
To enable function timing I entered "#time;;" (without the quotations marks) in the F# Interactive window. I then selected and loaded the supporting functions needed for each test - not including the code that adds the DBText objects with their HyperLinks to the database, as we're only in Visual Studio, not inside AutoCAD - and executed the "let links = ..." assignment in our two implementations of the createHyperlinksFromRss() function (i.e. the RSS command). These functions do create lists of AutoCAD HyperLinks, but that's OK: this is something works even outside AutoCAD, although we wouldn't be able to do anything much with them. Also, the fact we're not including the addition of the entities to the AutoCAD database is not relevant: by then we should have identical data in both versions, which would be added in exactly the same way.
Here are the results:
I executed the code for serial querying and parallel querying twice (to make sure there were no effects from page caching on the measurement):
val links : (string * HyperLink) list list
Real: 00:00:14.636, CPU: 00:00:00.15, GC gen0: 5, gen1: 1, gen2: 0
val links : (string * HyperLink) list array
Real: 00:00:06.245, CPU: 00:00:00.31, GC gen0: 3, gen1: 0, gen2: 0
val links : (string * HyperLink) list list
Real: 00:00:15.45, CPU: 00:00:00.46, GC gen0: 5, gen1: 1, gen2: 0
val links : (string * HyperLink) list array
Real: 00:00:03.832, CPU: 00:00:00.62, GC gen0: 2, gen1: 1, gen2: 0
So the serial execution took 14.5 to 15.5 seconds, while the parallel execution took 3.8 to 6.3 seconds.
January 25, 2008 in AutoCAD, AutoCAD .NET, Concurrent programming, F#, Weblogs | Permalink | Comments (1) | TrackBack
Turning.
January 23, 2008 in AutoCAD, AutoCAD .NET, F#, Weblogs | Permalink | Comments
Understanding the properties of textual linetype segments in AutoCAD
In the last post we looked at using .NET to define complex linetypes containing text segments. In the post I admitted to not knowing specifics about the properties used to create the text segment in the linetype, and, in the meantime, an old friend took pity on me and came to the rescue. :-)
Mike Kehoe, who I've known for many years since we worked together in the Guildford office of Autodesk UK, sent me some information that I've reproduced below. Mike now works for Micro Concepts Ltd., an Autodesk reseller, developer and training centre. He originally wrote the below description in the R12/12 timeframe, but apparently most of it remains valid; and while it refers to the text string used to define a linetype in a .lin file, these are also mostly properties that are exposed via the .NET interface.
Example: Using Text within a Linetype.
A,.5,-.2,["MK",STANDARD,S=.2,R=0.0,X=-0.1,Y=-.1],-.2
The key elements for defining the TEXT are as follows:
"MK" - These are the letters that will be printed along the line.
STANDARD -This tells AutoCAD what text style to apply to the text. NB: This is optional. When no style is defined AutoCAD will use the current text style – TextStyle holds the setting for the current text style.
[Note from Kean: I found the text style to be mandatory when using the .NET interface.]
S=.2 - This is the text scaling factor. However, there are 2 options: (1) when the text style's height is 0, then S defines the height; in this case, 0.2 units; or (2) when the text style's height parameter is non-zero, the height is found by multiplying the text style's height by this number; in this case, the linetype would place the text at 20% of the height defined in the text style.
R=0.0 - This rotates the text relative to the direction of the line; e.g.: 0.0 means there is no rotation. NB: This is optional. When no rotation is defined AutoCAD will assume zero degrees. The default measurement is degrees; NB: you can use r to specify radians, g for grads, or d for degrees, such as R=150g.
[Note from Kean: just like ObjectARX, the .NET interface accepts radians for this value, in SetShapeRotationAt(). A quick reminder: 360 degrees = 2 x PI radians. So you can pass 90 degrees using "System.Math.PI / 2".]
A=0.0 - This rotates the text relative to the x-axis ("A" is short for Absolute); this ensures the text is always oriented in the same direction, no matter the direction of the line. The rotation is always performed within the text baseline and capital height. That's so that you don't get text rotated way off near the orbit of Pluto.
[Note from Kean: to use this style of rotation using .NET, you need to use SetShapeIsUcsOrientedAt() to make sure the rotation is calculated relative to the current UCS rather than the direction of the line.]
X=-0.1 - This setting moves the text just in the x-direction from the linetype definition vertex.
Y=-0.1 – This setting moves the text in the y-direction from the linetype definition vertex.
These 2 settings can be used to center the text in the line. The units are defined from the linetype scale factor, which is stored in system variable LtScale.
Thanks for the information, Mike!
January 11, 2008 in AutoCAD, AutoCAD .NET, Drawing structure, Object properties | Permalink | Comments (0) | TrackBack
Creating a complex AutoCAD linetype containing text using .NET
In my last post we saw some code to create a simple linetype using .NET. As a comment on that post, Mark said:
Kean, i tried you code and it works great and it also got me thinking... is it possible to programmitically add text in as well? I've tried using ltr.SetTextAt(1, "TEST") but so far i've had no luck, any suggestions???
It turned out to be quite a bit more complicated to make a linetype containing text than merely calling SetTextAt() on one of the segments. In order to understand what properties needed setting, I first loaded the HOT_WATER_SUPPLY linetype from acad.lin (using the LINETYPE command):
I then looked at the contents of the linetype table using ArxDbg (the ObjectARX SDK sample that is very helpful for understanding drawing structure). Here's what the SNOOPDB command - defined by the ArxDbg application - showed for the loaded linetype:
From there it was fairly straightforward to determine the code needed to create our own complex linetype containing text segments. I decided to call the new linetype "COLD_WATER_SUPPLY", and have it resemble the original in every way but placing "CW" in the middle segment, rather than "HW" (with the descriptions updated to match, of course). As I've simply copied the properties of an existing linetype, please don't ask me to explain what they all mean. :-)
Here's the C# code:
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.EditorInput;
namespace Linetype
{
public class Commands
{
[CommandMethod("CCL")]
public void CreateComplexLinetype()
{
Document doc =
Application.DocumentManager.MdiActiveDocument;
Database db = doc.Database;
Editor ed = doc.Editor;
Transaction tr =
db.TransactionManager.StartTransaction();
using (tr)
{
// We'll use the textstyle table to access
// the "Standard" textstyle for our text
// segment
TextStyleTable tt =
(TextStyleTable)tr.GetObject(
db.TextStyleTableId,
OpenMode.ForRead
);
// Get the linetype table from the drawing
LinetypeTable lt =
(LinetypeTable)tr.GetObject(
db.LinetypeTableId,
OpenMode.ForWrite
);
// Create our new linetype table record...
LinetypeTableRecord ltr =
new LinetypeTableRecord();
// ... and set its properties
ltr.Name = "COLD_WATER_SUPPLY";
ltr.AsciiDescription =
"Cold water supply ---- CW ---- CW ---- CW ----";
ltr.PatternLength = 0.9;
ltr.NumDashes = 3;
// Dash #1
ltr.SetDashLengthAt(0, 0.5);
// Dash #2
ltr.SetDashLengthAt(1, -0.2);
ltr.SetShapeStyleAt(1, | http://through-the-interface.typepad.com/through_the_interface/2008/01/index.html | crawl-001 | en | refinedweb |
Simplify report views with Groovy's template engine framework
Document options requiring JavaScript are not displayed
Sample code
Help us improve this content
Level: Intermediate
Andrew Glover (aglover@stelligent.com), President, Stelligent Incorporated
15 Feb.
I.
String? But as you can see in Listing 1, Groovy
drops those +s, leaving you with much
cleaner, simpler code.
+
String example1 = "This is a multiline
string which is going to
cover a few lines then
end with a period."
Groovy also supports the notion of here-docs, as shown in
Listing 2. A here-doc is a convenient mechanism for creating formatted
Strings, such as HTML and XML. Notice
that here-doc syntax isn't much different from that of a normal String declaration, except that it requires
Python-like triple quotes.
itext =
"""
This is another multiline String
that takes up a few lines. Doesn't
do anything different from the previous one.
""".
GString
${}
Template engines have been around for a long time and can be found in almost every modern language. Normal Java language has Velocity and FreeMarker, to name two; Python has Cheetah and Ruby ERB; and Groovy has its own engine. See Resources to learn more about template engines.?
lang
${lang}.
length()
lang = "Groovy"
println "I dig any language with ${lang.length()}.
GroovyTestCase
import groovy.util.GroovyTestCase
class <%=test_suite %> extends GroovyTestCase {
<% for(tc in test_cases) {.
person
p
fname
lname.
map
For example, if a simple template had a variable named favlang, I'd have to define a map with a key value of favlang. The key's value would be whatever I
chose as my favorite scripting language (in this case, Groovy, of course).
favlang
In Listing 7, I've defined this simple template, and in
Listing 8, I'll show you the corresponding mapping code.
My favorite dynamic language is ${favlang}
Listing 8 shows a simple class that does five things, two of
which are important. Can you tell what they are?
package com.vanward.groovy.tmpl
import groovy.text.Template
import groovy.text.SimpleTemplateEngine
import java.io.File
class SimpleTemplate{
static void main(args) {
fle = new File("simple-txt.tmpl")
binding = ["favlang": "Groovy"]
engine = new SimpleTemplateEngine()
template = engine.createTemplate(fle).make(binding)
println template.toString()
}
}
Mapping the values for the simple template in Listing 8 was
surprisingly easy.
First, I created a File
instance pointing to the template, simple-txt.tmpl.
File.
binding.
SimpleTemplateEngine.
Person
class Person{
age
fname
lname
String toString(){
return "Age: " + age + " First Name: " + fname + " Last Name: " + lname
}
}
In Listing 10, you can see the mapping code that maps an
instance of the above-defined Person class.
import java.io.File
import groovy.text.Template
import groovy.text.SimpleTemplateEngine
class TemplatePerson{
static void main(args) {
pers1 = new Person(age:12, fname:"Sam", lname:"Covery")
fle = new File("person_report.tmpl")
binding = ["p":pers1]
engine = new SimpleTemplateEngine()."
pers1
When the code in Listing 10 is run, the output will be XML defining the person element, as shown in Listing 11.
.
list
fle = new File("unit_test.tmpl")
coll = ["testBinding", "testToString", "testAdd"]
binding = ["test_suite":"TemplateTest", "test_cases":coll]
engine = new SimpleTemplateEngine().
coll.
println
nfile.withPrintWriter{ pwriter |
pwriter.println("<md5report>")
for(f in scanner){.
nfile
PrintWriter.
<md5report>
<% for(clzz in clazzes) {.
ChecksumClass
The model then becomes the ChecksumClass defined in Listing 15.
class CheckSumClass{
name
value
String toString(){
return "name " + name + " value " + value
}
}
Class definitions are fairly easy in Groovy, no?
Creating a collection
Next, I need to refactor the section of code that previously
wrote to a file -- this time with logic to populate a list with the new
ChecksumClass, as shown in Listing 16.
clssez = []
for(f in scanner){
f.eachLine{ line |
iname = formatClassName(bsedir, f.path)
clsse.
[]
for
line
CheckSumClass
Adding the template mapping
The last thing I need to do is add the template engine-specific
code. This code will perform the run-time mapping and write the
corresponding formatted template to the original file, as shown in
Listing 17.
fle = new File("report.tmpl")
binding = ["clazzes": clzzez]
engine = new SimpleTemplateEngine():
/**
*
*/")
clssez = []
for(f in scanner){
f.eachLine{ line |
iname = formatClassName(bsedir, f.path)
clssez << new CheckSumClass(name:iname, value:line)
}
}
fle = new File("report.tmpl")
binding = ["clazzes": clzzez]
engine = new SimpleTemplateEngine()
template = engine.createTemplate(fle).make(binding)
nfile.withPrintWriter{ pwriter |
pwriter.println template.toString()
}
}
About the author
Andrew Glover is the President of Stelligent Incorporated, a Washington, D.C., metro area company specializing in the construction of automated testing? | http://www.ibm.com/developerworks/java/library/j-pg02155/ | crawl-001 | en | refinedweb |
Some more great news from Microsoft. Popfly is now in beta mode and can be downloaded for free. They've also announced a Mashup contest where you can win a Zune or XBox 360 Halo 3 Special Edition.. "
Pretty cool stuff. Link: Microsoft Announces Popfly Beta and Mashup and Win contest.
I meant to post this a while ago. This is a presentation that I did for the Wisconsin .NET User's Group in Sept, focusing on Microsoft's implementation of Ruby (IronRuby). The session went through the features of Ruby, Ruby on Rails, the DLR architecture, IronRuby, and several different tools to get you up and running.
One of my main thrusts of the presentation is that I believe Ruby is going to be a truly cross-platform and cross-runtime language in the near future. Ruby itself already runs on almost every OS to include Windows, Linux and Mac and via Ruby on Rails can work with almost any database out there (MySql, Postgres, Firebird, Oracle, DB2, Sql Server, Sybase, etc). JRuby is providing a Ruby-runtime that integrates with the Java framework, similar to what IronRuby is doing for Ruby and .NET. Shortly you'll be able to develop a Ruby application and deploy it on any OS...and on any major run-time, be it by itself, on a Java stack or on a .NET stack. On top of that, you're already seeing significant support from the industry at large, to include ThoughtWorks (one of the first commercial JRuby implementations via Mingle), Borland (3rdRail Ruby IDE), JetBrains (Ruby IDE), Microsoft (IronRuby and the DLR), several Google Summer of Code projects, and industry icons such as Martin Fowler and Pragmatic Dave. Pretty powerful...in my humble view, Ruby has reached the tipping point.
DirectSupply hosted the meeting. It was an awesome facility and perfect for our audience (~150 in attendance).
One cool new development since my presentation is that Microsoft has decided to host IronRuby on RubyForge...making a truly open-source implementation. That's great news!
Download IronRuby.ppt
A minor release is out for Adapdev.NET, in support of the new Codus 1.4 release. Changes are primarily around the Adapdev.Data.Schema namespace to include adding support for native Oracle drivers, MySql foreign key retrieval, Sql Server Express 2005, and some small bug fixes.
Download is here.
This is a small update to address several minor bugs, the most important one being around sql generation for composite keys.
Latest binaries are here.
I'm pleased to announce the final release of Codus 1.4!
If you aren't familiar with Codus, it's a comprehensive code generation tool for object-relational mapping. It takes an existing database and automatically generates all of the code for updating, deleting, inserting and selecting records. In addition, it creates web services for distributed programming, strongly-typed collections, and a full set of unit tests.
What's New
This new release brings tons of new features, to include:
Thank Yous
This is an exciting new release that's been a long time coming. Many thanks to everyone that helped make it possible through extensive beta testing and input. In particular I'd like to thank Bhaskar Sharma of HCL Technologies for his code donations around VS2005 generation and n-n mappings. He also had several other improvements that hopefully will make it into a later release, to include support for optimistic locking and better code retention. Other kudos go to the following community members for their bug reports and feedback:
Codus Has Gone Commercial!
Codus is now being released as a commercial product. The support demands have grown substantially over the past year - Codus now averages almost 8,000 downloads per month - so a commercial model is the best option for continuing to grow and improve Codus and meet the increasing support demands. It will also open the doors for the completion of several other super secret products that are currently being worked on.
So, if you've used Codus in the past, I'd encourage you to purchase the latest version - details are here. All of the previous versions are still free and available for download. The commercial version is available in a Single Developer and Site License option. Purchase includes full source code, all minor updates, and priority support. In the near future you'll also get access to a member portal with nightly builds and access to the source code repository.
What's Next?
Two quick releases are planned over the next few months to support generation of Castle ActiveRecord mappings and our emerging Elementary framework. Those will comprise 1.5 and 1.6 respectively (and we may even sneak a few extras in!). After that, the focus is on version 2.0 which will be a ground up rewrite. Focus for 2.0 is:
Current direction is WPF for the interface and ClickOnce for deployment - but it's up for debate and we're definitely open to suggestions. Something else that's being floated is a model designer...Beyond that, we're looking at generation of ASP.NET websites and WinUI apps for database administration, along with support for several other ORM frameworks. Current roadmap is here. Lot's of opportunities!
Thanks again to everyone that helped get this out the door!
Attached are the slides and videos from the Wisconsin .NET User's Group launch of .NET 3.0 and Vista.
Download Net30Launch.zip
Also, here are links to the 2 videos I showed:
German Coastguard
Interview in Northern Iraq
A new beta has been released following on the heals of 031307. Most important in this release is a bug fix for an issue that popped up in 031307. Codus wasn't copying the Oracle.DataAccess.dll to the generated output folder, so depending on your environment, if you try running the compiled code you'll get an error stating that it can't find the Oracle.DataAccess.dll. (Thanks to Darren Sellner for the screen shot and bug report)
To solve this, simply copy the Oracle.DataAccess.dll from your Codus install folder to the generated output folder and you should be good to go. Or, you can download the new beta. :)
There are also two major additions in this beta, making it feature complete:
Throw on top of that a few minor bug fixes, and it's pretty close to gold! I'll be addressing bugs over the next 1-2 weeks, with the goal of a final release at the end of the month.
Latest Beta (under Current Development Release):
The latest release of Elementary, an advanced ORM framework, is available. Elementary is currently in the early stages, but is being used in several commercial environments with great success. All feedback so far has been very positive and the framework is quite stable, so a 1.0 release isn't very far off.
This release addresses several minor bugs. The three major bugs that were fixed:
The download is available here:
If you're using Elementary, shoot me an email and let me know what you think and what you'd like to see added!
The latest 1.4 beta build of Codus is now available. This build addresses the following:
What's new in 1.4?
What's left:
IMPORTANT:
The naming for the Sql Server database connection options have changed. When you click on a saved database connection that's using Sql Server, you'll get an error saying the key can't be found. That's because it's looking for "Sql Server" and there are now two options "Sql Server 2000" and "Sql Server 2005". Simply select the new Sql Server option that you want to use and you're good to go.
Latest download is available here (under Current Development Release):
I'm currently on track for releasing the final 1.4 version by the end of this month. Please try out the beta and provide feedback! Thanks to everyone that identified the items above and provided suggestions so far.
Here are the slides from last week's presentation to the Wisconsin Fox Valley .NET User's Group. The session covered SubSonic and MonoRail, with a brief mention of the Patterns and Practices Web Client Software Factory.
Download 022107.ppt | http://feeds.feedburner.com/AdapdevTechnologies | crawl-001 | en | refinedweb |
Integrating Azure Container Instances in AKS
In a previous blog post, I talked about how excellent the managed Kubernetes service is in Azure and in another blog post I spoke about Azure Container Instances. In this blog post, we will be combining them so that we get the best of both worlds.
We know that we can use ACI for some simple scenarios like task automation, CI/CD agents like VSTS agents (Windows or Linux), simple web servers and so on but it’s another thing that we need to manage. Even though that ACI has almost no strings attached, e.g. no VM management, custom resource sizing and fast startup, we still may want to control them from a single pane of glass.
ACI doesn’t provide you with auto-scaling, rolling upgrades, load balancing and affinity/anti-affinity, that’s the work of a container orchestrator. So if we want the best of both worlds, we need an ACI connector.
The ACI Connector is a virtual kubelet that get’s installed on your AKS cluster, and from there you can deploy containers just by merely referencing the node.
If you’re interested in the project, you can take a look here.
To install the ACI Connector, we need to cover some prerequisites.
The first thing that we need to do is to do is to create a service principal for the ACI connector. You can follow this document here on how to do it.
When you’ve created the SPN, grant it contributor rights on your AKS Resource Group and then continue with the setup.
I won’t be covering the Windows Subsystem for Linux or any other bash system as those have different prerequisites. What I will cover in this blog post is how to get started using the Azure Cloud Shell.
So pop open an Azure Cloud Shell and (assuming you already have an AKS cluster) get the credentials.
az aks get-credentials -g RG -n AKSNAME
After that, you will need to install helm and upgrade tiller. For that, you will run the following.
helm init helm init --upgrade
The reason that you need to initialize helm and upgrade tiller is not very clear to me but I believe that helm and tiller should be installed and upgraded to the latest version every time.
Once those are installed, you’re ready to install the ACI connector as a virtual kubelet. Azure CLI installs the connector using a helm chart. Type in the command below using the SPN you created.
az aks install-connector -g <AKS RG> -n <AKS name> --connector-name aciconnector --location westeurope --service-principal <applicationID> --client-secret <applicationSecret> --os-type both
As you can see the in command from above, I typed both for the –os-type. ACI supports Windows and Linux containers so there’s no reason not to get both 🙂
After the install, you can query the Kubernetes cluster for the ACI Connector.
kubectl --namespace=default get pods -l "app=aciconnector-windows-virtual-kubelet-for-aks" # Windows kubectl --namespace=default get pods -l "app=aciconnector-linux-virtual-kubelet-for-aks" # Linux
Now that the kubelet is installed, all you need to do is just to run kubectl -f create YAML file, and you’re done 🙂
If you want to target the ACI Connector with the YAML file, you need to reference a nodeName of virtual-kubelet-ACICONNECTORNAME-linux or windows.
apiVersion: apps/v1beta1 kind: Deployment metadata: name: nginx-deployment spec: replicas: 3 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 resources: requests: memory: 1G cpu: 250m nodeName: virtual-kubelet-aciconnector-linux
You run that example from above and the AKS cluster will provision an ACI for you.
What you should know
The ACI connector allows the Kubernetes cluster to connect to Azure and provision Container Instances for you. That doesn’t mean that it will provision the containers in the same VNET as the K8 is so you can do some burst processing or those types of workloads. This is let’s say an alpha concept which is being built upon and new ways of using it are being presented every day. I have been asked by people, what’s the purpose of this thing because I cannot connect to it, but the fact is that you cannot expect that much from a preview product. I have given suggestions on how to improve it, and I suggest you should too.
Well that’s it for today. As always have a good one! | https://florinloghiade.ro/tag/azure-container-instances/ | CC-MAIN-2021-04 | en | refinedweb |
Important: Please read the Qt Code of Conduct -
moc cpp generation fails with "No relevant classes found. No output generated."
I have a class D11 that inherits from class B1 that inherits QObject. B1 has declared Q_OBJECT, and so does D11. (Well, D11 tries to). But I'm running into the infamous
'undefined reference to vtable'error in the constructor and destructor of D11.
This issue has popped up many times before on these forums, and I have tried all the usual recommendations:
- The failing class is indeed in separate files (D11.h and D11.cpp).
- I have tried Build > Run qmake from QtCreator.
- Deleted the build dir, run qmake from QtCreator, and run build again.
The moc build step to generate moc_D11.cpp results in the following:
D11.h:0: Note: No relevant classes found. No output generated.
and indeed, moc_D11.cpp is an empty file.
I have many other files in my project with the exact same hierarchies:
QObject <-- B1 <-- D11
QObject <-- B1 <-- D12
QObject <-- B1 <-- D13
QObject <-- B2 <-- D21
None of these have any problems. In particular, D12 and D13 that derive from B1 and are (obviously) fairly similar to D11 are fine and have their moc_D12.cpp and moc_D13.cpp generated just fine.
This is being cross-compiled to an RPi, but not sure that that should matter. The moc command is fairly long, but I have checked it meticulously, switch-by-switch, between the versions that work (D12, D13) vs D11, which doesn't work.
Unfortunately the project is both large as well as proprietary, so I cannot post it - and of course a toy example is not going to show this (and/or would be easy to resolve). Still, if there is anything else I can try, I will welcome suggestions.
- jsulm Lifetime Qt Champion last edited by
Not sure if it will help, but here is an obfuscated version. D11.h:
#ifndef D11_H #define D11_H #include "b1.h" class B2; class D11 : public B1 { Q_OBJECT // <-- this is causing "undefined reference to vtable for D11" WHY!?? public: D11(const B2& s); ~D11() override; void processRawData(const QByteArray& etba) override; protected: void initFSMs() override; void setLock(byte prefix) override; void dropLock() override; void updateFsm1(byte prefix) override; void updateFsm2(byte prefix) override; bool updateFsm3(byte dataByte) override; bool extractVals(byte dataByte) override; void signExtend(int& val) override; signals: void signal1(); void signal2(); private: const int CONST1 = 0b1000'0000; const int CONST2 = 0b0100'0000; const int MASK = 0b0010'0000; // About 15-20 int and bool private members here... enum class Fsm1State { ...names here... }; enum class Fsm2State { ...names here... }; enum class Fsm3State { ...names here... }; Fsm1State mFsm1State; Fsm2State mFsm2State; Fsm3State mFsm3State; bool fn1(byte dataByte); bool fn2(byte dataByte); bool fn3(byte dataByte); bool fn4(byte dataByte); }; #endif // D11_H
D11.cpp:
#include "d11.h" #include "B2dir/b2.h" D11::D11(const B2& s) : B1(s) { initFSMs(); } D11::~D11() { qDebug("D11 dtor"); } void D11::initFSMs() { mFsm1State = Fsm1State::NAME1; mFsm2State = Fsm2State::NAME1; } void D11::processRawData(const QByteArray &etba) { for (byte dataByte : etba) { . . . } emit mBase2.b2signal(); } // Other vanilla member function definitions here...
B1.h:
#ifndef B1_H #define B1_H #include <QObject> ...other QIncludes here, like QByteArray etc... class B2; class B1 : public QObject { Q_OBJECT public: explicit B1(const B2& s, QObject *parent = nullptr); virtual ~B1(); using byte = unsigned char; signals: public slots: public: virtual void processRawData(const QByteArray& etba) = 0; protected: // some int/bool members here const B2& mBase2; virtual void initFSMs() = 0; virtual void setLock(byte dataByte) = 0; virtual void dropLock() = 0; virtual void updateFsm1(byte dataByte) = 0; virtual void updateFsm2(byte dataByte) = 0; virtual bool updateFsm3(byte dataByte) = 0; virtual bool extractVals(byte dataByte) = 0; virtual void signExtend(int& val) = 0; }; #endif // B1_H
B1.cpp:
#include "b1.h" #include "B2dir/b2.h" B1::B1(const B2& s, QObject *parent) : QObject(parent), mBase2(s) { qDebug("B1 abstract base class ctor"); } B1::~B1() { qDebug("B1 abstract base class dtor"); }
I'm happy to provide any specific additional information. I can comment out the
Q_OBJECTin D11 and of course it builds just fine.
Hi,
Did you re-run qmake after adding the Q_OBJECT macro ?
@sgaist yes, see my first post. In particular, I have tried deleting the build directory, rerunning qmake and rerunning the entire build after that.
Is there a verbose mode to the moc command that will elaborate on the
No relevant classes found?
Quite incredibly, it turned out that the moc error was because of moc parsing not being able to comprehend C++14 digit separators in the constants I had in the private section of my class! Removing the digit separators enabled the moc compilation to go through without errors! So this doesn't work:
private: const int CONST1 = 0b1000'0000;
This works:
private: const int CONST1 = 0b10000000;
I have found C++14 digit separator support to be abysmal within the Qt ecosystem, from Qt Creator to now moc. I will personally just stop using it till Qt 6 at least; hopefully it will improve by then :P
Opening a feature request will be a better idea, it will make the issue known to moc's developers.
- aha_1980 Lifetime Qt Champion last edited by
Please post a link to the report here too, so others can follow later. Thanks!
Remove (no clean) build directory and build project.
@SGaist @aha_1980
Bug report here:
I also found other bugs filed against moc that one would need to watch out for:
- moc does not support raw string literals:
- moc does not support (certain?) unicode files:
These will also result in the
No relevant classes founderror.
Thanks for sharing your additional findings ! | https://forum.qt.io/topic/105770/moc-cpp-generation-fails-with-no-relevant-classes-found-no-output-generated | CC-MAIN-2021-04 | en | refinedweb |
Spacemacs is an Emacs (configuration) distribution. It mainly consists of pre-configured sets of packages that are organized in layers (e.g. there is a
haskell layer). With Spacemacs you can relatively quickly get an "IDE experience" for GHC development.
Topics regarding Emacs configuration in general can be found here: Emacs
If you want to see, what you may get, there is a Docker-based showcase with a Spacemacs environment that is fully configured for GHC development: github.com/supersven/ghc-spacemacs-docker
ghcide (ghc.nix argument:
withIde = true;) is not supported by
ghc.nix: . Most of this page still applies, but you have to provide your own
ghcide and
bear binaries.
ghcide must have been built with exactly the same GHC version you use to build your GHC project.
Table of ContentsTable of Contents
- Table of Contents
- Prerequisites
- Haskell
- ghcide
- How to enable
- Troubleshooting
- C
- Historical
PrerequisitesPrerequisites
ghc.nixghc.nix
This page assumes that you are using
nix and
ghc.nix.
The installation of Nix depends on your system. Please see .
ghc.nix is "installed" by cloning it into your GHC source folder, e.g.
cd /home/sven/src/ghc git clone
Spacemacs on
develop branch
Support for the
lsp backend in the
haskell-layer is currently only available on the
develop-branch.
To get it, you need to check it out. The Spacemacs code usually resides in
~/.emacs.d.
cd ~/.emacs.d git checkout --track origin/develop git pull
If Spacemacs is already running, restart it and update all packages.
HaskellHaskell
ghcideghcide
ghcide implements the Language Server Protocol (LSP). It is a tool that provides IDE features like type checking, symbol navigation and information on hover.
In simple words: Emacs doesn't understand Haskell, ghcide does.
How to enableHow to enable
Get
ghcide via
ghc.nix
To use
ghcide you have to make sure that it's in your environment.
ghc.nix provides a parameter -
withIde - for this.
Later we'll see that we need it in a
nix-shell environment. So, add a
shell.nix file with
withIde = true.
./shell.nix:
import ./ghc.nix/default.nix { bootghc = "ghc865"; withIde = true; withHadrianDeps = true; cores = 8; withDocs = false; }
The other parameters are optional and only provided as examples that you can configure much more with
ghc.nix.
CachixCachix
You can save a lot of compilation time by using a pre-built ("cached")
ghcide.
To enable the
cachix cache for
ghcide:
nix-env -iA cachix -f cachix use ghcide-nix
Of course, you only need the first line if
cachix isn't already installed.
Configure Spacemacs to use
ghcide with
nix-shell
Configure two layers,
lsp and
haskell, to use
ghcide in a
nix-shell environment:
... ;; List of configuration layers to load. dotspacemacs-configuration-layers '( (lsp :variables default-nix-wrapper (lambda (args) (append (append (list "nix-shell" "-I" "." "--pure" "--command" ) (list (mapconcat 'identity args " ")) ) (list (nix-current-sandbox)) ) ) lsp-haskell-process-wrapper-function default-nix-wrapper lsp-haskell-process-path-hie "ghcide" lsp-haskell-process-args-hie '() ) (haskell :variables haskell-enable-hindent t haskell-completion-backend 'lsp haskell-process-type 'cabal-new-repl ) ...
And load the
nix-sandbox package on statup:
... ;; List of additional packages that will be installed without being ;; wrapped in a layer. If you need some configuration for these ;; packages, then consider creating a layer. You can also put the ;; configuration in `dotspacemacs/user-config'. ;; To use a local version of a package, use the `:location' property: ;; '(your-package :location "~/path/to/your-package/") ;; Also include the dependencies as they will not be resolved automatically. dotspacemacs-additional-packages '(nix-sandbox) ...
compiler/ and
hadrian/ are two distinct projects
Unfortunately
projectile recognizes GHC and hadrian as one project.
To make calls to
ghcide with different parameters, the distinction between GHC and hadrian is important.
Add an empty
hadrian/.projectile file:
touch hadrian/.projectile
Configure different
ghcide command line arguments
To test if
ghcide works, you can call it directly.
For GHC:
nix-shell --command "ghcide compiler"
You should see a lot of output and finally a success message:
... Files that worked: 469 Files that failed: 0 Done
For hadrian:
nix-shell --command "ghcide --cwd hadrian ."
Files that worked: 96 Files that failed: 0 Done
To configure different
ghcide parameters per source folder, we can use
.dir-locals.el files.
.dir-locals.el:
((nil (indent-tabs-mode . nil) (fill-column . 80) (buffer-file-coding-system . utf-8-unix)) (haskell-mode (lsp-haskell-process-args-hie . ("--cwd /home/sven/src/ghc compiler"))) )
(Replace
/home/sven/src/ghc with the path to your GHC source directory.)
hadrian/.dir-locals.el:
((haskell-mode (lsp-haskell-process-args-hie . ("--cwd /home/sven/src/ghc/hadrian ."))))
(Replace
/home/sven/src/ghc with the path to your GHC source directory.)
--cwd (Current Working Directory) makes sure that
ghcide runs on the root of the project and not in the directory of the file.
The settings for
indent-tabs-mode,
fill-column and
buffer-file-coding-system are those preferred by the GHC project. "dir-local" variables are inherited from parent directories to their childs.
StartingStarting
Please make sure that you open first a Haskell file under
hadrian/ and then a file under
compiler/. Otherwise Spacemacs will automatically assume, that files under
hadrian/ belong to the same workspace as
compiler/.
TroubleshootingTroubleshooting
This setup is pretty complicated. To find errors, you can check several layers.
I would propose this order:
- Nix - Can I correctly instantiate the nix-environment? Does it contain
ghcide?
- ghcide - Can
ghcidebe run on the command line?
- lsp-mode (Emacs) - Are there any error messages in the
lspbuffers?
NixNix
nix-shell --pure shell.nix --command "which ghcide"
ghcideghcide
nix-shell --pure shell.nix --command "ghcide compiler" nix-shell --pure shell.nix --command "ghcide --cwd hadrian ."
lsp-mode (Emacs)lsp-mode (Emacs)
Enable message tracingEnable message tracing
M-x customize-mode
lsp-mode
Menu entry: Lsp Server Trace
Increase response timeoutIncrease response timeout
M-x customize-mode
lsp-mode
Menu entry: Lsp Response Timeout
BuffersBuffers
*lsp-log**lsp-log*
Shows how
ghcide is called.
For example:
Command "nix-shell -I . --pure --command ghcide --lsp --cwd /home/sven/src/ghc compiler /home/sven/src/ghc/shell.nix" is present on the path. Found the following clients for /home/sven/src/ghc/compiler/simplCore/CoreMonad.hs: (server-id hie, priority 0) The following clients were selected based on priority: (server-id hie, priority 0) Command "nix-shell -I . --pure --command ghcide --lsp --cwd /home/sven/src/ghc/hadrian . /home/sven/src/ghc/hadrian/shell.nix" is present on the path. Found the following clients for /home/sven/src/ghc/hadrian/UserSettings.hs: (server-id hie, priority 0) The following clients were selected based on priority: (server-id hie, priority 0) Buffer switched - ignoring response. Method textDocument/hover
**lsp-log*: hie: [SESSION_NUMBER]**lsp-log*: hie: [SESSION_NUMBER]
If you've enabled message tracing (see above), these buffers contain all requests and responses of the Language Server Protocol regarding one session.
CC
There are three LSP backends for C to choose from:
clangd (default in Spacemacs),
ccls and
cquery.
The
cquery project seems to be abandoned.
Both,
clangd and
ccls (can) use a
compile_commands.json (JSON Compilation Database) file as configuration.
Because I (@supersven) got the best results with
ccls (it was able to handle header files better), we'll continue with it. But configuring
clangd should be very simple, too.
Install
ccls
nix-env -i ccls
Generate compile_commands.jsonGenerate compile_commands.json
nix-shell --command 'bear hadrian/build.sh -j12 --flavour=Devel2'
bear intercepts all calls to the C compiler. This way it can write a
compile_commands.json that contains all compilation arguments and flags needed for each C file.
The
--flavour doesn't really matter, but you need to run a full (re-)build. Run
hadrian/build.sh clean
before, if you've already built GHC.
Configure
c-c++ layer
In
.spacemacs:
;; List of configuration layers to load. dotspacemacs-configuration-layers '( ... (c-c++ :variables c-c++-backend 'lsp-ccls) ... )
For more details about LSP backend configuration, please see:
HistoricalHistorical
DanteDante
ghcide support is pretty good now and the project is gaining momentum. If you aren't sure that you want to use
dante, you probably want to use
ghcide (at least for GHC development).
The author of this section (@supersven) switched to
ghcide, so it might be outdated.
Description: This section is a bit special because it applies to a very specific setup: Using Spacemacs (an Emacs configuration distribution) with
dante-mode as editor and
nix-shell for building GHC.
The initial setup is a bit cumbersome, but you'll gain syntax highlighting, type checking / info and navigation ("Jump to Definition").
Dante is currently only available on the
develop branch of Spacemacs.
cd ~/.emacs.d git checkout develop
Create a file
.dir-locals.el in the root folder of the GHC project (e.g.
~/src/ghc/.dir-locals.el on my machine):
((haskell-mode (dante-repl-command-line . ("nix-shell" "--arg" "cores" "8" "--arg" "version" "8.9" "--arg" "withHadrianDeps" "true" "--arg" "bootghc" "\"ghc864\"" "--pure" "ghc.nix" "--run" "hadrian/ghci.sh"))))
As you easily recognize,
dante-repl-command-line is set to running
hadrian/ghci.sh in a
nix-shell environment. The
--args are how I use
ghc.nix, of course you can and should adjust them to your needs.
If you now open a Haskell file in the GHC project,
dante-mode should automatically start and use
nix-shell to call
hadrian/ghci.sh.
Troubleshooting
- Configure
dante-modeto print debug information in a separate buffer:
Meta+x customize-group
dante
- Try to run
hadrian/ghci.shwith
nix-shellmanually to see if this works. I.e. run
nix-shell [args omitted] ghc.nix --run hadrian/ghci.shin your shell.
ToDo: Some features of
dante-mode don't seem to work. Maybe using
utils/ghc-in-ghci/run.sh would lead to better results, but I haven't tested this, yet. | https://gitlab.haskell.org/ghc/ghc/-/wikis/spacemacs | CC-MAIN-2021-04 | en | refinedweb |
Kernel functions. More...
Kernel functions.
This namespace contains kernel functions, which evaluate some kernel function
for some arbitrary vectors
and
of the same dimension. The single restriction on the function
is that it must satisfy Mercer's condition:
for all square integrable functions
.
The kernels in this namespace all implement the KernelType policy. For more information, see The KernelType policy documentation. | https://mlpack.org/doc/mlpack-git/doxygen/namespacemlpack_1_1kernel.html | CC-MAIN-2021-04 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.